Multi-tool mount enables 1 UR e-Series cobot to use 3 grippers


New Scale Robotics (NSR) has announced the first multi-tool mounting system for Universal Robots (UR) e-Series cobots. The NSR-MTM-3-URe Multi-Tool Mount (NSR-MTM) allows users to mount and control up to three grippers or other tools on one UR e-Series cobot.

The NSR-MTM System allows users to quickly set up and perform multiple processes with one robot. Benefits include:

  • Higher throughput in small part handling and inspection tasks.
  • Fewer large moves, for reduced cycle times without compromising safety.
  • The ability to automate more processes with fewer robots.

The NSR-MTM System includes both hardware and software. With its low mass and small size, the hardware is compatible even with UR’s smallest cobot, the UR3e. An integrated Freedrive button enables one-handed positioning of the arm when teaching and setting positions. The software enables fast setup and easy programming of up to three devices in one user interface on the UR teach pendant.

New Scale Robotics NSR-MTM-3-URe Multi-Tool Mount. | Credit: New Scale Robotics

NSR Devices Software is a new URCaps plugin for UR e-Series robots. Users can quickly add multiple tools to the single user interface, build tool processes with a few clicks, and run complex automation or inline inspection programs with ease. Other smart features include dynamic adjustment of the tool center of gravity depending on the mass of the picked object.
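
New Scale has not published the math behind the center-of-gravity feature, but the underlying calculation is standard: when a gripper picks a part, the combined center of gravity is the mass-weighted average of the tool stack’s CoG and the part’s CoG. A minimal sketch in Python, with all names and numbers illustrative rather than NSR’s implementation:

```python
# Illustrative sketch: combined payload and center of gravity (CoG) after a
# pick. Not New Scale's implementation; masses and offsets are example values.

def combined_payload(tool_mass_kg, tool_cog_m, part_mass_kg, part_cog_m):
    """Return total mass and mass-weighted CoG (x, y, z in the flange frame)."""
    total = tool_mass_kg + part_mass_kg
    cog = tuple(
        (tool_mass_kg * t + part_mass_kg * p) / total
        for t, p in zip(tool_cog_m, part_cog_m)
    )
    return total, cog

# Example: a 0.5 kg multi-tool stack picking a 0.2 kg part held 60 mm out.
mass, cog = combined_payload(0.5, (0.0, 0.0, 0.04), 0.2, (0.0, 0.0, 0.06))
print(mass, cog)  # 0.7 kg, with the CoG shifted toward the picked part
```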

The MTM hardware mounts to the UR robot tool flange with four screws and one connector to the internal tool port. Up to three tools are mounted on the MTM faces using the standard ISO 9409-1-50-4-M6 interfaces and M8 round connector. The MTM mount uses power and RS485 signals from the robot’s internal cables and slip rings. No external cables are required.
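
RS485 is a multi-drop bus, which is what lets three tools share the robot’s single internal cable run: each tool responds only to frames carrying its address. NSR’s actual framing is not public, so the sketch below shows only the generic addressed-bus pattern using the `pyserial` package; the port name, baud rate, frame layout, and command string are all assumptions.

```python
# Generic multi-drop RS485 pattern: several tools share one bus, and each
# responds only to frames carrying its address. NSR's real protocol is not
# public; the port, baud rate, and frame layout here are purely illustrative.
import serial

bus = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.1)

def send_to_tool(address, command):
    """Assumed frame layout: [address byte, payload length, payload]."""
    payload = command.encode("ascii")
    bus.write(bytes([address, len(payload)]) + payload)
    return bus.read(64)  # whatever the addressed tool replies

reply = send_to_tool(2, "GRIP 25.0")  # e.g., ask tool #2 to close to 25 mm
```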

Applications include small part picking, movement, measurement, data logging, sorting and assembly. Used with the small and precise grippers from New Scale Robotics, the NSR-MTM System allows increased productivity from multiple tools while still fitting in the tightest spaces.

NSR Devices Software is a new URCaps plugin for UR e-Series robots. | Credit: New Scale Robotics

The post Multi-tool mount enables 1 UR e-Series cobot to use 3 grippers appeared first on The Robot Report.

Electronic skin could give robots an exceptional sense of touch


The National University of Singapore developed the Asynchronous Coded Electronic Skin, an artificial nervous system that could give robots an exceptional sense of touch. | Credit: National University of Singapore.

Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, the human skin with the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by researchers at the National University of Singapore (NUS).

The new electronic skin system has ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.

The innovation, achieved by Assistant Professor Benjamin Tee and his team from NUS Materials Science and Engineering, was first reported in the prestigious scientific journal Science Robotics on 18 July 2019.

Faster than the human sensory nervous system

“Humans use our sense of touch to accomplish almost every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well,” explained Asst Prof Tee, who has been working on electronic skin technologies for over a decade in hopes of giving robots and prosthetic devices a better sense of touch.

Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. While the ACES electronic nervous system detects signals like the human sensory nervous system does, it is made up of a network of sensors connected via a single electrical conductor, unlike the nerve bundles in the human skin. It is also unlike existing electronic skins, which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up.

Elaborating on the inspiration, Asst Prof Tee, who also holds appointments in the NUS Electrical and Computer Engineering, NUS Institute for Health Innovation & Technology, N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems programme, said, “The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”

Related: Challenges of building haptic feedback for surgical robots

ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contact between different sensors in less than 60 nanoseconds – the fastest ever achieved for an electronic skin technology – even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.

The ACES platform can also be designed to achieve high robustness to physical damage, an important property for electronic skins because they come into frequent physical contact with the environment. Unlike the current system used to interconnect sensors in existing electronic skins, all the sensors in ACES can be connected to a common electrical conductor with each sensor operating independently. This allows ACES-enabled electronic skins to continue functioning as long as there is one connection between the sensor and the conductor, making them less vulnerable to damage.
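
The published ACES design gives each sensor a unique pulse signature and lets the receiver pick individual events out of the summed signal on the shared conductor. The sketch below illustrates that coding idea with pseudo-random ±1 signatures and correlation decoding; it is a conceptual toy, not the NUS team’s implementation.

```python
# Conceptual sketch of decoding sensor events that share one conductor, in the
# spirit of ACES: each sensor fires a unique pulse signature, and the receiver
# correlates the summed line signal against the known signatures.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, sig_len, line_len = 4, 64, 256
signatures = rng.integers(0, 2, (n_sensors, sig_len)) * 2 - 1  # +/-1 chips

line = np.zeros(line_len)
events = {0: 10, 2: 40, 3: 45}  # sensor id -> firing time (overlaps are fine)
for sensor, t in events.items():
    line[t:t + sig_len] += signatures[sensor]

# Correlation peaks mark which sensors fired and when, even though their
# pulses overlapped on the single shared wire.
for sensor in range(n_sensors):
    corr = np.correlate(line, signatures[sensor], mode="valid")
    t_hat = int(np.argmax(corr))
    if corr[t_hat] > 0.8 * sig_len:
        print(f"sensor {sensor} fired near t={t_hat}")
```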

The ACES developed by Asst. Professor Tee (left) and his team responds 1000 times faster than the human sensory nervous system. | Credit: National University of Singapore

Smart electronic skins for robots and prosthetics

ACES has a simple wiring system and remarkable responsiveness even with increasing numbers of sensors. These key characteristics will facilitate the scale-up of intelligent electronic skins for Artificial Intelligence (AI) applications in robots, prosthetic devices and other human machine interfaces.

Related: UT Austin Patent Gives Robots Ultra-Sensitive Skin (https://www.therobotreport.com/university-of-texas-austin-patent-gives-robots-ultra-sensitive-skin/)

“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices,” explained Asst Prof Tee. “ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes,” he added.

For instance, pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch.

Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.

Editor’s Note: This article was republished from the National University of Singapore.

The post Electronic skin could give robots an exceptional sense of touch appeared first on The Robot Report.

Artificial muscles based on MIT fibers could make robots more responsive

Artificial muscles from MIT achieve powerful pulling force

Artificial muscles based on powerful fiber contractions could advance robotics and prosthetics. Credit: Felice Frankel

CAMBRIDGE, Mass. — As a cucumber plant grows, it sprouts tightly coiled tendrils that seek out supports in order to pull the plant upward. This ensures the plant receives as much sunlight exposure as possible. Now, researchers at the Massachusetts Institute of Technology have found a way to imitate this coiling-and-pulling mechanism to produce contracting fibers that could be used as artificial muscles for robots, prosthetic limbs, or other mechanical and biomedical applications.

While many different approaches have been used for creating artificial muscles, including hydraulic systems, servo motors, shape-memory metals, and polymers that respond to stimuli, they all have limitations, including high weight or slow response times. The new fiber-based system, by contrast, is extremely lightweight and can respond very quickly, the researchers say. The findings are being reported today in the journal Science.

The new fibers were developed by MIT postdoc Mehmet Kanik and graduate student Sirma Örgüç, working with professors Polina Anikeeva, Yoel Fink, Anantha Chandrakasan, and C. Cem Taşan. The team also included MIT graduate student Georgios Varnavides, postdoc Jinwoo Kim, and undergraduate students Thomas Benavides, Dani Gonzalez, and Timothy Akintlio. They have used a fiber-drawing technique to combine two dissimilar polymers into a single strand of fiber.

Artificial muscle fiber at MIT. | Credit: Courtesy of the researchers, MIT

The key to the process is mating together two materials that have very different thermal expansion coefficients — meaning they have different rates of expansion when they are heated. This is the same principle used in many thermostats, for example, using a bimetallic strip as a way of measuring temperature. As the joined material heats up, the side that wants to expand faster is held back by the other material. As a result, the bonded material curls up, bending toward the side that is expanding more slowly.
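
The bending effect can be made quantitative. For two bonded layers of equal thickness and similar stiffness, the classic bimetallic-strip analysis (due to Timoshenko) reduces to a curvature of roughly κ = 3·Δα·ΔT / (2h), where Δα is the difference in expansion coefficients, ΔT the temperature change, and h the total thickness. A back-of-envelope sketch with illustrative numbers, not the MIT fiber’s actual material constants:

```python
# Back-of-envelope bimetal curvature, using the simplified Timoshenko result
# for two layers of equal thickness and equal elastic modulus:
#     kappa = 3 * d_alpha * dT / (2 * h)
# Material numbers below are illustrative, not the MIT fiber's actual values.

d_alpha = 150e-6   # difference in thermal expansion coefficients (1/K)
dT = 1.0           # temperature rise (K); the article notes ~1 C can trigger it
h = 100e-6         # total thickness of the bonded pair (m)

kappa = 3 * d_alpha * dT / (2 * h)   # curvature (1/m)
radius = 1 / kappa                   # bend radius (m)
print(f"curvature = {kappa:.2f} 1/m, bend radius = {radius * 1e3:.0f} mm")
# ~2.25 1/m -> ~440 mm radius per degree; pre-coiling the fiber amplifies this
# small per-degree bend into a large overall contraction.
```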

Using two different polymers bonded together, a very stretchable cyclic copolymer elastomer and a much stiffer thermoplastic polyethylene, Kanik, Örgüç and colleagues produced a fiber that, when stretched out to several times its original length, naturally forms itself into a tight coil, very similar to the tendrils that cucumbers produce.

Artificial muscles surprise

But what happened next actually came as a surprise when the researchers first experienced it. “There was a lot of serendipity in this,” Anikeeva recalled.

As soon as Kanik picked up the coiled fiber for the first time, the warmth of his hand alone caused the fiber to curl up more tightly. Following up on that observation, he found that even a small increase in temperature could make the coil tighten up, producing a surprisingly strong pulling force. Then, as soon as the temperature went back down, the fiber returned to its original length.

In later testing, the team showed that this process of contracting and expanding could be repeated 10,000 times “and it was still going strong,” Anikeeva said.

One of the reasons for that longevity, she said, is that “everything is operating under very moderate conditions,” including low activation temperatures. Just a 1-degree Celsius increase can be enough to start the fiber contraction.

The fibers can span a wide range of sizes, from a few micrometers (millionths of a meter) to a few millimeters (thousandths of a meter) in width, and can easily be manufactured in batches up to hundreds of meters long. Tests have shown that a single fiber is capable of lifting loads of up to 650 times its own weight. For these experiments on individual fibers, Örgüç and Kanik have developed dedicated, miniaturized testing setups.

Artificial muscle fiber test. | Credit: Courtesy of the researchers, MIT

The degree of tightening that occurs when the fiber is heated can be “programmed” by determining how much of an initial stretch to give the fiber. This allows the material to be tuned to exactly the amount of force needed and the amount of temperature change needed to trigger that force.

The fibers are made using a fiber-drawing system, which makes it possible to incorporate other components into the fiber itself. Fiber drawing is done by creating an oversized version of the material, called a preform, which is then heated to a specific temperature at which the material becomes viscous. It can then be pulled, much like pulling taffy, to create a fiber that retains its internal structure but is a small fraction of the width of the preform.

For testing purposes, the researchers coated the fibers with meshes of conductive nanowires. These meshes can be used as sensors to reveal the exact tension experienced or exerted by the fiber. In the future, these fibers could also include heating elements such as optical fibers or electrodes, providing a way of heating them internally without having to rely on any outside heat source to activate the contraction of the “muscle.”

Potential applications

Such artificial muscle fibers could find uses as actuators in robotic arms, legs, or grippers, and in prosthetic limbs, where their light weight and fast response times could provide a significant advantage.

Some prosthetic limbs today can weigh as much as 30 pounds, with much of the weight coming from actuators, which are often pneumatic or hydraulic; lighter-weight actuators could thus make life much easier for those who use prosthetics.

Credit: Courtesy of the researchers, MIT

“Such fibers might also find uses in tiny biomedical devices, such as a medical robot that works by going into an artery and then being activated,” Anikeeva said. “We have activation times on the order of tens of milliseconds to seconds,” depending on the dimensions.

To provide greater strength for lifting heavier loads, the fibers can be bundled together, much as muscle fibers are bundled in the body. The team successfully tested bundles of 100 fibers.

Through the fiber-drawing process, sensors could also be incorporated in the fibers to provide feedback on conditions they encounter, such as in a prosthetic limb. Örgüç said bundled muscle fibers with a closed-loop feedback mechanism could find applications in robotic systems where automated and precise control are required.

Kanik said that the possibilities for materials of this type are virtually limitless, because almost any combination of two materials with different thermal expansion rates could work, leaving a vast realm of possible combinations to explore. He added that this new finding was like opening a new window, only to see “a bunch of other windows” waiting to be opened.

“The strength of this work is coming from its simplicity,” he said.

The work was supported by the National Institute of Neurological Disorders and Stroke and the National Science Foundation.

Editor’s note: This article republished with permission from MIT News. 

The post Artificial muscles based on MIT fibers could make robots more responsive appeared first on The Robot Report.

Universal Robots launching 50 authorized training centers


Universal Robots is opening 50 Authorized Training Centers, 13 of which will be located in North America. | Credit: Universal Robots

Universal Robots (UR) is launching Authorized Training Centers (ATCs) that offer classes spanning basic to advanced programming of UR cobots. UR is planning 50 fully authorized ATCs worldwide, 13 of which will be in North America. The first few ATCs in the U.S. have already been authorized and are now offered by the following UR sales partners:

  • Advanced Control Solutions in Marietta, Ga.
  • HTE Technologies in St. Louis, Mo., and Lenexa, Kan.
  • Ralph W. Earl Company in Syracuse, N.Y.
  • Applied Controls in Malvern, Pa.

In addition to the ATCs hosted by UR partners, four training centers are also opening at UR’s offices in Ann Arbor, Mich.; Irving, Texas; Garden City, N.Y.; and Irvine, Calif.

UR’s certified trainers will conduct training modules that cover a range of core and advanced cobot programming skills, including cobot scripting, industrial communication, and interface usage. Small class sizes with student-centered objectives and hands-on practice with UR robots ensure that participants come away with valuable skills they can apply immediately in their workplace.

For class schedules and more information, visit the UR Academy site. The modules of the ATC program include:

Core: For any user of a UR cobot who has completed the online modules. Covers safety set-up, basic applications and flexible redeployment.

Advanced: For cobot users, technical sales people, and integrators with a practical need to optimize applications or find new ways of deploying UR cobots. Covers scripting, advanced uses of force control and TCP, conveyor tracking and performance review.

Industrial Communication: For users and developers who need to integrate cobots with third-party devices. Covers modbus TCP, FTP server, dashboard server, socket communication, Ethernet/IP and Profinet.

Interfaces: For users and developers who need in-depth knowledge on how to interface with UR cobots using script interfaces. Covers UR scripting, socket communication, client interfaces (ports 30001-30003), real-time data exchange and XML/RPC. (A minimal client-interface example appears after this list.)

Service & Troubleshooting: For users, technicians, and engineers wanting/needing a better understanding of the mechanical hardware used by UR cobots, how to diagnose issues and resolve them. Covers the configuration of the cobot arm, controller, and safety system as well as preventative maintenance, system troubleshooting, and replacement of parts.
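
As a taste of what the Interfaces module covers, the sketch below sends a single URScript command to a cobot over its secondary client interface (TCP port 30002). The robot IP address and the joint targets are placeholders; check UR’s client-interface documentation before running anything against real hardware.

```python
# Minimal sketch: sending one URScript command over the UR secondary client
# interface (TCP port 30002). ROBOT_IP and the joint targets are placeholders;
# verify against UR's client-interface documentation before use on a real arm.
import socket

ROBOT_IP = "192.168.1.100"   # placeholder address of the UR controller
PORT = 30002                 # secondary interface; 30001/30003 also take script

script = "movej([0.0, -1.57, 1.57, -1.57, -1.57, 0.0], a=1.0, v=0.5)\n"

with socket.create_connection((ROBOT_IP, PORT), timeout=5) as s:
    s.sendall(script.encode("ascii"))
```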

UR’s certified trainers will conduct training modules that cover a range of core and advanced cobot programming skills, including cobot scripting, industrial communication, and interface usage. | Credit: Universal Robots

“Now, current and potential customers can get in-person training, customizing their specific applications and needs,” said Stuart Shepherd, regional sales director of Universal Robots’ Americas division. “Not only are our partners excited about this opportunity, they’re virtually lining up to be the next rollout.”

“From a business perspective, being able to offer this type of training also improves our place in the market, ensuring that current and potential customers start to rely on us as automation experts,” said Cale Harbour, Vice President of Product Marketing at Advanced Control Solutions. “As our customers build their knowledge, they can deploy the technology faster and see the benefits to their production – and their bottom line – quicker. It’s a win-win for everybody involved.”

“Using this approach, we’ve expanded our role as supplier to assist with the application process as well,” said Marv Dixon, vice president of business development and sales, HTE Technologies. “The Training Center has also provided us with the perfect scenario in which we can introduce other products that our customers might not have otherwise considered, such as grippers and conveyors. With the Authorized Training Center distinction, we’ve become a resource that our customers can count on for up-to-date, accessible training and support.”

The post Universal Robots launching 50 authorized training centers appeared first on The Robot Report.

Automated system from MIT generates robotic actuators for novel tasks

An automated system developed by MIT researchers designs and 3D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. Credit: Subramanian Sundaram

CAMBRIDGE, Mass. — An automated system developed by researchers at the Massachusetts Institute of Technology designs and 3D prints complex robotic actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.

In a paper published in Science Advances, the researchers demonstrated the system by fabricating actuators that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. When it’s activated, it tilts at an angle and displays the famous Edvard Munch painting “The Scream.”

The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility or magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials.

Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.

“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” said first author Subramanian Sundaram, Ph.D. ’18, a former graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”

New robotic actuators mimic biology for efficiency

The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming.

“You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram said.

Joining Sundaram on the paper were Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Navigating the ‘combinatorial explosion’

Robotic actuators are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.

Adding to that complexity, new 3D-printing techniques can now use multiple materials to create one product. That means the design’s dimensionality becomes incredibly high.

“What you’re left with is what’s called a ‘combinatorial explosion,’ where you essentially have so many combinations of materials and properties that you don’t have a chance to evaluate every combination to create an optimal structure,” Sundaram said.

The researchers first customized three polymer materials with specific properties they needed to build their robotic actuators: color, magnetization, and rigidity. They ultimately produced a near-transparent rigid material, an opaque flexible material used as a hinge, and a brown nanoparticle material that responds to a magnetic signal. They plugged all that characterization data into a property library.

The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.

Initially, the system draws from the property library to randomly assign different materials to different voxels. Then, it runs a simulation to see if that arrangement portrays the two target images, straight on and at an angle. If not, it gets an error signal. That signal lets it know which voxels are on the mark and which should be changed.

Adding, removing, and shifting around brown magnetic voxels, for instance, will change the actuator’s angle when a magnetic field is applied. But, the system also has to consider how aligning those brown voxels will affect the image.
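
The paper describes a much more sophisticated optimization over roughly 5.5 million voxels, but the accept-if-better skeleton of such a search fits in a few lines. In this toy sketch the simulator is replaced by a stand-in mismatch count, and a random material swap is kept only when it reduces the error:

```python
# Toy sketch of the accept-if-better voxel search described above. The real
# system ray-traces ~5.5 million voxels; here "error" is a stand-in scoring
# function and the voxel grid is tiny.
import random

MATERIALS = ["clear", "flexible", "magnetic"]  # the three customized polymers

def error(grid, target):
    """Stand-in for the ray-traced image comparison: count mismatched voxels."""
    return sum(g != t for g, t in zip(grid, target))

random.seed(1)
target = [random.choice(MATERIALS) for _ in range(100)]  # ideal assignment
grid = [random.choice(MATERIALS) for _ in range(100)]    # random initial fill

best = error(grid, target)
for _ in range(10_000):
    i = random.randrange(len(grid))        # pick a voxel
    old = grid[i]
    grid[i] = random.choice(MATERIALS)     # propose a different material
    new = error(grid, target)
    if new < best:
        best = new                         # keep the improvement
    else:
        grid[i] = old                      # otherwise revert the swap
print("final mismatches:", best)           # converges toward 0
```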

MIT robotic actuator. | Credit: Subramanian Sundaram

Voxel by voxel

To compute the actuator’s appearances at each iteration, the researchers adopted a computer graphics technique called “ray-tracing,” which simulates the path of light interacting with objects. Simulated light beams shoot through the actuator at each column of voxels.

Actuators can be fabricated with more than 100 voxel layers, so a single column can contain more than 100 voxels, and different sequences of the materials in a column radiate a different shade of gray when flat or at an angle.

When the actuator is flat, for instance, the light beam may shine down on a column containing many brown voxels, producing a dark tone. But when the actuator tilts, the beam will shine on misaligned voxels. Brown voxels may shift away from the beam, while more clear voxels may shift into the beam, producing a lighter tone.

The system uses that technique to align dark and light voxel columns where they need to be in the flat and angled image. After 100 million or more iterations, and anywhere from a few to dozens of hours, the system will find an arrangement that fits the target images.

“We’re comparing what that [voxel column] looks like when it’s flat or when it’s tilted, to match the target images,” Sundaram said. “If not, you can swap, say, a clear voxel with a brown one. If that’s an improvement, we keep this new suggestion and make other changes over and over again.”
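
A toy version of that per-column comparison: treat each voxel as attenuating the simulated beam by a material-dependent factor, so the light surviving a column sets its gray tone. The transmittance values below are illustrative; the real renderer models actual optics.

```python
# Toy per-column shading in the spirit of the ray-tracing step: the light that
# survives a straight pass through a column is the product of per-voxel
# transmittances, so columns with more brown voxels render darker. The
# transmittance values are illustrative only.
TRANSMITTANCE = {"clear": 0.995, "flexible": 0.97, "brown": 0.80}

def column_gray(voxels):
    """Fraction of light surviving one voxel column (1.0 = white, 0.0 = black)."""
    light = 1.0
    for material in voxels:
        light *= TRANSMITTANCE[material]
    return light

flat_column = ["clear"] * 90 + ["brown"] * 10
tilted_column = ["clear"] * 99 + ["brown"] * 1   # brown voxels shifted off-beam
print(f"flat: {column_gray(flat_column):.2f}, "
      f"tilted: {column_gray(tilted_column):.2f}")  # dark when flat, lighter tilted
```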

To fabricate the actuators, the researchers built a custom 3D printer that uses a technique called “drop-on-demand.” Tubs of the three materials are connected to print heads with hundreds of nozzles that can be individually controlled. The printer fires a 30-micron-sized droplet of the designated material into its respective voxel location. Once the droplet lands on the substrate, it is solidified. In that way, the printer builds an object, layer by layer.

The work could be used as a stepping stone for designing larger structures, such as airplane wings, Sundaram says. Researchers, for instance, have similarly started breaking down airplane wings into smaller voxel-like blocks to optimize their designs for weight, lift, and other metrics.

“We’re not yet able to print wings or anything on that scale, or with those materials,” said Sundaram. “But I think this is a first step toward that goal.”

Editor’s note: This article republished with permission from MIT News.

The post Automated system from MIT generates robotic actuators for novel tasks appeared first on The Robot Report.

R-Series actuator from Hebi Robotics is ready for outdoor rigors

PITTSBURGH — What do both summer vacationers and field robots need to do? Get into the water. Hebi Robotics this week announced the availability of its R-Series actuators, which it said can enable engineers “to quickly create custom robots that can be deployed directly in wet, dirty, or outdoor environments.”

Hebi Robotics was founded in 2014 by Carnegie Mellon University professor and robotics pioneer Howie Choset. It makes hardware and software for developers to build robots for their specific applications. It also offers custom development services to make robots “simple, useful, and safe.”

Hebi’s team includes experts in robotics, particularly in motion control. The company has developed robotics tools for academic, aerospace, military, sewer inspection, and spaceflight users.

Robots can get wet and dirty with R-Series actuators

The R-Series actuator is built on Hebi’s X-Series platform. It is sealed to IP67 and is designed to be lightweight, compact, and energy-efficient. The series includes three models: the R8-3, which has continuous torque of 3 N-m and weighs 670 g; the R8-9, which has continuous torque of 8 N-m and weighs 685 g; and the R8-16, which has continuous torque of 16 N-m and weighs 715 g.

The R-Series actuator is sealed for wet and dirty environments. Source: Hebi Robotics

The actuators also include sensors that Hebi said “enable simultaneous control of position, velocity, and torque, as well as three-axis inertial measurement.”

In addition, the R-Series integrates a brushless motor, gear reduction, force sensing, encoders, and controls in a compact package, said Hebi. The actuators can run on 24-48V DC, include internal pressure sensors, and communicate via 100Mbps Ethernet.

On the software side, the R-Series has application programming interfaces (APIs) for MATLAB, the Robot Operating System (ROS), Python, C and C++, and C#, as well as support for Windows, Linux, and OS X.
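
Hebi’s APIs follow the same lookup/group/command pattern across languages. A minimal sketch using the `hebi-py` Python package is shown below; the module family and name are placeholders for however the actuator is configured on the network.

```python
# Minimal sketch of Hebi's lookup/group/command pattern via the hebi-py API.
# The family/name pair is a placeholder for however the actuator is configured.
import hebi
from time import sleep

lookup = hebi.Lookup()
sleep(2.0)  # give the lookup time to discover modules on the network

group = lookup.get_group_from_names(['R-Series'], ['Actuator-1'])
if group is None:
    raise RuntimeError('actuator not found on the network')

command = hebi.GroupCommand(group.size)
feedback = hebi.GroupFeedback(group.size)

command.position = [1.0]        # target position, radians
group.send_command(command)

group.get_next_feedback(reuse_fbk=feedback)
print(feedback.position, feedback.velocity, feedback.effort)  # simultaneous sensing
```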

According to Hebi Robotics, the R-Series actuators will be available this autumn, and it is accepting pre-orders at 10% off the list prices. The actuator costs $4,500, and kits range from $20,000 to $36,170, depending on the number of degrees of freedom of the robotic arm. Customers should inquire about pricing for the hexapod kit.

The post R-Series actuator from Hebi Robotics is ready for outdoor rigors appeared first on The Robot Report.

Cassie bipedal robot a platform for tackling locomotion challenges

Working in the Dynamic Autonomy and Intelligent Robotics lab at the University of Pennsylvania, Michael Posa (right) and graduate student Yu-Ming Chen use Cassie to help develop better algorithms that can help robots move more like people. | Credit: Eric Sucar

What has two legs, no torso, and hangs out in the basement of the University of Pennsylvania’s Towne Building?

It’s Cassie, a dynamic bipedal robot, a recent addition to Michael Posa’s Dynamic Autonomy and Intelligent Robotics (DAIR) Lab. Built by Agility Robotics, a company in Albany, Oregon, Cassie offers Posa and his students the chance to create and test the locomotion algorithms they’re developing on a piece of equipment that’s just as cutting-edge as their ideas.

“We’re really excited to have it. It offers us capabilities that are really unlike anything else on the commercial market,” says Posa, a mechanical engineer in the School of Engineering and Applied Science. “There aren’t many options that exist, and this means that every single lab that wants to do walking research doesn’t have to spend three years building its own robot.”

Having Cassie lets Posa’s lab members spend all their time working to solve the huge challenge of designing algorithms so that robots can walk and navigate across all kinds of terrain and circumstances.

“What we have is a system really designed for dynamic locomotion,” he says. “We get very natural speed in terms of leg motions, like picking up a foot and putting it down somewhere else. For us, it’s a really great system.”

“It offers us capabilities that are really unlike anything else on the commercial market,” Posa says about Cassie. | Credit: Eric Sucar

Why do the legs matter? Because they dramatically expand the possibilities of what a robot can do. “You can imagine how legged robots have a key advantage over wheeled robots in that they are able to go into unstructured environments. They can go over relatively rough terrain, into houses, up a flight of stairs. That’s where a legged robot excels,” Posa says. “This is useful in all kinds of applications, including basic exploration, but also things like disaster recovery and inspection tasks. That’s what’s drawing a lot of industry attention these days.”

Of course, walking over different terrain or up a curb, step, or other incline dramatically increases what a robot has to do to stay upright. Consider what happens when you walk: Bump into something with your elbow, and your body has to reverse itself to avoid knocking it over, as well as stabilize itself to avoid falling in the opposite direction.

Related: Ford package delivery tests combine autonomous vehicles, bipedal robots

A robot has to be told to do all of that – which is where Posa’s algorithms come in, starting from where Cassie’s feet go down as it takes each step.

“Even with just legs, you have to make all these decisions about where you’re going to put your feet,” he says. “It’s one of those decisions that’s really very difficult to handle because everything depends on where and when you’re going to put your feet down, and putting that foot down creates an impact: You shift your weight, which changes your balance, and so on.



“This is a discrete event that happens quickly. From a computational standpoint, that’s one of the things we really struggle with—how do we handle these contact events?”

Then there’s the issue of how to model what you want to tell the robot to do. Simple modeling considers the robot as a point moving in space rather than, for example, a machine with six joints in its leg. But of course, the robot isn’t a point, and working with those models means sacrificing capability. Posa’s lab is trying to build more sophisticated models that, in turn, make the robot move more smoothly.

“We’re interested in the sort of middle ground, this Goldilocks regime between ‘this robot has 12 different motors’ and ‘this robot is a point in space,'” he says.
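
To make the “robot is a point in space” end of that spectrum concrete, the classic linear inverted pendulum model treats the whole robot as a point mass at constant height z, with horizontal dynamics ẍ = (g/z)(x − p) relative to the foot position p. The sketch below integrates that model forward; it is a textbook abstraction shown for illustration, not the DAIR Lab’s model.

```python
# The "robot is a point" extreme: linear inverted pendulum dynamics
#     xdd = (g / z) * (x - p)
# where x is the point-mass position, z its constant height, and p the stance
# foot. A standard textbook abstraction for footstep planning, sketched here
# for illustration; it is not the DAIR Lab's model.
G, Z, DT = 9.81, 0.9, 0.01   # gravity (m/s^2), CoM height (m), timestep (s)

def lipm_step(x, xd, p, steps):
    """Integrate the point mass forward while balancing over foot position p."""
    for _ in range(steps):
        xdd = (G / Z) * (x - p)
        xd += xdd * DT
        x += xd * DT
    return x, xd

x, xd = 0.0, 0.3             # start above the foot, moving forward
x, xd = lipm_step(x, xd, p=0.0, steps=30)   # fall away from the stance foot
print(f"after 0.3 s: x={x:.3f} m, xd={xd:.3f} m/s -> time to pick a new foothold")
```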

Related: 2019 the Year of Legged Robots

Cassie’s predecessor was called ATRIAS, an acronym for “assume the robot is a sphere.” ATRIAS allowed for more sophisticated models and more ability to command the robot, but was still too simple, Posa says. “The real robot is always different than a point or sphere. The question is where should our models live on this spectrum, from very simple to very complicated?”

Two graduate students in the DAIR Lab have been working on the algorithms, testing them in simulation and then, finally, on Cassie. Most of the work is virtual, since Cassie is really for testing the pieces that pass the simulation test.

“You write the code there,” says Posa, gesturing at a computer across the lab, “and then you flip a switch and you’re running it with the real robot. In general, if it doesn’t work in the simulator, it’s not going to work in the real world.”

Graduate students, including Chen (left), work on designing new algorithms and running computer simulations before testing them on Cassie. | Credit: Eric Sucar

On the computer, the researchers can take more risks, says graduate student Yu-Ming Chen. “We don’t break the robot in simulation,” he says, chuckling.

So what happens when you take these legs for a spin? The basic operation involves a marching type of step, as Cassie’s metal feet clang against the floor. But even as the robot makes these simple motions, it’s easy to see how the joints and parts work together to make a realistic-looking facsimile of a legged body from the waist down.

With Cassie as a platform, Posa says he’s excited to see how his team can push locomotion research forward.

“We want to design algorithms to enable robots to interact with the world in a safe and productive fashion,” he says. “We want [the robot] to walk in a way that is efficient, energetically, so it can travel long distances, and walk in a way that’s safe for both the robot and the environment.”

Editor’s Note: This article was republished from the University of Pennsylvania.

MIT ‘walking motor’ could help robots assemble complex structures


Years ago, MIT Professor Neil Gershenfeld had an audacious thought. Struck by the fact that all the world’s living things are built out of combinations of just 20 amino acids, he wondered: Might it be possible to create a kit of just 20 fundamental parts that could be used to assemble all of the different technological products in the world?

Gershenfeld and his students have been making steady progress in that direction ever since. Their latest achievement, presented this week at an international robotics conference, consists of a set of five tiny fundamental parts that can be assembled into a wide variety of functional devices, including a tiny “walking” motor that can move back and forth across a surface or turn the gears of a machine.

Previously, Gershenfeld and his students showed that structures assembled from many small, identical subunits can have numerous mechanical properties. Next, they demonstrated that a combination of rigid and flexible part types can be used to create morphing airplane wings, a longstanding goal in aerospace engineering. Their latest work adds components for movement and logic, and will be presented at the International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS) in Helsinki, Finland, in a paper by Gershenfeld and MIT graduate student Will Langford.

New approach to building robots

Their work offers an alternative to today’s approaches to constructing robots, which largely fall into one of two types: custom machines that work well but are relatively expensive and inflexible, and reconfigurable ones that sacrifice performance for versatility. In the new approach, Langford came up with a set of five millimeter-scale components, all of which can be attached to each other by a standard connector. These parts include the previous rigid and flexible types, along with electromagnetic parts, a coil, and a magnet. In the future, the team plans to make these out of still smaller basic part types.

Using this simple kit of tiny parts, Langford assembled them into a novel kind of motor that moves an appendage in discrete mechanical steps, which can be used to turn a gear wheel, and a mobile form of the motor that turns those steps into locomotion, allowing it to “walk” across a surface in a way that is reminiscent of the molecular motors that move muscles. These parts could also be assembled into hands for gripping, or legs for walking, as needed for a particular task, and then later reassembled as those needs change. Gershenfeld refers to them as “digital materials,” discrete parts that can be reversibly joined, forming a kind of functional micro-LEGO.

The new system is a significant step toward creating a standardized kit of parts that could be used to assemble robots with specific capabilities adapted to a particular task or set of tasks. Such purpose-built robots could then be disassembled and reassembled as needed in a variety of forms, without the need to design and manufacture new robots from scratch for each application.

Robots working in confined spaces

Langford’s initial motor has an ant-like ability to lift seven times its own weight. But if greater forces are required, many of these parts can be added to provide more oomph. Or if the robot needs to move in more complex ways, these parts could be distributed throughout the structure. The size of the building blocks can be chosen to match their application; the team has made nanometer-sized parts to make nanorobots, and meter-sized parts to make megarobots. Previously, specialized techniques were needed at each of these length scale extremes.

“One emerging application is to make tiny robots that can work in confined spaces,” Gershenfeld says. Some of the devices assembled in this project, for example, are smaller than a penny yet can carry out useful tasks.

To build in the “brains,” Langford has added part types that contain millimeter-sized integrated circuits, along with a few other part types to take care of connecting electrical signals in three dimensions.

The simplicity and regularity of these structures make it relatively easy for their assembly to be automated. To do that, Langford has developed a novel machine that’s like a cross between a 3-D printer and the pick-and-place machines that manufacture electronic circuits, but unlike either of those, this one can produce complete robotic systems directly from digital designs. Gershenfeld says this machine is a first step toward the project’s ultimate goal of “making an assembler that can assemble itself out of the parts that it’s assembling.”


Editor’s Note: This article was republished from MIT News.


The post MIT ‘walking motor’ could help robots assemble complex structures appeared first on The Robot Report.

Waymo self-driving cars OK’d to carry passengers in California


Waymo’s self-driving cars can now carry passengers in California. | Credit: Waymo

Waymo has been testing its self-driving cars in California for years. Now Alphabet’s self-driving car division has been granted a permit to carry passengers in the Golden State. Waymo is now part of California’s Autonomous Vehicle Passenger Service pilot program, joining AutoX Technologies, Pony.ai and Zoox.

The permit, which was granted by the California Public Utilities Commission (CPUC), requires a Waymo safety operator to be behind the wheel at all times and doesn’t allow Waymo to charge riders. The permit is good for three years.

“The CPUC allows us to participate in their pilot program, giving Waymo employees the ability to hail our vehicles and bring guests on rides within our South Bay territory,” Waymo said in a statement. “This is the next step in our path to eventually expand and offer more Californians opportunities to access our self-driving technology, just as we have gradually done with Waymo One in Metro Phoenix.”

Waymo also received an exemption from the CPUC that allows it to use a third-party company to contract out safety operators. Waymo said all safety operators go through a proprietary driver training program. In a letter requesting the exemption, Waymo said that while its “team of test drivers will include some full-time Waymo employees, operating and scaling a meaningful pilot requires a large group of drivers who are more efficiently engaged through Waymo’s experienced and specialized third-party staffing providers.”

Waymo self-driving taxi service coming to California?

Of course, this permit opens the door for Waymo to eventually offer an autonomous taxi service in California. But a Waymo spokesperson said there was no timetable for rolling out a self-driving taxi-like service in California. For now, the Waymo service will be limited to its employees and their guests in the Silicon Valley area.

Waymo One, a commercial self-driving service, launched in December 2018 in Phoenix, Ariz. It has been offering rides to more than 400 volunteer testers. Waymo recently announced a partnership with Lyft under which it will deploy 10 autonomous vehicles in the coming months that will be available through the Lyft app. There will be safety drivers behind the wheel in this partnership, too.

Calif. Autonomous Vehicle Disengagements 2018

| Company | Disengagements per 1,000 miles (2018) | Miles per disengagement (2018) | Miles driven (2018) | Miles per disengagement (2017) |
| --- | --- | --- | --- | --- |
| Waymo | 0.09 | 11,017 | 1,271,587 | 5,595.95 |
| GM Cruise | 0.19 | 5,204.9 | 447,621 | 1,254.06 |
| Zoox | 0.52 | 1,922.8 | 30,764 | 282.96 |
| Nuro | 0.97 | 1,028.3 | 24,680 | -- |
| Pony.ai | 0.98 | 1,022.3 | 16,356 | -- |
| Nissan | 4.75 | 210.5 | 5,473 | 208.36 |
| Baidu | 4.86 | 205.6 | 18,093 | 41.06 |
| AIMotive | 4.96 | 201.6 | 3,428 | -- |
| AutoX | 5.24 | 190.8 | 22,710 | -- |
| Roadstar.AI | 5.70 | 175.3 | 7,539 | -- |
| WeRide/JingChi | 5.71 | 173.5 | 15,440.80 | -- |
| Aurora | 10.01 | 99.9 | 32,858 | -- |
| Drive.ai | 11.91 | 83.9 | 4,616.69 | 43.59 |
| PlusAI | 18.40 | 54.4 | 10,816 | -- |
| Nullmax | 22.40 | 44.6 | 3,036 | -- |
| Phantom AI | 48.20 | 20.7 | 4,149 | -- |
| NVIDIA | 49.73 | 20.1 | 4,142 | 4.63 |
| SF Motors | 90.56 | 11 | 2,561 | -- |
| Telenav | 166.67 | 6.0 | 30 | 32 |
| BMW | 219.51 | 4.6 | 41 | -- |
| CarOne/Udelv | 260.27 | 3.8 | 219 | -- |
| Toyota | 393.70 | 2.5 | 381 | -- |
| Qualcomm | 416.63 | 2.4 | 240.02 | -- |
| Honda | 458.33 | 2.2 | 168 | -- |
| Mercedes Benz | 682.52 | 1.5 | 1,749.39 | 1.29 |
| SAIC | 829.61 | 1.2 | 634.03 | -- |
| Apple | 871.65 | 1.1 | 79,745 | -- |
| Uber | 2,608.46 | 0.4 | 26,899 | -- |

Waymo’s track record in California

According to the California Department of Motor Vehicles (DMV), Waymo had the best-performing autonomous vehicles in the state for the second consecutive year. Some have said the DMV’s tracking method is too vague and has allowed companies to avoid reporting certain events.

Nonetheless, Waymo’s self-driving cars experienced one disengagement every 11,017 miles. That performance marks a 50 percent reduction in the rate and a 96 percent increase in the average miles traveled between disengagements compared to the 2017 numbers. In 2016, Waymo had one disengagement every 5,128 miles. Waymo also drove significantly more miles, up from 352,000 miles in 2017 to 1.2 million miles in 2018, which makes the performance even more impressive.
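
Those percentages follow directly from the Waymo row of the table above; a quick arithmetic check:

```python
# Sanity-checking the reported improvements from the table's Waymo row.
miles_per_diseng_2018 = 11_017
miles_per_diseng_2017 = 5_595.95

rate_2018 = 1000 / miles_per_diseng_2018   # disengagements per 1,000 miles
rate_2017 = 1000 / miles_per_diseng_2017

print(f"rate reduction: {1 - rate_2018 / rate_2017:.0%}")
# ~49%, matching the reported "50 percent reduction"
print(f"miles-per-disengagement gain: "
      f"{miles_per_diseng_2018 / miles_per_diseng_2017 - 1:.0%}")
# ~97%, in line with the reported "96 percent increase"
```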

Waymo is also working on autonomous trucks. It has hired 13 former employees from Anki, the once-popular consumer robotics company that closed down. Anki co-founder and CEO Boris Sofman was hired as director of engineering, head of trucking, at Waymo.

The post Waymo self-driving cars OK’d to carry passengers in California appeared first on The Robot Report.