Programmable soft actuators show potential of soft robotics at TU Delft

Researchers at the Delft University of Technology in the Netherlands have developed highly programmable soft actuators that, similar to the human hand, combine soft and hard materials to perform complex movements. These materials have great potential for soft robots that can safely and effectively interact with humans and other delicate objects, said the TU Delft scientists.

“Robots are usually big and heavy. But you also want robots that can act delicately, for instance, when handling soft tissue inside the human body. The field that studies this issue, soft robotics, is now really taking off,” said Prof. Amir Zadpoor, who supervised the research presented in the July 8 issue of Materials Horizons.

“What you really want is something resembling the features of the human hand including soft touch, quick yet accurate movements, and power,” he said. “And that’s what our soft 3D-printed programmable materials strive to achieve.”

Tunability

Owing to their soft touch, soft robotics can safely and effectively interact with humans and other delicate objects. Soft programmable mechanisms are required to power this new generation of robots. Flexible mechanical metamaterials, working on the basis of mechanical instability, offer unprecedented functionalities programmed into their architected fabric that make them potentially very promising as soft mechanisms, said the TU Delft researchers.

“However, the tunability of the mechanical metamaterials proposed so far has been very limited,” said first author Shahram Janbaz.

Programmable soft actuators

“We now present some new designs of ultra-programmable mechanical metamaterials, where not only the actuation force and amplitude, but also the actuation mode could be selected and tuned within a very wide range,” explained Janbaz. “We also demonstrate some examples of how these soft actuators could be used in robotics, for instance as a force switch, kinematic controllers, and a pick-and-place end-effector.”

Soft actuators from TU Delft

A conventional robotic arm is modified using the developed soft actuators to provide soft touch during pick-and-place tasks. Source: TU Delft

Buckling

“The function is already incorporated in the material,” Zadpoor explained. “Therefore, we had to look deeper at the phenomenon of buckling. This was once considered the epitome of design failure, but has been harnessed during the last few years to develop mechanical metamaterials with advanced functionalities.”

“Soft robotics in general and soft actuators in particular could greatly benefit from such designer materials,” he added. “Unlocking the great potential of buckling-driven materials is, however, contingent on resolving the main limitation of the designs presented to date, namely the limited range of their programmability. We were able to calculate and predict higher modes of buckling and make the material predisposed to these higher modes.”

3D printing

“So, we present multi-material buckling-driven metamaterials with high levels of programmability,” said Janbaz. “We combined rational design approaches based on predictive computational models with advanced multi-material additive manufacturing techniques to 3D print cellular materials with arbitrary distributions of soft and hard materials in the central and corner parts of their unit cells.”

“Using the geometry and spatial distribution of material properties as the main design parameters, we developed soft mechanical metamaterials behaving as mechanisms whose actuation force and actuation amplitude could be adjusted,” he said.

Editor’s note: This article republished from TU Delft.

The post Programmable soft actuators show potential of soft robotics at TU Delft appeared first on The Robot Report.

KIST researchers teach robot to trap a ball without coding

KIST teaching

KIST’s research shows that robots can be intuitively taught to be flexible by humans rather than through numerical calculation or programming the robot’s movements. Credit: KIST

The Center for Intelligent & Interactive Robotics at the Korea Institute of Science and Technology, or KIST, said that a team led by Dr. Kee-hoon Kim has developed a way of teaching “impedance-controlled robots” through human demonstrations. It uses surface electromyograms of muscles and succeeded in teaching a robot to trap a dropped ball like a soccer player.

A surface electromyogram (sEMG) is an electric signal produced during muscle activation that can be picked up on the surface of the skin, said KIST, which is led by Pres. Byung-gwon Lee.

Recently developed impedance-controlled robots have opened up a new era of robotics based on the natural elasticity of human muscles and joints, which conventional rigid robots lack. Robots with flexible joints are expected to be able to run, jump hurdles and play sports like humans. However, the technology required to teach such robots to move in this manner has been unavailable until recently.

KIST uses human muscle signals to teach robots how to move

The KIST research team claimed to be the first in the world to develop a way of teaching new movements to impedance-controlled robots using human muscle signals. With this technology, which detects not only human movements but also muscle contractions through sEMG, it’s possible for robots to imitate movements based on human demonstrations.

Dr. Kee-hoon Kim’s team said it succeeded in using sEMG to teach a robot to quickly and adroitly trap a rapidly falling ball before it comes into contact with a solid surface or bounces too far to reach — similar to the skills employed by soccer players.

sEMG sensors were attached to a man’s arm, allowing him to simultaneously control the location and flexibility of the robot’s rapid upward and downward movements. The man then “taught” the robot how to trap a rapidly falling ball by giving a personal demonstration. After learning the movement, the robot was able to skillfully trap a dropped ball without any external assistance.

KIST movements

sEMG sensors attached to a man’s arm allowed him to control the location and flexibility of a robot’s rapid movements. Source: KIST

This research outcome, which shows that robots can be intuitively taught to be flexible by humans, has attracted much attention, as it was not accomplished through numerical calculation or programming of the robot’s movements. This study is expected to help advance the study of interactions between humans and robots, bringing us one step closer to a world in which robots are an integral part of our daily lives.

Kim said, “The outcome of this research, which focuses on teaching human skills to robots, is an important achievement in the study of interactions between humans and robots.”

Robots may care for you in old age—and your children will teach them

It's likely that before too long, robots will be in the home to care for older people and help them live independently. To do that, they'll need to learn how to do all the little jobs that we might be able to do without thinking. Many modern AI systems are trained to perform specific tasks by analysing thousands of annotated images of the action being performed. While these techniques are helping to solve increasingly complex problems, they still focus on very specific tasks and require lots of time and processing power to train.

Robots can play key roles in repairing our infrastructure


Pipeline inspection robot


I was on the phone recently with a large multinational corporate investor discussing the applications for robotics in the energy market. He expressed his frustration about the lack of products to inspect and repair active oil and gas pipelines, citing too many catastrophic accidents. His point was further endorsed by a Huffington Post article that reported in a twenty-year period such tragedies have led to 534 deaths, more than 2,400 injuries, and more than $7.5 billion in damages. The study concluded that an incident occurs every 30 hours across America’s vast transcontinental pipelines.

The global market for pipeline inspection robots is estimated to exceed $2 billion in the next six years, more than tripling today’s $600 million in sales. The Zion Market Research report states: “Robots are being used increasingly in various verticals in order to reduce human intervention from work environments that are dangerous … Pipeline networks are laid down for the transportation of oil and gas, drinking waters, etc. These pipelines face the problem of corrosion, aging, cracks, and various another type of damages…. As the demand for oil and gas is increasing across the globe, it is expected that the pipeline network will increase in length in the near future thereby increasing the popularity of the in-pipe inspection robots market.”

Industry consolidation plays key role

Another big indicator of this burgeoning industry is the growth of consolidation. In December 2017, Pure Technologies was purchased by New York-based Xylem for more than $500 million. Xylem was already a leader in smart technology solutions for water and wastewater management pump facilities. Its acquisition of Pure enabled the industrial company to expand its footprint into the oil and gas market. By pairing Pure’s digital inspection expertise with its mechatronics, the combined company is able to take a leading position in pipeline diagnostics.

Patrick Decker, Xylem president and chief executive, explained, “Pure’s solutions strongly complement the broader Xylem portfolio, particularly our recently acquired Visenti and Sensus solutions, creating a unique and disruptive platform of diagnostic, analytics and optimization solutions for clean and wastewater networks. Pure will also bring greater scale to our growing data analytics and software-as-a-service capabilities.”

According to estimates at the time of the merger, almost 25% of Pure’s business was in the oil and gas industry. Today, Pure offers a suite of products for above ground and inline inspections, as well as data management software. In addition to selling its machines, sensors and analytics to the energy sector, it has successfully deployed units in thousands of waterways globally.

This past February, Eddyfi (a leading provider of testing equipment) acquired Inuktun, a manufacturer of semi-autonomous robotic crawling systems. This was the sixth acquisition by fast-growing Eddyfi in less than three years. As Martin Thériault, Eddyfi’s CEO, elaborates: “We are making a significant bet that the combination of Inuktun robots with our sensors and instruments will meet the increasing needs from asset owners. Customers can now select from a range of standard Inuktun crawlers, cameras and controllers to create their own off-the-shelf, yet customized, solutions.”

Colin Dobell, president of Inuktun, echoed Thériault’s sentiments: “This transaction links us with one of the best! Our systems and technology are suitable to many of Eddyfi Technologies’ current customers and the combination of the two companies will strengthen our position as an industry leader and allow us to offer truly unique solutions by combining some of the industry’s best NDT [nondestructive testing] products with our mobile robotic solutions. The future opportunities are seemingly endless. It’s very exciting.” In addition to Xylem and Eddyfi, other entrants into this space include CUES, Envirosight, GE Inspection Robotics, IBAK Helmut Hunger, Medit (Fiberscope), RedZone Robotics, MISTRAS Group, RIEZLER Inspektions Systeme, and Honeybee Robotics.

Repairing lines with micro-robots

While most of the current technologies focus on inspection, the bigger opportunity could be in actively repairing pipelines with micro-bots. Last year, the government of the United Kingdom began a $35 million study with six universities to develop mechanical insect-like robots to automatically fix its large underground network. According to the government’s press release, the goal is to develop robots of one centimeter in size that will crawl, swim and quite possibly fly through water, gas and sewage pipes. The government estimates that underground infrastructure accounts for $6 billion annually in labor and business disruption costs.

One of the institutions charged with this endeavor is the University of Sheffield’s Department of Mechanical Engineering, led by Professor Kirill Horoshenkov. Horoshenkov notes that his mission is more than commercial: “Maintaining a safe and secure water and energy supply is fundamental for society but faces many challenges such as increased customer demand and climate change.”

Horoshenkov, a leader in acoustical technology, expands further on the research objectives of his team, “Our new research programme will help utility companies monitor hidden pipe infrastructure and solve problems quickly and efficiently when they arise. This will mean less disruption for traffic and general public. This innovation will be the first of its kind to deploy swarms of miniaturised robots in buried pipes together with other emerging in-pipe sensor, navigation and communication solutions with long-term autonomy.”

England is becoming a hotbed for robotic insects; last summer, Rolls-Royce shared with reporters its efforts to develop mechanical bugs to repair airplane engines. The engineers at the British aerospace giant were inspired by the research of Harvard professor Robert Wood and his ambulatory microrobots for search and rescue missions. James Kell of Rolls-Royce proclaims this could be a game changer: “They could go off scuttling around reaching all different parts of the combustion chamber. If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.”

Currently the Harvard robot is too large to buzz through jet engines, but Rolls-Royce is not waiting for the Boston-based scientists, as it has established with the University of Nottingham a Centre for Manufacturing and On-Wing Technologies “to design and build a range of bespoke prototype robots capable of performing jet engine repairs remotely.” Project lead Dragos Axinte is optimistic about the spillover effect of this work into the energy market: “The emergence of robots capable of replicating human interventions on industrial equipment can be coupled with remote control strategies to reduce the response time from several days to a few hours. As well as with any Rolls-Royce engine, our robots could one day be used in other industries such as oil, gas and nuclear.”

Argo AI, CMU developing autonomous vehicle research center


Argo AI

Argo AI autonomous vehicle. | Credit: Argo AI

Argo AI, a Pittsburgh-based autonomous vehicle company, has donated $15 million to Carnegie Mellon University (CMU) to fund a new research center. The Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research will “pursue advanced research projects to help overcome hurdles to enabling self-driving vehicles to operate in a wide variety of real-world conditions, such as winter weather or construction zones.”

Argo was founded in 2016 by a team with ties to CMU (more on that later). The five-year partnership between Argo and CMU will fund research into advanced perception and next-generation decision-making algorithms for autonomous vehicles. The center’s research will address a number of technical topics, including smart sensor fusion, 3D scene understanding, urban scene simulation, map-based perception, imitation and reinforcement learning, behavioral prediction and robust validation of software.

“We are thrilled to deepen our partnership with Argo AI to shape the future of self-driving technologies,” CMU President Farnam Jahanian said. “This investment allows our researchers to continue to lead at the nexus of technology and society, and to solve society’s most pressing problems.”

In February 2017, Ford announced that it was investing $1 billion over five years in Argo, combining Ford’s autonomous vehicle development expertise with Argo AI’s robotics experience. Earlier this month, Argo unveiled its third-generation test vehicle, a modified Ford Fusion Hybrid. Argo is now testing its autonomous vehicles in Detroit, Miami, Palo Alto, and Washington, DC.

Argo last week released its HD maps dataset, Argoverse. Argo said this will help the research community “compare the performance of different (machine learning – deep net) approaches to solve the same problem.”



“Argo AI, Pittsburgh and the entire autonomous vehicle industry have benefited from Carnegie Mellon’s leadership. It’s an honor to support development of the next-generation of leaders and help unlock the full potential of autonomous vehicle technology,” said Bryan Salesky, CEO and co-founder of Argo AI. “CMU and now Argo AI are two big reasons why Pittsburgh will remain the center of the universe for self-driving technology.”

Deva Ramanan, an associate professor in the CMU Robotics Institute, who also serves as machine learning lead at Argo AI, will be the center’s principal investigator. The center’s research will involve faculty members and students from across CMU. The center will give students access to the fleet-scale data sets, vehicles and large-scale infrastructure that are crucial for advancing self-driving technologies and that otherwise would be difficult to obtain.

CMU’s other autonomous vehicle partnerships

This isn’t the first autonomous vehicle company to see potential in CMU. In addition to Argo AI, CMU performs related research supported by General Motors, Uber and other transportation companies.

CMU’s partnership with Uber is perhaps its most high-profile autonomous vehicle collaboration, and for all the wrong reasons. In 2015, Uber announced a strategic partnership with CMU that included the creation of a research lab near campus aimed at kick-starting autonomous vehicle development.

But that relationship ended up gutting CMU’s National Robotics Engineering Center (NREC). More than a dozen CMU researchers, including the NREC’s director, left to work at the Uber Advanced Technologies Center.


Argo’s connection to CMU

As mentioned earlier, Argo’s co-founders have strong ties to CMU. Argo co-founder and president Peter Rander earned his master’s and PhD degrees at CMU. Salesky graduated from the University of Pittsburgh in 2002, but worked at the NREC for a number of years, managing a portfolio of the center’s largest commercial programs that included autonomous mining trucks for Caterpillar. In 2007, Salesky led software engineering for Tartan Racing, CMU’s winning entry in the DARPA Urban Challenge.

Salesky departed NREC and joined the Google self-driving car team in 2011 to continue the push toward making self-driving cars a reality. While at Google, he was responsible for the development and manufacture of the team’s hardware portfolio, which included self-driving sensors, computers, and several vehicle development programs.

Brett Browning, Argo’s VP of Robotics, received his Ph.D. (2000) and bachelor’s degree in electrical engineering and science from the University of Queensland. He was a senior faculty member at the NREC for 12-plus years, pursuing field robotics research in defense, oil and gas, mining and automotive applications.

Safe, low-cost, modular, self-programming robots

Many work processes would be almost unthinkable today without robots. But robots operating in manufacturing facilities have often posed risks to workers because they are not responsive enough to their surroundings. To make it easier for people and robots to work in close proximity in the future, Prof. Matthias Althoff of the Technical University of Munich (TUM) has developed a new system: IMPROV.

Elephant Robotics’ Catbot designed to be a smaller, easier to use cobot


Small and midsize enterprises are just beginning to benefit from collaborative robot arms or cobots, which are intended to be safer and easier to use than their industrial cousins. However, high costs and the difficulty of customization are still barriers to adoption. Elephant Robotics this week announced its Catbot, which it described as an “all in one safe robotic assistant.”

The cobot has six degrees of freedom, has a 600mm (23.6 in.) reach, and weighs 18kg (39.68 lb.). It has a payload capacity of 5kg (11 lb.). Elephant Robotics tested Catbot in accordance with international safety standards EN ISO 13848:2008 PL d and 10218-1: 2011-Clause 5.4.3 for human-machine interaction. A teach pendant and a power box are optional with Catbot.

Elephant Robotics CEO Joey Song studied in Australia. Upon returning home, he said, he “wanted to create a smaller in size robot that will be safe to operate and easy to program for any business owner with just a few keystrokes.”

Song founded Elephant Robotics in 2016 in Shenzhen, China, also known as “the Silicon Valley of Asia.” It joined the HAX incubator and received seed funding from Princeton, N.J.-based venture capital firm SOSV.

Song stated that he is committed to making human-robot collaboration accessible to any small business by eliminating the limitations of high prices and the requirement for highly skilled programming. Elephant Robotics also makes the Elephant and Panda series cobots for precise industrial automation.

Catbot includes voice controls

Repetitive tasks can lead to boredom, accidents, and poor productivity and quality, noted Elephant Robotics. Its cobots are intended to free human workers to be more creative. The company added that Catbot can save on costs and increase workloads.

Controlling robots, even collaborative robots, can be difficult. This is even harder for robots that need to be precise and safe. Elephant Robotics cited Facebook’s new PyRobot framework as an example of efforts to simplify robotic commands.

Catbot is built on an open platform so developers can share the skills they’ve developed, allowing others to use them or build on top of them.

Elephant Robotics claimed that it has made Catbot smarter and safer than other collaborative robots, offering “high efficiency and flexibility to various industries.” It includes force sensing and voice-command functions.

In addition, Catbot has an “all-in-one” design, cloud-based programming, and quick tool changing.

The catStore virtual shop offers a set of 20 basic skills. Elephant Robotics said that new skills could be developed for specific businesses, and they can be shared with other users on its open platform.


Catbot is designed to provide automated assistance to people in a variety of SMEs. Source: Elephant Robotics

Application areas

Elephant Robotics said its cobots are suitable for assembly, packaging, pick-and-place, and testing tasks, among others. Its arms work with a variety of end effectors. To increase its flexibility, the company said, Catbot is designed to be easy to program, from high-precision tasks to covering “hefty ground projects.”

According to Elephant Robotics, the Catbot can be used for painting, photography, and giving massages. It could also be a personal barista or play with humans in a table game. In addition, Catbot could act as a helping hand in research workshops or as an automatic screwdriver, said the company.

Elephant Robotics’ site said it serves the agricultural and food, automotive, consumer electronics, educational and research, household device, and machining markets.

Catbot is available now for preorder, with deliveries set to start in August 2019. Contact Elephant Robotics for more information on price or tech specifications at sales@elephantrobotics.com.

TRI tackles manipulation research for reliable, robust human-assist robots

Wouldn’t it be amazing to have a robot in your home that could work with you to put away the groceries, fold the laundry, cook your dinner, do the dishes, and tidy up before the guests come over? For some of us, a robot assistant – a teammate – might only be a convenience.

But for others, including our growing population of older people, applications like this could be the difference between living at home or in an assisted care facility. Done right, we believe these robots will amplify and augment human capabilities, allowing us to enjoy longer, healthier lives.

Decades of prognostications about the future – largely driven by science fiction novels and popular entertainment – have encouraged public expectations that someday home robots will happen. Companies have been trying for years to deliver on such forecasts and figure out how to safely introduce ever more capable robots into the unstructured home environment.

Despite this age of tremendous technological progress, the robots we see in homes to date are primarily vacuum cleaners and toys. Most people don’t realize how far today’s best robots are from being able to do basic household tasks. When they see heavy use of robot arms in factories or impressive videos on YouTube showing what a robot can do, they might reasonably expect these robots could be used in the home now.

Bringing robots into the home

Why haven’t home robots materialized as quickly as some have come to expect? One big challenge is reliability. Consider:

  • If you had a robot that could load dishes into the dishwasher for you, what if it broke a dish once a week?
  • Or, what if your child brings home a “No. 1 DAD!” mug that she painted at the local art studio, and after dinner, the robot discards that mug into the trash because it didn’t recognize it as an actual mug?

A major barrier to bringing robots into the home is a set of core unsolved problems in manipulation that prevent reliability. As I presented this week at the Robotics: Science and Systems conference, the Toyota Research Institute (TRI) is working on fundamental issues in robot manipulation to tackle these unsolved reliability challenges. We have been pursuing a unique combination of robotics capabilities focused on dexterous tasks in an unstructured environment.

Unlike the sterile, controlled and programmable environment of the factory, the home is a “wild west” – unstructured and diverse. We cannot expect lab tests to account for every different object that a robot will see in your home. This challenge is sometimes referred to as “open-world manipulation,” as a callout to “open-world” computer games.

Despite recent strides in artificial intelligence and machine learning, it is still very hard to engineer a system that can deal with the complexity of a home environment and guarantee that it will (almost) always work correctly.

TRI addresses the reliability gap

The demonstration video above shows how TRI is exploring the challenge of robustness to address the reliability gap. We are using a robot loading dishes in a dishwasher as an example task. Our goal is not to design a robot that loads the dishwasher; rather, we use this task as a means to develop tools and algorithms that can in turn be applied in many different applications.

Our focus is not on hardware, which is why we are using a factory robot arm in this demonstration rather than designing one that would be more appropriate for the home kitchen.

The robot in our demonstration uses stereo cameras mounted around the sink and deep learning algorithms to perceive objects in the sink. There are many robots out there today that can pick up almost any object — random object clutter clearing has become a standard benchmark robotics challenge. In clutter clearing, the robot doesn’t require much understanding about an object — perceiving the basic geometry is enough.

For example, the algorithm doesn’t need to recognize if the object is a plush toy, a toothbrush, or a coffee mug. Given this, these systems are also relatively limited in what they can do with those objects; for the most part, they can only pick up the objects and drop them in another location. In the robotics world, we sometimes refer to these robots as “pick and drop.”

Loading the dishwasher is actually significantly harder than what most roboticists are currently demonstrating, and it requires considerably more understanding about the objects. Not only does the robot have to recognize a mug or a plate or “clutter,” but it has to also understand the shape, position, and orientation of each object in order to place it accurately in the dishwasher.
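The difference can be made concrete in code. The following sketch is purely illustrative (all names are hypothetical; this is not TRI's software): a pick-and-drop system only needs a graspable point, while dishwasher loading also needs the object's category and pose in order to choose a placement.

```python
from dataclasses import dataclass, field
from typing import Tuple

@dataclass
class Detection:
    """Output of a hypothetical perception module."""
    grasp_point: Tuple[float, float, float]        # enough for pick-and-drop
    category: str = "unknown"                      # "mug", "plate", "silverware", or clutter
    pose: Tuple[float, ...] = field(default_factory=tuple)  # x, y, z, roll, pitch, yaw

def pick_and_drop(det: Detection) -> str:
    # Clutter clearing: geometry alone suffices; every object is handled the same way.
    return f"grasp at {det.grasp_point}, drop in bin"

def load_dishwasher(det: Detection) -> str:
    # Placement requires semantics (which rack?) and full pose (what orientation?).
    rack = {"plate": "bottom", "mug": "middle", "silverware": "top"}.get(det.category)
    if rack is None:
        return "move to discard bin"
    return f"place {det.category} in {rack} rack using pose {det.pose}"
```

Note that `pick_and_drop` never reads `category` or `pose`, which is precisely why clutter-clearing systems can be so general and yet so limited.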

TRI’s work in progress shows not only that this is possible, but that it can be done with robustness that allows the robot to continuously operate for hours without disruption.

Toyota Research Institute

Getting a grasp on household tasks

Our manipulation robot has a relatively simple hand — a two-fingered gripper. The hand can make relatively simple grasps on a mug, but its ability to pick up a plate is more subtle. Plates are large and may be stacked, so we have to execute a complex “contact-rich” maneuver that slides one gripper finger under and between plates in order to get a firm hold. This is a simple example of the type of dexterity that humans achieve easily, but that we rarely see in robust robotics applications.

Silverware can also be tricky: it is small and shiny, which makes it hard to see with a machine-learning camera. Plus, given that the robot hand is relatively large compared to the smaller sink, the robot occasionally needs to stop and nudge the silverware to the center of the sink in order to do the pick. Our system can also detect when an object is not a mug, plate, or silverware, label it as “clutter,” and move it to a “discard” bin.

Connecting all of these pieces is a sophisticated task planner, which is constantly deciding what task the robot should execute next. This task planner decides if it should pull out the bottom drawer of the dishwasher to load some plates, pull out the middle drawer for mugs, or pull out the top drawer for silverware.

Like the other components, we have made it resilient: if the drawer gets suddenly closed when it needed to be open, the robot will stop, put down the object on the countertop, and pull the drawer back out to try again. This response shows how different this capability is from that of a typical precision, repetitive factory robot, which is usually isolated from human contact and environmental randomness.
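The recovery behavior described above can be sketched roughly as follows. This is a hypothetical simplification, not TRI's planner; the drawer check is injected as a function so the loop can be exercised without a simulator.

```python
# Map each object category to its dishwasher drawer, as described in the text.
DRAWER_FOR = {"plate": "bottom", "mug": "middle", "silverware": "top"}

def place_object(category, drawer_open, log):
    """Place an object in its drawer, recovering whenever the drawer is closed.

    drawer_open(drawer) -> bool stands in for a perception check.
    log collects the robot's actions so the behavior can be inspected.
    """
    drawer = DRAWER_FOR[category]
    while not drawer_open(drawer):
        # Recovery: set the object down, reopen the drawer, then re-check.
        log.append(f"put {category} on countertop")
        log.append(f"pull out {drawer} drawer")
    log.append(f"place {category} in {drawer} drawer")
```

The key design point is that recovery is part of the loop itself, rather than an exceptional code path, so an unexpectedly closed drawer simply triggers another iteration.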


Simulation key to success

The cornerstone of TRI’s approach is the use of simulation. Simulation gives us a principled way to engineer and test systems of this complexity with incredible task diversity and machine learning and artificial intelligence components. It allows us to understand what level of performance the robot will have in your home with your mugs, even though we haven’t been able to test in your kitchen during our development.

An exciting achievement is that we have made great strides in making simulation robust enough to handle the visual and mechanical complexity of this dishwasher loading task and on closing the “sim to real” gap. We are now able to design and test in simulation and have confidence that the results will transfer to the real robot. At long last, we have reached a point where we do nearly all of our development in simulation, which has traditionally not been the case for robotic manipulation research.

We can run many more tests in simulation and more diverse tests. We are constantly generating random scenarios that will test the individual components of the dish loading plus the end-to-end performance.

Let me give you a simple example of how this works. Consider the task of extracting a single mug from the sink.  We generate scenarios where we place the mug in all sorts of random configurations, testing to find “corner cases” — rare situations where our perception algorithms or grasping algorithms might fail. We can vary material properties and lighting conditions. We even have algorithms for generating random, but reasonable, shapes of the mug, generating everything from a small espresso cup to a portly cylindrical coffee mug.

We conduct simulation testing through the night, and every morning we receive a report that gives us new failure cases that we need to address.
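The nightly testing loop can be sketched as follows. Everything here is an illustrative stand-in (the scenario parameters, the failure condition, and the function names are all hypothetical): randomize scenario parameters, run the pipeline, and collect the failures for a morning report.

```python
import random

def random_scenario(rng):
    # Randomize the properties the text mentions: mug shape, orientation, lighting.
    return {
        "mug_radius_cm": rng.uniform(2.5, 6.0),   # small espresso cup .. portly mug
        "mug_yaw_deg": rng.uniform(0.0, 360.0),
        "lighting_lux": rng.uniform(50.0, 1000.0),
    }

def run_pipeline(scenario):
    # Stand-in for the real simulated perception-and-grasp attempt. Purely for
    # illustration, assume very small mugs under dim light are the corner case.
    return not (scenario["mug_radius_cm"] < 3.0 and scenario["lighting_lux"] < 150.0)

def nightly_report(n_trials=1000, seed=0):
    """Run many randomized trials overnight; return the failing scenarios."""
    rng = random.Random(seed)
    return [s for s in (random_scenario(rng) for _ in range(n_trials))
            if not run_pipeline(s)]
```

The fixed seed makes each night's run reproducible, so a failure found in the report can be replayed exactly while debugging.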

Early on, those failures were relatively easy to find, and easy to fix. Sometimes they are failures of the simulator — something happened in the simulator that could never have happened in the real world — and sometimes they are problems in our perception or grasping algorithms. We have to fix all of these failures.

TRI robot

TRI is using an industrial robot for household tasks to test its algorithms. Source: TRI

As we continue down this road to robustness, the failures are getting more rare and more subtle. The algorithms that we use to find those failures also need to get more advanced. The search space is so huge, and the performance of the system so nuanced, that finding the corner cases efficiently becomes our core research challenge.

Although we are exploring this problem in the kitchen sink, the core ideas and algorithms are motivated by, and are applicable to, related problems such as verifying automated driving technologies.

‘Repairing’ algorithms

The next piece of our work focuses on the development of algorithms to automatically “repair” the perception algorithm or controller whenever we find a new failure case. Because we are using simulation, we can test our changes against not only this newly discovered scenario, but also make sure that our changes also work for all of the other scenarios that we’ve discovered in the preceding tests.

Of course, it’s not enough to fix this one test. We have to make sure we also do not break all of the other tests that passed before. It’s possible to imagine a not-so-distant future where this repair can happen directly in your kitchen, whereby if one robot fails to handle your mug correctly, then all robots around the world learn from that mistake.
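That accept-only-if-nothing-regresses rule can be expressed as a small gate, sketched here under the assumption (hypothetical, not from TRI) that each discovered failure scenario is kept in a growing regression suite:

```python
def accept_fix(candidate_pipeline, new_failure, regression_suite):
    """Accept a repaired pipeline only if it passes the new failure case
    and every previously collected scenario; on acceptance, the new case
    is added to the suite so it is never reintroduced."""
    if not candidate_pipeline(new_failure):
        return False   # the fix doesn't even handle the case it was written for
    if not all(candidate_pipeline(s) for s in regression_suite):
        return False   # the fix regresses an older, already-solved scenario
    regression_suite.append(new_failure)  # lock the new case in for the future
    return True
```

Because the suite only ever grows, each accepted repair raises the bar for the next one, which is what makes the remaining failures progressively rarer and more subtle.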

We are committed to achieving dexterity and reliability in open-world manipulation. Loading a dishwasher is just one example in a series of experiments we will be using at TRI to focus on this problem.

It’s a long journey, but ultimately it will produce capabilities that will bring more advanced robots into the home. When this happens, we hope that older adults will have the help they need to age in place with dignity, working with a robotic helper that will amplify their capabilities while allowing them more independence for longer.

Editor’s note: This post by Dr. Russ Tedrake, vice president of robotics research at TRI and a professor at the Massachusetts Institute of Technology, is republished with permission from the Toyota Research Institute.