Programmable soft actuators show potential of soft robotics at TU Delft

Researchers at the Delft University of Technology in the Netherlands have developed highly programmable soft actuators that, similar to the human hand, combine soft and hard materials to perform complex movements. These materials have great potential for soft robots that can safely and effectively interact with humans and other delicate objects, said the TU Delft scientists.

“Robots are usually big and heavy. But you also want robots that can act delicately, for instance, when handling soft tissue inside the human body. The field that studies this issue, soft robotics, is now really taking off,” said Prof. Amir Zadpoor, who supervised the research presented in the July 8 issue of Materials Horizons.

“What you really want is something resembling the features of the human hand including soft touch, quick yet accurate movements, and power,” he said. “And that’s what our soft 3D-printed programmable materials strive to achieve.”

Tunability

Owing to their soft touch, soft robots can safely and effectively interact with humans and other delicate objects. Soft programmable mechanisms are required to power this new generation of robots. Flexible mechanical metamaterials, which work on the basis of mechanical instability, offer unprecedented functionalities programmed into their architected fabric that make them very promising candidates for soft mechanisms, said the TU Delft researchers.

“However, the tunability of the mechanical metamaterials proposed so far has been very limited,” said first author Shahram Janbaz.

Programmable soft actuators

“We now present some new designs of ultra-programmable mechanical metamaterials, where not only the actuation force and amplitude, but also the actuation mode could be selected and tuned within a very wide range,” explained Janbaz. “We also demonstrate some examples of how these soft actuators could be used in robotics, for instance as a force switch, kinematic controllers, and a pick-and-place end-effector.”


A conventional robotic arm is modified using the developed soft actuators to provide soft touch during pick-and-place tasks. Source: TU Delft

Buckling

“The function is already incorporated in the material,” Zadpoor explained. “Therefore, we had to look deeper at the phenomenon of buckling. This was once considered the epitome of design failure, but has been harnessed during the last few years to develop mechanical metamaterials with advanced functionalities.”

“Soft robotics in general and soft actuators in particular could greatly benefit from such designer materials,” he added. “Unlocking the great potential of buckling-driven materials is, however, contingent on resolving the main limitation of the designs presented to date, namely the limited range of their programmability. We were able to calculate and predict higher modes of buckling and make the material predisposed to these higher modes.”
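For a concrete sense of what “higher modes of buckling” means, the classic Euler formula for a pinned-pinned column gives the critical load of mode n as P_n = (nπ)²EI/L²: each higher mode buckles at a higher load, with more waves along the beam. The sketch below evaluates the first three modes for invented soft-polymer parameters, purely as an illustration of the textbook relation, not the paper’s actual designs.

```python
import numpy as np

# Euler buckling of a pinned-pinned elastic column: the critical load for
# mode n is P_n = (n * pi)**2 * E * I / L**2. Higher modes buckle at higher
# loads, with more waves along the beam. All parameter values here are
# invented for illustration.

E = 2.0e6              # Young's modulus of a soft polymer, Pa (assumed)
L = 0.05               # column length, m (assumed)
b, h = 0.005, 0.001    # rectangular cross-section, m (assumed)
I = b * h**3 / 12.0    # second moment of area

for n in range(1, 4):
    P_n = (n * np.pi)**2 * E * I / L**2
    print(f"mode {n}: critical load = {P_n * 1000:.3f} mN")
```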

3D printing

“So, we present multi-material buckling-driven metamaterials with high levels of programmability,” said Janbaz. “We combined rational design approaches based on predictive computational models with advanced multi-material additive manufacturing techniques to 3D print cellular materials with arbitrary distributions of soft and hard materials in the central and corner parts of their unit cells.”

“Using the geometry and spatial distribution of material properties as the main design parameters, we developed soft mechanical metamaterials behaving as mechanisms whose actuation force and actuation amplitude could be adjusted,” he said.

Editor’s note: This article was republished from TU Delft.


KIST researchers teach robot to trap a ball without coding


KIST’s research shows that robots can be intuitively taught to be flexible by humans rather than through numerical calculation or programming the robot’s movements. Credit: KIST

The Center for Intelligent & Interactive Robotics at the Korea Institute of Science and Technology, or KIST, said that a team led by Dr. Kee-hoon Kim has developed a way of teaching “impedance-controlled robots” through human demonstrations. Using surface electromyograms of the demonstrator’s muscles, the team succeeded in teaching a robot to trap a dropped ball like a soccer player.

A surface electromyogram (sEMG) is an electric signal produced during muscle activation that can be picked up on the surface of the skin, said KIST, which is led by Pres. Byung-gwon Lee.

Recently developed impedance-controlled robots have opened up a new era of robotics based on the natural elasticity of human muscles and joints, which conventional rigid robots lack. Robots with flexible joints are expected to be able to run, jump hurdles and play sports like humans. However, the technology required to teach such robots to move in this manner has been unavailable until recently.

KIST uses human muscle signals to teach robots how to move

The KIST research team claimed to be the first in the world to develop a way of teaching new movements to impedance-controlled robots using human muscle signals. With this technology, which detects not only human movements but also muscle contractions through sEMG, it’s possible for robots to imitate movements based on human demonstrations.

Dr. Kee-hoon Kim’s team said it succeeded in using sEMG to teach a robot to quickly and adroitly trap a rapidly falling ball before it comes into contact with a solid surface or bounces too far to reach — similar to the skills employed by soccer players.

sEMG sensors were attached to a man’s arm, allowing him to simultaneously control the location and flexibility of the robot’s rapid upward and downward movements. The man then “taught” the robot how to trap a rapidly falling ball by giving a personal demonstration. After learning the movement, the robot was able to skillfully trap a dropped ball without any external assistance.
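To make the teaching mechanism concrete, here is a minimal sketch of sEMG-modulated impedance control for a single joint: the rectified, smoothed muscle signal scales the joint stiffness, so a tense demonstrator commands a stiff robot and a relaxed one commands a compliant robot. This illustrates the general technique only; KIST’s actual controller and signal processing are not detailed in this article.

```python
# Minimal sketch of sEMG-modulated impedance control for one robot joint,
# illustrating the general technique (not KIST's actual controller): the
# rectified, low-pass-filtered sEMG signal scales the joint stiffness.

def emg_envelope(raw_emg, prev_env, alpha=0.05):
    """Rectify and smooth a raw sEMG sample into an activation in [0, 1]."""
    return (1 - alpha) * prev_env + alpha * min(abs(raw_emg), 1.0)

def impedance_torque(q, qd, q_ref, qd_ref, activation,
                     k_min=5.0, k_max=80.0, d=2.0):
    """tau = K(a) * (q_ref - q) + D * (qd_ref - qd), with stiffness K
    interpolated from the muscle activation a."""
    k = k_min + activation * (k_max - k_min)
    return k * (q_ref - q) + d * (qd_ref - qd)

# The demonstrator tenses up as the ball approaches, stiffening the joint.
env = 0.0
for raw, q, qd in [(0.1, 0.0, 0.0), (0.6, 0.1, 0.5), (0.9, 0.3, 1.2)]:
    env = emg_envelope(raw, env)
    tau = impedance_torque(q, qd, q_ref=0.4, qd_ref=0.0, activation=env)
    print(f"activation={env:.2f}  torque={tau:.2f} N*m")
```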


sEMG sensors attached to a man’s arm allowed him to control the location and flexibility of a robot’s rapid movements. Source: KIST

This research outcome, which shows that robots can be intuitively taught to be flexible by humans, has attracted much attention, as it was not accomplished through numerical calculation or programming of the robot’s movements. This study is expected to help advance the study of interactions between humans and robots, bringing us one step closer to a world in which robots are an integral part of our daily lives.

Kim said, “The outcome of this research, which focuses on teaching human skills to robots, is an important achievement in the study of interactions between humans and robots.”

Robots can play key roles in repairing our infrastructure


Pipeline inspection robot

I was on the phone recently with a large multinational corporate investor discussing the applications for robotics in the energy market. He expressed his frustration about the lack of products to inspect and repair active oil and gas pipelines, citing too many catastrophic accidents. His point was endorsed by a Huffington Post article reporting that, over a twenty-year period, such tragedies have led to 534 deaths, more than 2,400 injuries, and more than $7.5 billion in damages. The study concluded that an incident occurs every 30 hours across America’s vast transcontinental pipelines.

The global market for pipeline inspection robots is estimated to exceed $2 billion in the next six years, more than tripling today’s $600 million in sales. The Zion Market Research report states: “Robots are being used increasingly in various verticals in order to reduce human intervention from work environments that are dangerous … Pipeline networks are laid down for the transportation of oil and gas, drinking waters, etc. These pipelines face the problem of corrosion, aging, cracks, and various other types of damage…. As the demand for oil and gas is increasing across the globe, it is expected that the pipeline network will increase in length in the near future thereby increasing the popularity of the in-pipe inspection robots market.”

Industry consolidation plays key role

Another big indicator of this burgeoning industry is the growth of consolidation. In December 2017, Pure Technologies was purchased by New York-based Xylem for more than $500 million. Xylem was already a leader in smart technology solutions for water and wastewater management pump facilities. Its acquisition of Pure enabled the industrial company to expand its footprint into the oil and gas market. By combining Pure’s digital inspection expertise with its own mechatronics, the combined companies are able to take a leading position in pipeline diagnostics.

Patrick Decker, Xylem president and chief executive, explained, “Pure’s solutions strongly complement the broader Xylem portfolio, particularly our recently acquired Visenti and Sensus solutions, creating a unique and disruptive platform of diagnostic, analytics and optimization solutions for clean and wastewater networks. Pure will also bring greater scale to our growing data analytics and software-as-a-service capabilities.”

According to estimates at the time of the merger, almost 25% of Pure’s business was in the oil and gas industry. Today, Pure offers a suite of products for above ground and inline inspections, as well as data management software. In addition to selling its machines, sensors and analytics to the energy sector, it has successfully deployed units in thousands of waterways globally.

This past February, Eddyfi, a leading provider of testing equipment, acquired Inuktun, a manufacturer of semi-autonomous crawling robots. This was the sixth acquisition by fast-growing Eddyfi in less than three years. As Martin Thériault, Eddyfi’s CEO, elaborated: “We are making a significant bet that the combination of Inuktun robots with our sensors and instruments will meet the increasing needs from asset owners. Customers can now select from a range of standard Inuktun crawlers, cameras and controllers to create their own off-the-shelf, yet customized, solutions.”

Colin Dobell, president of Inuktun, echoed Thériault’s sentiments: “This transaction links us with one of the best! Our systems and technology are suitable to many of Eddyfi Technologies’ current customers, and the combination of the two companies will strengthen our position as an industry leader and allow us to offer truly unique solutions by combining some of the industry’s best NDT [nondestructive testing] products with our mobile robotic solutions. The future opportunities are seemingly endless. It’s very exciting.” In addition to Xylem and Eddyfi, other entrants into this space include CUES, Envirosight, GE Inspection Robotics, IBAK Helmut Hunger, Medit (Fiberscope), RedZone Robotics, MISTRAS Group, RIEZLER Inspektions Systeme, and Honeybee Robotics.

Repairing lines with micro-robots

While most of the current technologies focus on inspection, the bigger opportunity could be in actively repairing pipelines with micro-bots. Last year, the government of the United Kingdom began a $35 million study with six universities to develop mechanical insect-like robots to automatically fix its large underground network. According to the government’s press release, the goal is to develop robots of one centimeter in size that will crawl, swim and quite possibly fly through water, gas and sewage pipes. The government estimates that underground infrastructure accounts for $6 billion annually in labor and business disruption costs.

One of the institutions charged with this endeavor is the University of Sheffield’s Department of Mechanical Engineering, led by Professor Kirill Horoshenkov. Horoshenkov said his mission is more than commercial: “Maintaining a safe and secure water and energy supply is fundamental for society but faces many challenges such as increased customer demand and climate change.”

Horoshenkov, a leader in acoustical technology, expanded further on the research objectives of his team: “Our new research programme will help utility companies monitor hidden pipe infrastructure and solve problems quickly and efficiently when they arise. This will mean less disruption for traffic and the general public. This innovation will be the first of its kind to deploy swarms of miniaturised robots in buried pipes together with other emerging in-pipe sensor, navigation and communication solutions with long-term autonomy.”

England is becoming a hotbed for robotic insects; last summer, Rolls-Royce shared with reporters its efforts in developing mechanical bugs to repair airplane engines. The engineers at the British aerospace giant were inspired by the research of Harvard professor Robert Wood, whose ambulatory microrobots are designed for search and rescue missions. James Kell of Rolls-Royce said this could be a game changer: “They could go off scuttling around reaching all different parts of the combustion chamber. If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.”

Currently the Harvard robot is too large to buzz through jet engines, but Rolls-Royce is not waiting for the Harvard scientists: it has established with the University of Nottingham a Centre for Manufacturing and On-Wing Technologies “to design and build a range of bespoke prototype robots capable of performing jet engine repairs remotely.” Project lead Dragos Axinte is optimistic about the spillover effect of this work into the energy market: “The emergence of robots capable of replicating human interventions on industrial equipment can be coupled with remote control strategies to reduce the response time from several days to a few hours. As well as with any Rolls-Royce engine, our robots could one day be used in other industries such as oil, gas and nuclear.”

Argo AI, CMU developing autonomous vehicle research center



Argo AI autonomous vehicle. | Credit: Argo AI

Argo AI, a Pittsburgh-based autonomous vehicle company, has donated $15 million to Carnegie Mellon University (CMU) to fund a new research center. The Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research will “pursue advanced research projects to help overcome hurdles to enabling self-driving vehicles to operate in a wide variety of real-world conditions, such as winter weather or construction zones.”

Argo was founded in 2016 by a team with ties to CMU (more on that later). The five-year partnership between Argo and CMU will fund research into advanced perception and next-generation decision-making algorithms for autonomous vehicles. The center’s research will address a number of technical topics, including smart sensor fusion, 3D scene understanding, urban scene simulation, map-based perception, imitation and reinforcement learning, behavioral prediction and robust validation of software.

“We are thrilled to deepen our partnership with Argo AI to shape the future of self-driving technologies,” CMU President Farnam Jahanian said. “This investment allows our researchers to continue to lead at the nexus of technology and society, and to solve society’s most pressing problems.”

In February 2017, Ford announced that it was investing $1 billion over five years in Argo, combining Ford’s autonomous vehicle development expertise with Argo AI’s robotics experience. Earlier this month, Argo unveiled its third-generation test vehicle, a modified Ford Fusion Hybrid. Argo is now testing its autonomous vehicles in Detroit, Miami, Palo Alto, and Washington, DC.

Argo last week released its HD maps dataset, Argoverse. Argo said this will help the research community “compare the performance of different (machine learning – deep net) approaches to solve the same problem.”



“Argo AI, Pittsburgh and the entire autonomous vehicle industry have benefited from Carnegie Mellon’s leadership. It’s an honor to support development of the next-generation of leaders and help unlock the full potential of autonomous vehicle technology,” said Bryan Salesky, CEO and co-founder of Argo AI. “CMU and now Argo AI are two big reasons why Pittsburgh will remain the center of the universe for self-driving technology.”

Deva Ramanan, an associate professor in the CMU Robotics Institute, who also serves as machine learning lead at Argo AI, will be the center’s principal investigator. The center’s research will involve faculty members and students from across CMU. The center will give students access to the fleet-scale data sets, vehicles and large-scale infrastructure that are crucial for advancing self-driving technologies and that otherwise would be difficult to obtain.

CMU’s other autonomous vehicle partnerships

This isn’t the first autonomous vehicle company to see potential in CMU. In addition to Argo AI, CMU performs related research supported by General Motors, Uber and other transportation companies.

Its partnership with Uber is perhaps CMU’s most high-profile autonomous vehicle partnership, and for all the wrong reasons. In 2015, Uber announced a strategic partnership with CMU that included the creation of a research lab near campus aimed at kick-starting autonomous vehicle development.

But that relationship ended up gutting CMU’s National Robotics Engineering Center (NREC). More than a dozen CMU researchers, including the NREC’s director, left to work at the Uber Advanced Technologies Center.


Argo’s connection to CMU

As mentioned earlier, Argo’s co-founders have strong ties to CMU. Argo co-founder and President Peter Rander earned his master’s and PhD degrees at CMU. Salesky graduated from the University of Pittsburgh in 2002 but worked at the NREC for a number of years, managing a portfolio of the center’s largest commercial programs, including autonomous mining trucks for Caterpillar. In 2007, Salesky led software engineering for Tartan Racing, CMU’s winning entry in the DARPA Urban Challenge.

Salesky departed NREC and joined the Google self-driving car team in 2011 to continue the push toward making self-driving cars a reality. While at Google, he was responsible for the development and manufacture of the team’s hardware portfolio, which included self-driving sensors, computers, and several vehicle development programs.

Brett Browning, Argo’s VP of Robotics, received his Ph.D. (2000) and bachelor’s degree in electrical engineering and science from the University of Queensland. He was a senior faculty member at the NREC for 12-plus years, pursuing field robotics research in defense, oil and gas, mining and automotive applications.

Elephant Robotics’ Catbot designed to be a smaller, easier to use cobot


Small and midsize enterprises are just beginning to benefit from collaborative robot arms, or cobots, which are intended to be safer and easier to use than their industrial cousins. However, high costs and the difficulty of customization are still barriers to adoption. Elephant Robotics this week announced its Catbot, which it described as an “all in one safe robotic assistant.”

The cobot has six degrees of freedom, a 600mm (23.6 in.) reach, and a weight of 18kg (39.68 lb.). It has a payload capacity of 5kg (11 lb.). Elephant Robotics said it tested Catbot in accordance with the international safety standards EN ISO 13849-1:2008 (PL d) and ISO 10218-1:2011, Clause 5.4.3, for human-machine interaction. A teach pendant and a power box are optional with Catbot.

Elephant Robotics CEO Joey Song studied in Australia. Upon returning home, he said, he “wanted to create a smaller in size robot that will be safe to operate and easy to program for any business owner with just a few keystrokes.”

Song founded Elephant Robotics in 2016 in Shenzhen, China, also known as “the Silicon Valley of Asia.” It joined the HAX incubator and received seed funding from Princeton, N.J.-based venture capital firm SOSV.

Song said that he is committed to making human-robot collaboration accessible to any small business by eliminating the limitations of high prices and requirements for highly skilled programming. Elephant Robotics also makes the Elephant and Panda series cobots for precise industrial automation.

Catbot includes voice controls

Repetitive tasks can lead to boredom, accidents, and poor productivity and quality, noted Elephant Robotics. Its cobots are intended to free human workers to be more creative. The company added that Catbot can save on costs and take on increased workloads.

Controlling robots, even collaborative robots, can be difficult. This is even harder for robots that need to be precise and safe. Elephant Robotics cited Facebook’s new PyRobot framework as an example of efforts to simplify robotic commands.
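For a sense of what such simplification looks like, PyRobot’s published LoCoBot interface (as of its 2019 release) reduces a pick-style arm motion to a handful of calls. The snippet below follows that documented API but requires a configured LoCoBot to actually run; it is shown only for flavor.

```python
# A pick-style motion in PyRobot's LoCoBot interface, per its 2019 docs.
# Running this requires a configured LoCoBot robot and its ROS stack.
from pyrobot import Robot

robot = Robot('locobot')       # bring up the LoCoBot driver stack
robot.arm.go_home()            # move the arm to its home configuration
robot.gripper.open()
# Move the 5-joint arm to a target configuration, with motion planning.
robot.arm.set_joint_positions([0.4, 0.3, 0.0, -0.6, 0.0], plan=True)
robot.gripper.close()          # grasp whatever sits under the end effector
```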

Catbot is built on an open platform so developers can share the skills they’ve developed, allowing others to use them or build on top of them.

Elephant Robotics claimed that it has made Catbot smarter and safer than other collaborative robots, offering “high efficiency and flexibility to various industries.” It includes force sensing and voice-command functions.

In addition, Catbot has an “all-in-one” design, cloud-based programming, and quick tool changing.

The catStore virtual shop offers a set of 20 basic skills. Elephant Robotics said that new skills could be developed for specific businesses, and they can be shared with other users on its open platform.


Catbot is designed to provide automated assistance to people in a variety of SMEs. Source: Elephant Robotics

Application areas

Elephant Robotics said its cobots are suitable for assembly, packaging, pick-and-place, and testing tasks, among others. Its arms work with a variety of end effectors. To increase its flexibility, the company said, Catbot is designed to be easy to program, from high-precision tasks to covering “hefty ground projects.”

According to Elephant Robotics, the Catbot can be used for painting, photography, and giving massages. It could also be a personal barista or play table games with humans. In addition, Catbot could act as a helping hand in research workshops or as an automatic screwdriver, said the company.

Elephant Robotics’ site said it serves the agricultural and food, automotive, consumer electronics, educational and research, household device, and machining markets.

Catbot is available now for preorder, with deliveries set to start in August 2019. Contact Elephant Robotics for more information on price or tech specifications at sales@elephantrobotics.com.

TRI tackles manipulation research for reliable, robust human-assist robots

Wouldn’t it be amazing to have a robot in your home that could work with you to put away the groceries, fold the laundry, cook your dinner, do the dishes, and tidy up before the guests come over? For some of us, a robot assistant – a teammate – might only be a convenience.

But for others, including our growing population of older people, applications like this could be the difference between living at home or in an assisted care facility. Done right, we believe these robots will amplify and augment human capabilities, allowing us to enjoy longer, healthier lives.

Decades of prognostications about the future – largely driven by science fiction novels and popular entertainment – have encouraged public expectations that someday home robots will happen. Companies have been trying for years to deliver on such forecasts and figure out how to safely introduce ever more capable robots into the unstructured home environment.

Despite this age of tremendous technological progress, the robots we see in homes to date are primarily vacuum cleaners and toys. Most people don’t realize how far today’s best robots are from being able to do basic household tasks. When they see heavy use of robot arms in factories or impressive videos on YouTube showing what a robot can do, they might reasonably expect these robots could be used in the home now.

Bringing robots into the home

Why haven’t home robots materialized as quickly as some have come to expect? One big challenge is reliability. Consider:

  • If you had a robot that could load dishes into the dishwasher for you, what if it broke a dish once a week?
  • Or, what if your child brings home a “No. 1 DAD!” mug that she painted at the local art studio, and after dinner, the robot discards that mug into the trash because it didn’t recognize it as an actual mug?

A major barrier to bringing robots into the home is a set of core unsolved problems in manipulation that prevent reliability. As I presented this week at the Robotics: Science and Systems conference, the Toyota Research Institute (TRI) is working on fundamental issues in robot manipulation to tackle these unsolved reliability challenges. We have been pursuing a unique combination of robotics capabilities focused on dexterous tasks in an unstructured environment.

Unlike the sterile, controlled and programmable environment of the factory, the home is a “wild west” – unstructured and diverse. We cannot expect lab tests to account for every different object that a robot will see in your home. This challenge is sometimes referred to as “open-world manipulation,” as a callout to “open-world” computer games.

Despite recent strides in artificial intelligence and machine learning, it is still very hard to engineer a system that can deal with the complexity of a home environment and guarantee that it will (almost) always work correctly.

TRI addresses the reliability gap

Above is a demonstration video showing how TRI is exploring the challenge of robustness to address the reliability gap. We are using a robot loading dishes in a dishwasher as an example task. Our goal is not to design a robot that loads the dishwasher, but rather to use this task as a means to develop the tools and algorithms that can in turn be applied in many different applications.

Our focus is not on hardware, which is why we are using a factory robot arm in this demonstration rather than designing one that would be more appropriate for the home kitchen.

The robot in our demonstration uses stereo cameras mounted around the sink and deep learning algorithms to perceive objects in the sink. There are many robots out there today that can pick up almost any object — random object clutter clearing has become a standard benchmark robotics challenge. In clutter clearing, the robot doesn’t require much understanding about an object — perceiving the basic geometry is enough.

For example, the algorithm doesn’t need to recognize whether the object is a plush toy, a toothbrush, or a coffee mug. Given this, these systems are also relatively limited in what they can do with those objects; for the most part, they can only pick up the objects and drop them in another location. In the robotics world, we sometimes refer to these robots as “pick and drop.”

Loading the dishwasher is actually significantly harder than what most roboticists are currently demonstrating, and it requires considerably more understanding about the objects. Not only does the robot have to recognize a mug or a plate or “clutter,” but it has to also understand the shape, position, and orientation of each object in order to place it accurately in the dishwasher.
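The difference between “pick and drop” and pose-aware placement is easy to see in code: placement needs the object’s full pose expressed in the target’s frame, not just a grasp point. The homogeneous-transform sketch below uses invented frames and offsets, not TRI’s software.

```python
import numpy as np

# "Pick and drop" needs only a grasp point; pose-aware placement needs the
# object's full pose so a placement can be computed in the target's frame.
# Frames and offsets below are invented for illustration.

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Perception output: the plate's pose in the camera frame (assumed values).
T_cam_plate = make_pose(np.eye(3), [0.10, -0.05, 0.60])
# Extrinsics: the camera's pose in the robot base frame (assumed calibration).
T_base_cam = make_pose(np.eye(3), [0.30, 0.00, 0.50])
# Target pose of a dishwasher rack slot in the base frame (assumed).
T_base_slot = make_pose(np.eye(3), [0.55, 0.20, 0.15])

T_base_plate = T_base_cam @ T_cam_plate
# Motion the arm must impart to carry the plate from its pose to the slot.
T_correction = T_base_slot @ np.linalg.inv(T_base_plate)
print(np.round(T_correction, 3))
```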

TRI’s work in progress shows not only that this is possible, but that it can be done with robustness that allows the robot to continuously operate for hours without disruption.


Getting a grasp on household tasks

Our manipulation robot has a relatively simple hand — a two-fingered gripper. The hand can make relatively simple grasps on a mug, but its ability to pick up a plate is more subtle. Plates are large and may be stacked, so we have to execute a complex “contact-rich” maneuver that slides one gripper finger under and between plates in order to get a firm hold. This is a simple example of the type of dexterity that humans achieve easily, but that we rarely see in robust robotics applications.

Silverware can also be tricky — it is small and shiny, which makes it hard to see with a machine-learning camera. Plus, given that the robot hand is relatively large compared to the smaller sink, the robot occasionally needs to stop and nudge the silverware to the center of the sink in order to do the pick. Our system can also detect when an object is not a mug, plate, or silverware, label it as “clutter,” and move it to a “discard” bin.

Connecting all of these pieces is a sophisticated task planner, which is constantly deciding what task the robot should execute next. This task planner decides whether it should pull out the bottom drawer of the dishwasher to load some plates, pull out the middle drawer for mugs, or pull out the top drawer for silverware.

Like the other components, we have made it resilient — if the drawer gets suddenly closed when it needs to be open, the robot will stop, put down the object on the countertop, and pull the drawer back out to try again. This response shows how different this capability is from that of a typical precision, repetitive factory robot, which is usually isolated from human contact and environmental randomness.
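A toy version of that recovery logic, sketched below, shows the control flow: if a precondition breaks mid-task, the robot parks the object and re-establishes the precondition instead of failing outright. This is an illustration of the behavior described, not TRI’s planner.

```python
# Toy version of the resilient task loop: if a precondition breaks mid-task
# (the drawer gets closed), park the object and re-establish the
# precondition instead of failing. A sketch of the behavior, not TRI's code.

def load_object(category, world, attempts=2):
    drawer = {"plate": "bottom", "mug": "middle", "silverware": "top"}[category]
    world["drawer_open"][drawer] = True              # pull the drawer out
    if world.pop("human_closes_drawer", False):      # injected disturbance
        world["drawer_open"][drawer] = False
    if not world["drawer_open"][drawer]:             # precondition broken
        print(f"drawer closed mid-task: parking the {category} and retrying")
        if attempts > 0:
            return load_object(category, world, attempts - 1)
        return False
    world["loaded"].append(category)
    return True

world = {"drawer_open": {}, "loaded": [], "human_closes_drawer": True}
load_object("mug", world)
print(world["loaded"])    # ['mug'] after one recovery
```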


Simulation key to success

The cornerstone of TRI’s approach is the use of simulation. Simulation gives us a principled way to engineer and test systems of this complexity with incredible task diversity and machine learning and artificial intelligence components. It allows us to understand what level of performance the robot will have in your home with your mugs, even though we haven’t been able to test in your kitchen during our development.

An exciting achievement is that we have made great strides in making simulation robust enough to handle the visual and mechanical complexity of this dishwasher loading task and on closing the “sim to real” gap. We are now able to design and test in simulation and have confidence that the results will transfer to the real robot. At long last, we have reached a point where we do nearly all of our development in simulation, which has traditionally not been the case for robotic manipulation research.

We can run many more tests in simulation, and more diverse ones. We are constantly generating random scenarios that test the individual components of the dish loading as well as the end-to-end performance.

Let me give you a simple example of how this works. Consider the task of extracting a single mug from the sink.  We generate scenarios where we place the mug in all sorts of random configurations, testing to find “corner cases” — rare situations where our perception algorithms or grasping algorithms might fail. We can vary material properties and lighting conditions. We even have algorithms for generating random, but reasonable, shapes of the mug, generating everything from a small espresso cup to a portly cylindrical coffee mug.

We conduct simulation testing through the night, and every morning we receive a report that gives us new failure cases that we need to address.
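In outline, that overnight loop amounts to sampling randomized scenarios, running the stack in simulation, and logging failures for the morning report. In the sketch below the simulator call is a random stub standing in for the real perception and grasping pipeline.

```python
import random

# Sample mug geometry, pose, lighting, and friction; run the (stubbed)
# perception + grasping stack in simulation; log failures for triage.

def random_scenario(rng):
    return {
        "mug_height_m": rng.uniform(0.05, 0.12),   # espresso cup .. tall mug
        "mug_radius_m": rng.uniform(0.025, 0.05),
        "pose_xy_yaw": (rng.uniform(-0.2, 0.2), rng.uniform(-0.2, 0.2),
                        rng.uniform(0.0, 3.14159)),
        "light_lux": rng.uniform(50, 2000),
        "friction": rng.uniform(0.2, 1.0),
    }

def simulated_grasp_succeeds(scenario, rng):
    # Stand-in for the real pipeline; pretend 2% of scenarios are corner cases.
    return rng.random() > 0.02

rng = random.Random(0)
failures = []
for i in range(10_000):                       # the overnight run
    scenario = random_scenario(rng)
    if not simulated_grasp_succeeds(scenario, rng):
        failures.append((i, scenario))        # goes into the morning report
print(f"{len(failures)} new failure cases to triage")
```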

Early on, those failures were relatively easy to find, and easy to fix. Sometimes they are failures of the simulator — something happened in the simulator that could never have happened in the real world — and sometimes they are problems in our perception or grasping algorithms. We have to fix all of these failures.


TRI is using an industrial robot for household tasks to test its algorithms. Source: TRI

As we continue down this road to robustness, the failures are getting more rare and more subtle. The algorithms that we use to find those failures also need to get more advanced. The search space is so huge, and the performance of the system so nuanced, that finding the corner cases efficiently becomes our core research challenge.

Although we are exploring this problem in the kitchen sink, the core ideas and algorithms are motivated by, and are applicable to, related problems such as verifying automated driving technologies.

‘Repairing’ algorithms

The next piece of our work focuses on the development of algorithms to automatically “repair” the perception algorithm or controller whenever we find a new failure case. Because we are using simulation, we can test our changes against not only this newly discovered scenario, but also make sure that our changes also work for all of the other scenarios that we’ve discovered in the preceding tests.

Of course, it’s not enough to fix this one test. We have to make sure we also do not break all of the other tests that passed before. It’s possible to imagine a not-so-distant future where this repair can happen directly in your kitchen, whereby if one robot fails to handle your mug correctly, then all robots around the world learn from that mistake.
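The accept-or-reject logic is ordinary regression testing over an ever-growing archive of discovered failure scenarios, roughly as in this sketch (with a stub simulator):

```python
# A candidate fix must pass the newly discovered failure scenario *and*
# every previously discovered scenario before it is accepted.

def accept_repair(candidate_policy, new_failure, archive, run_in_sim):
    if not run_in_sim(candidate_policy, new_failure):
        return False                  # the fix doesn't even fix the new case
    for scenario in archive:          # full regression sweep
        if not run_in_sim(candidate_policy, scenario):
            return False              # the fix broke something that worked
    archive.append(new_failure)       # the new case joins the regression suite
    return True

# Demo with a stub simulator that accepts everything.
archive = [{"id": 0}]
print(accept_repair("policy_v2", {"id": 1}, archive,
                    run_in_sim=lambda policy, scenario: True))   # True
```

Each accepted repair adds the new scenario to the suite, so the bar rises over time.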

We are committed to achieving dexterity and reliability in open-world manipulation. Loading a dishwasher is just one example in a series of experiments we will be using at TRI to focus on this problem.

It’s a long journey, but ultimately it will produce capabilities that will bring more advanced robots into the home. When this happens, we hope that older adults will have the help they need to age in place with dignity, working with a robotic helper that will amplify their capabilities, while allowing more independence, longer.

Editor’s note: This post by Dr. Russ Tedrake, vice president of robotics research at TRI and a professor at the Massachusetts Institute of Technology, is republished with permission from the Toyota Research Institute.

AMP Robotics announces largest deployment of AI-guided recycling robots


AMP Robotics deployment at SSR in Florida. Source: Business Wire

DENVER — AMP Robotics Corp., a pioneer in artificial intelligence and robotics for the recycling industry, today announced the further expansion of AI-guided robots for recycling municipal solid waste at Single Stream Recyclers LLC. This follows Single Stream Recyclers’ recent unveiling of its first installation of AMP systems at its state-of-the-art material recovery facility in Florida, the first of its kind in the state.

Single Stream Recyclers (SSR) currently operates six AMP Cortex single-robot systems at its 100,000 square-foot facility in Sarasota. The latest deployment will add another four AMP Cortex dual-robot systems (DRS), bringing the total deployment to 14 robots. The AMP Cortex DRS uses two high-speed precision robots that sort, pick, and place materials. The robots are installed on a number of different sorting lines throughout the facility and will process plastics, cartons, paper, cardboard, metals, and other materials.

“Robots are the future of the recycling industry,” said John Hansen, co-owner of SSR. “Our investment with AMP is vital to our goal of creating the most efficient recycling operation possible, while producing the highest value commodities for resale.”

“AMP’s robots are highly reliable and can consistently pick 70-80 items a minute as needed, twice as fast as humanly possible and with greater accuracy,” added Eric Konik, co-owner of SSR. “This will help us lower cost, remove contamination, increase the purity of our commodity bales, divert waste from the landfill, and increase overall recycling rates.”

AMP Neuron AI guides materials sorting

The AMP Cortex robots are guided by the AMP Neuron AI platform to perform tasks. AMP Neuron applies computer vision and machine learning to recognize different colors, textures, shapes, sizes, and patterns to identify material characteristics.

Accurate down to the brand of a package, the system transforms millions of images into data, directing the robots to pick and place targeted material for recycling. The AI platform digitizes the material stream, capturing data on what goes in and out so that informed decisions can be made about operations.
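Generically, a sort-and-pick loop of this kind pairs per-item classification with a pick plan that compensates for belt motion. The sketch below is an illustration with invented names and thresholds, not AMP’s software:

```python
from dataclasses import dataclass

# A vision model labels each item on the belt; items matching the target
# materials are assigned picks, offset by how far the belt will move.

@dataclass
class Detection:
    label: str          # e.g. "PET bottle", "carton", "fiber"
    confidence: float
    x_mm: float         # position across the belt
    y_mm: float         # position along the belt at detection time

def plan_picks(detections, target_labels, belt_speed_mm_s, horizon_s=1.0):
    """Return (x, y, label) picks reachable within the planning horizon."""
    picks = []
    for d in detections:
        if d.label in target_labels and d.confidence > 0.8:
            y_at_pick = d.y_mm + belt_speed_mm_s * horizon_s   # belt motion
            picks.append((d.x_mm, y_at_pick, d.label))
    return picks

dets = [Detection("PET bottle", 0.95, 120.0, 40.0),
        Detection("fiber", 0.60, 300.0, 55.0)]
print(plan_picks(dets, {"PET bottle", "carton"}, belt_speed_mm_s=2500.0))
```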

“SSR has built a world-class facility that sets the bar for modern recycling. John, Eric and their team are at the forefront of their industry and we are grateful to be a part of their plans,” said Matanya Horowitz, CEO of AMP Robotics. “SSR represents the most comprehensive application of AI and robotics in the recycling industry, a major milestone not only for us, but for the advancement of the circular economy.”

The new systems will be installed this summer. Upon completion, AMP’s installation at SSR is believed to be the single largest application of AI-guided robots for recycling in the United States, and likely the world. In addition to Florida, AMP has installations at numerous facilities across the country, including in California, Colorado, Indiana, Minnesota, and Wisconsin, with many more planned. Earlier this spring, AMP expanded globally by partnering with Ryohshin Ltd. to bring robotic recycling to Japan.

About AMP Robotics

AMP Robotics is transforming the economics of recycling with AI-guided robots. The company’s high-performance industrial robotics system, AMP Cortex, precisely automates the identification, sorting, and processing of material streams to extract maximum value for businesses that recycle municipal solid waste, e-waste, and construction and demolition debris.

The AMP Neuron AI platform operates AMP Cortex using advanced computer vision and machine learning to continuously train itself by processing millions of material images within an ever-expanding neural network that experientially adapts to changes in a facility’s material stream.

About Single Stream Recyclers

Single Stream Recyclers is a materials recovery facility in Sarasota, Fla. It processes materials from all over the west coast of Florida. The facility sorts, bales, and ships aluminum, cardboard, food and beverage cartons, glass, paper, plastics, metal, and other recyclables from residential curbside and commercial recycling collection. SSR is heavily invested in technology to help create the best possible end products and reduce contamination as well as residue.

Researchers building modular, self-programming robots to improve HRI

Many work processes would be almost unthinkable today without robots. But robots operating in manufacturing facilities have often posed risks to workers because they are not responsive enough to their surroundings.

To make it easier for people and robots to work in close proximity in the future, Prof. Matthias Althoff of the Technical University of Munich (TUM) has developed a new system called IMPROV that uses interconnectable modules for self-programming and self-verification.

When companies use robots to produce goods, they generally have to position their automatic helpers in safety cages to reduce the risk of injury to people working nearby. A new system could soon free the robots from their cages and thus transform standard practices in the world of automation.

Althoff has developed a toolbox principle for the simple assembly of safe robots using various components. The modules can be combined in almost any way desired, enabling companies to customize their robots for a wide range of tasks – or simply replace damaged components. Althoff’s system was presented in a paper in the June 2019 issue of Science Robotics.

Built-in chip enables the robot to program itself

Robots that can be configured individually using a set of components have been seen before. However, each new model required expert programming before going into operation. Althoff has equipped each module in his IMPROV robot toolbox with a chip that enables every modular robot to program itself on the basis of its own individual toolkit.
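One way to picture this: if each module’s chip reports a small self-description, the controller can compose a robot model from whatever modules happen to be attached. The planar forward-kinematics sketch below uses an invented record format, not TUM’s actual interface.

```python
import math

# Each module's chip reports a small self-description; the controller
# composes a robot model from whatever is attached. Record format invented.

modules = [                            # as read from the attached modules
    {"type": "revolute", "length_m": 0.30},
    {"type": "revolute", "length_m": 0.25},
    {"type": "revolute", "length_m": 0.10},
]

def forward_kinematics(joint_angles_rad):
    """Planar FK composed automatically from the connected modules."""
    x = y = theta = 0.0
    for module, q in zip(modules, joint_angles_rad):
        theta += q
        x += module["length_m"] * math.cos(theta)
        y += module["length_m"] * math.sin(theta)
    return x, y, theta

print(forward_kinematics([0.3, -0.2, 0.1]))   # end-effector pose (x, y, heading)
```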

In the Science Robotics paper, the researchers said “self-programming of high-level tasks was not considered in this work. The created models were used for automatically synthesizing model-based controllers, as well as for the following two aspects.”

Self-verification

To account for dynamically changing environments, the robot formally verified, by itself, whether any human could be harmed through its planned actions during its operation. A planned motion was verified as safe if none of the possible future movements of surrounding humans leads to a collision.

Because there are uncountably many possible future motions of surrounding humans, Althoff bounded the set of possible motions using reachability analysis. He said the inherently safe approach renders robot cages unnecessary in many applications.
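The verification idea can be pictured with a deliberately crude over-approximation: grow a disc around the human’s last observed position at a maximum-speed bound, and certify a planned robot path only if it stays clear at every time step. Real reachability analysis uses far tighter set representations than the sketch below, which only shows the “safe against all possible futures” logic.

```python
import math

# Over-approximate everywhere a human could be within t seconds (a disc
# grown at a maximum-speed bound plus arm reach), and certify the planned
# robot path only if it stays clear at every step.

def human_reach_radius(t_s, v_max_ms=2.0, arm_m=0.9):
    return v_max_ms * t_s + arm_m     # ball around the last observed position

def motion_is_safe(robot_path, human_xy, dt_s=0.1, clearance_m=0.1):
    """robot_path: (x, y) waypoints sampled every dt_s seconds."""
    for k, (x, y) in enumerate(robot_path):
        d = math.hypot(x - human_xy[0], y - human_xy[1])
        if d <= human_reach_radius(k * dt_s) + clearance_m:
            return False              # some possible human motion collides
    return True

path = [(2.5 - 0.05 * k, 1.0) for k in range(20)]   # robot approaching
print(motion_is_safe(path, human_xy=(0.0, 1.0)))    # False: not verifiably safe
```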

Scientist Christina Miller working on the modular robot arm. Credit: A. Heddergott/TUM

Keeping an eye on the people working nearby

“Our modular design will soon make it more cost-effective to build working robots. But the toolbox principle offers an even bigger advantage: With IMPROV, we can develop safe robots that react to and avoid contact with people in their surroundings,” said Althoff.

With the chip installed in each module and the self-programming functionality, the robot is automatically aware of all data on the forces acting within it as well as its own geometry. That enables the robot to predict its own path of movement.

At the same time, the robot’s control center uses input from cameras installed in the room to collect data on the movements of people working nearby. Using this information, a robot programmed with IMPROV can model the potential next moves of all of the nearby workers. As a result, it can stop before coming into contact with a hand, for example – or with other approaching objects.

“With IMPROV, we can guarantee that the controls will function correctly. Because the robots are automatically programmed for all possible movements nearby, no human will be able to instruct them to do anything wrong,” said Althoff.

IMPROV shortens cycle times

For their toolbox set, the scientists used standard industrial modules for some parts, complemented by the necessary chips and new components from the 3D printer. In a user study, Althoff and his team showed that IMPROV not only makes working robots cheaper and safer – it also speeds them up: They take 36% less time to complete their tasks than previous solutions that require a permanent safety zone around a robot.

Editor’s Note: This article was republished from the Technical University of Munich.

Rutgers develops system to optimize automated packing


Rutgers computer scientists used artificial intelligence to control a robotic arm that provides a more efficient way to pack boxes, saving businesses time and money.

“We can achieve low-cost, automated solutions that are easily deployable. The key is to make minimal but effective hardware choices and focus on robust algorithms and software,” said the study’s senior author Kostas Bekris, an associate professor in the Department of Computer Science in the School of Arts and Sciences at Rutgers University-New Brunswick.

Bekris formed a team with Abdeslam Boularias and Jingjin Yu, both assistant professors of computer science, to deal with multiple aspects of the robot packing problem in an integrated way through hardware, 3D perception, and robust motion planning.

The scientists’ peer-reviewed study was published recently at the IEEE International Conference on Robotics and Automation, where it was a finalist for the Best Paper Award in Automation. The study coincides with the growing trend of deploying robots to perform logistics, retail, and warehouse tasks. Advances in robotics are accelerating at an unprecedented pace due to machine learning algorithms that allow for continuous experiments.

The video above shows a Kuka LBR iiwa robotic arm tightly packing objects from a bin into a shipping order box (five times actual speed). The researchers used two Intel RealSense SR300 depth-sensing cameras.

Pipeline in terms of control, data flow (green lines) and failure handling (red lines). The blocks identify the modules of the system. Credit: Rutgers University

Tightly packing products picked from an unorganized pile remains largely a manual task, even though it is critical to warehouse efficiency. Automating such tasks is important for companies’ competitiveness and allows people to focus on less menial and physically taxing work, according to the Rutgers scientific team.

The Rutgers study focused on placing objects from a bin into a small shipping box and tightly arranging them. This is a more difficult task for a robot compared with just picking up an object and dropping it into a box.

The researchers developed software and algorithms for their robotic arm. They used visual data and a simple suction cup, which doubles as a finger for pushing objects. The resulting system can topple objects to get a desirable surface for grabbing them. Furthermore, it uses sensor data to pull objects toward a targeted area and push objects together. During these operations, it uses real-time monitoring to detect and avoid potential failures.
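The failure-handling structure (the red lines in the figure above) can be sketched as a loop that falls back through recovery primitives before giving up on an object. The primitives below are stubs for illustration, not the Rutgers code.

```python
import random

# Try a grasp primitive for each object; on failure, fall back to recovery
# primitives (topple, pull) before flagging the object. All stubs.

PRIMITIVES = ["grasp_top", "topple_then_grasp", "pull_to_center_then_grasp"]

def execute(primitive, obj, rng):
    return rng.random() > 0.3      # stub: did the suction pick/placement hold?

def pack(objects, rng):
    packed, flagged = [], []
    for obj in objects:
        for primitive in PRIMITIVES:          # red-line failure handling
            if execute(primitive, obj, rng):
                packed.append((obj, primitive))
                break
        else:
            flagged.append(obj)               # re-queue or ask for help
    return packed, flagged

print(pack(["cube_a", "cube_b", "cube_c"], random.Random(1)))
```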

Since the study focused on packing cube-shaped objects, a next step would be to explore packing objects of different shapes and sizes. Another step would be to explore automatic learning by the robotic system after it’s given a specific task.

Editor’s Note: This article was republished with permission from Rutgers University.

Brain Corp Europe opens in Amsterdam


A BrainOS-powered autonomous floor scrubber. | Credit: Brain Corp

San Diego-based Brain Corp, the Softbank-backed developer of autonomous navigation systems, has opened its European headquarters in Amsterdam. The reason for the expansion is two-fold: it helps Brain better support partners who do business in Europe, and it helps Brain find additional engineering talent.

“Amsterdam is a fantastic gateway to Europe and has one of the largest airports in Europe,” Sandy Agnos, Brain’s Director of Global Business Development, told The Robot Report. “It’s very business and tech friendly. It is the second-fastest-growing tech community, talent-wise, in Europe.”

Brain hired Michel Spruijt to lead Brain Corp Europe. He will be tasked with driving sales of BrainOS-powered machines, providing partner support, and overseeing general operations throughout Europe. Agnos said Spruijt’s “previous experience growing an office from a few employees to over 100 was impressive to us.”

“Under Michel Spruijt’s guidance, our vision of a world where the lives of people are made safer, easier, more productive, and more fulfilling with the help of robots will extend into Europe,” said Eugene Izhikevich, Brain Corp’s Co-Founder and CEO.

Agnos said there will initially be about 12 employees at Brain Corp Europe who focus mostly on service and support. She added that Brain is recruiting software engineering talent and will continue to grow the Amsterdam office.

A rendering of how BrainOS-powered machines sense their environment. | Credit: Brain Corp

Brain planning worldwide expansion

The European headquarters marks the second international office in Brain’s global expansion. The company opened an office in Tokyo in 2017. This made sense for a couple of reasons. Japanese tech giant Softbank led Brain’s $114 million funding round in mid-2017 via the Softbank Vision Fund. And Softbank’s new autonomous floor cleaning robot, Whiz, uses Brain’s autonomous navigation stack.

Agnos said Brain is planning to add other regional offices after Amsterdam. The dates are in flux, but future expansion includes:

  • Further growth in Europe in 2020
  • Expansion in Asia Pacific, specifically Australia and Korea, in mid- to late-2020
  • South America afterwards

“We follow our partners’ needs,” said Agnos. “We are becoming a global company with support offices around the world. The hardest part is we can’t expand fast enough. Our OEM partners already have large, global customer bases. We need to have the right people and infrastructure in each location.”

BrainOS-powered robots

BrainOS, the company’s cloud-connected operating system, currently powers thousands of floor care robots across numerous environments. Brain recently partnered with Nilfisk, a Copenhagen, Denmark-based cleaning solutions provider that has been around for 110-plus years. Nilfisk is licensing the BrainOS platform for the production, deployment, and support of its robotic floor cleaners.

Walmart, the world’s largest retailer, has 360 BrainOS-powered machines cleaning its stores across the United States. A human needs to initially teach the BrainOS-powered machines the layout of the stores. But after that initial demo, BrainOS’ combination of off-the-shelf hardware, sensors, and software enable the floor scrubbers to navigate autonomously. Brain employs a collection of cameras, sensors and LiDAR to ensure safety and obstacle avoidance. All the robots are connected to a cloud-based reporting system that allows them to be monitored and managed.
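That workflow is the classic teach-and-repeat pattern. The toy sketch below shows the idea generically; it is not BrainOS internals.

```python
# Teaching pass: a human drives the route while poses are recorded.
# Autonomous pass: replay the route, detouring locally around obstacles.

taught_route = []

def record(pose_xy_theta):
    taught_route.append(pose_xy_theta)

class Scrubber:
    def go_to(self, wp): print("go_to", wp)
    def detour_around(self, wp): print("detour near", wp)

def repeat(robot, is_blocked):
    for wp in taught_route:
        if is_blocked(wp):
            robot.detour_around(wp)       # local obstacle avoidance
        robot.go_to(wp)                   # rejoin the taught route

for pose in [(0, 0, 0.0), (5, 0, 0.0), (5, 5, 1.57)]:
    record(pose)
repeat(Scrubber(), is_blocked=lambda wp: wp == (5, 0, 0.0))
```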

At ProMat 2019, Brain debuted AutoDelivery, a proof-of-concept autonomous delivery robot designed for retail stores, warehouses, and factories. AutoDelivery, which can tow several cart types, boasts cameras, 4G LTE connectivity, and routing algorithms that allow it to learn its way around a store. AutoDelivery isn’t slated for commercial launch until early 2020.

Izhikevich recently told The Robot Report that Brain is exploring other types of mobile applications, including delivery, eldercare, security and more. In July 2018, Brain led a $13.4 million Series B for Savioke, which makes autonomous delivery robots. For years, Savioke built its autonomous navigation stack from scratch using ROS.