Researchers building modular, self-programming robots to improve human-robot interaction (HRI)

Many work processes would be almost unthinkable today without robots. But robots operating in manufacturing facilities have often posed risks to workers because they are not responsive enough to their surroundings.

To make it easier for people and robots to work in close proximity in the future, Prof. Matthias Althoff of the Technical University of Munich (TUM) has developed a new system, called IMPROV, that uses interconnectable modules for self-programming and self-verification.

When companies use robots to produce goods, they generally have to position their automatic helpers in safety cages to reduce the risk of injury to people working nearby. A new system could soon free the robots from their cages and thus transform standard practices in the world of automation.

Althoff has developed a toolbox principle for the simple assembly of safe robots using various components. The modules can be combined in almost any way desired, enabling companies to customize their robots for a wide range of tasks – or simply replace damaged components. Althoff’s system was presented in a paper in the June 2019 issue of Science Robotics.

Built-in chip enables the robot to program itself

Robots that can be configured individually using a set of components have been seen before. However, each new model required expert programming before going into operation. Althoff has equipped each module in his IMPROV robot toolbox with a chip that enables every modular robot to program itself on the basis of its own individual toolkit.

In the Science Robotics paper, the researchers said “self-programming of high-level tasks was not considered in this work. The created models were used for automatically synthesizing model-based controllers, as well as for the following two aspects.”

Self-verification

To account for dynamically changing environments, the robot formally verified, by itself, whether any human could be harmed by its planned actions during operation. A planned motion was verified as safe only if none of the possible future movements of surrounding humans led to a collision.

Because uncountably many possible future motions of surrounding humans exist, Althoff bounded the set of possible motions using reachability analysis. Althoff said this inherently safe approach renders robot cages unnecessary in many applications.
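To make the idea concrete, below is a minimal Python sketch of a reachability-style safety check. It over-approximates both the robot’s planned swept volume and every possible human motion with axis-aligned bounding boxes and accepts the plan only if the sets never intersect over the planning horizon; the box values and horizon are illustrative assumptions, not IMPROV’s actual models.

```python
# Minimal sketch of a reachability-style safety check (illustrative, not IMPROV).
# Reachable sets are over-approximated by axis-aligned bounding boxes (AABBs).
from dataclasses import dataclass

@dataclass
class AABB:
    lo: tuple  # (x, y, z) lower corner, meters
    hi: tuple  # (x, y, z) upper corner, meters

    def intersects(self, other: "AABB") -> bool:
        # Two boxes overlap only if they overlap on every axis.
        return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i]
                   for i in range(3))

def plan_is_safe(robot_sets, human_sets) -> bool:
    """robot_sets[k] over-approximates the robot's swept volume at step k;
    human_sets[k] over-approximates every possible human motion at step k.
    The plan is verified safe only if the sets are disjoint at every step."""
    return all(not r.intersects(h) for r, h in zip(robot_sets, human_sets))

# Toy horizon: the robot stays near its base while a human may approach it.
robot_sets = [AABB((0.0, 0.0, 0.0), (0.4, 0.4, 1.2)) for _ in range(10)]
human_sets = [AABB((1.0 - 0.07 * k, 0.0, 0.0), (1.5, 0.6, 1.9)) for k in range(10)]
if not plan_is_safe(robot_sets, human_sets):
    print("Plan rejected: a human could be reached within the horizon; stop safely")
```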

Scientist Christina Miller working on the modular robot arm. Credit: A. Heddergott/TUM

Keeping an eye on the people working nearby

“Our modular design will soon make it more cost-effective to build working robots. But the toolbox principle offers an even bigger advantage: With IMPROV, we can develop safe robots that react to and avoid contact with people in their surroundings,” said Althoff.

With the chip installed in each module and the self-programming functionality, the robot is automatically aware of all data on the forces acting within it as well as its own geometry. That enables the robot to predict its own path of movement.

At the same time, the robot’s control center uses input from cameras installed in the room to collect data on the movements of people working nearby. Using this information, a robot programmed with IMPROV can model the potential next moves of all of the nearby workers. As a result, it can stop before coming into contact with a hand, for example – or with other approaching objects.

“With IMPROV we can guarantee that the controls will function correctly. Because the robots are automatically programmed for all possible movements nearby, no human will be able to instruct them to do anything wrong,” said Althoff.

IMPROV shortens cycle times

For their toolbox set, the scientists used standard industrial modules for some parts, complemented by the necessary chips and new components from the 3D printer. In a user study, Althoff and his team showed that IMPROV not only makes working robots cheaper and safer – it also speeds them up: They take 36% less time to complete their tasks than previous solutions that require a permanent safety zone around a robot.

Editor’s Note: This article was republished from the Technical University of Munich.

Rutgers develops system to optimize automated packing


Rutgers computer scientists used artificial intelligence to control a robotic arm that provides a more efficient way to pack boxes, saving businesses time and money.

“We can achieve low-cost, automated solutions that are easily deployable. The key is to make minimal but effective hardware choices and focus on robust algorithms and software,” said the study’s senior author Kostas Bekris, an associate professor in the Department of Computer Science in the School of Arts and Sciences at Rutgers University-New Brunswick.

Bekris, Abdeslam Boularias and Jingjin Yu, both assistant professors of computer science, formed a team to deal with multiple aspects of the robot packing problem in an integrated way through hardware, 3D perception and robust motion.

The scientists’ peer-reviewed study was published recently at the IEEE International Conference on Robotics and Automation, where it was a finalist for the Best Paper Award in Automation. The study coincides with the growing trend of deploying robots to perform logistics, retail and warehouse tasks. Advances in robotics are accelerating at an unprecedented pace due to machine learning algorithms that allow for continuous experiments.

The video above shows a Kuka LBR iiwa robotic arm tightly packing objects from a bin into a shipping order box (five times actual speed). The researchers used two Intel RealSense SR300 depth-sensing cameras.

Pipeline in terms of control, data flow (green lines) and failure handling (red lines). The blocks identify the modules of the system. | Credit: Rutgers University

Tightly packing products picked from an unorganized pile remains largely a manual task, even though it is critical to warehouse efficiency. Automating such tasks is important for companies’ competitiveness and allows people to focus on less menial and physically taxing work, according to the Rutgers scientific team.

The Rutgers study focused on placing objects from a bin into a small shipping box and tightly arranging them. This is a more difficult task for a robot compared with just picking up an object and dropping it into a box.

The researchers developed software and algorithms for their robotic arm. They used visual data and a simple suction cup, which doubles as a finger for pushing objects. The resulting system can topple objects to get a desirable surface for grabbing them. Furthermore, it uses sensor data to pull objects toward a targeted area and push objects together. During these operations, it uses real-time monitoring to detect and avoid potential failures.
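As a rough illustration of that control flow, here is a hedged sketch of a pick-and-pack loop with failure handling. The toy robot class and random “sensor” stubs below are illustrative stand-ins, not the Rutgers pipeline.

```python
# Hedged sketch of a pick-and-pack loop with failure handling, loosely following
# the steps described above. The toy robot and random "sensors" are stand-ins,
# not the Rutgers system.
import random

class StubRobot:
    """Toy robot whose primitives just print, to exercise the control flow."""
    def topple(self, item): print(f"topple {item} to expose a flat face")
    def pick(self, item): print(f"pick {item} with the suction cup")
    def place(self, item, slot): print(f"place {item} tightly at slot {slot}")
    def push_together(self, slot): print(f"push items together near slot {slot}")

def graspable(item): return random.random() > 0.3   # stand-in for 3D perception
def dropped(item): return random.random() < 0.2     # stand-in for failure monitoring

def pack_order(robot, order, max_retries=3):
    for slot, item in enumerate(order):
        for _ in range(max_retries):
            if not graspable(item):
                robot.topple(item)      # no flat surface yet: topple, then re-perceive
                continue
            robot.pick(item)
            if dropped(item):           # real-time monitoring caught a failed grasp
                continue
            robot.place(item, slot)
            robot.push_together(slot)   # the suction cup doubles as a pushing finger
            break
        else:
            print(f"giving up on {item} after {max_retries} attempts")

pack_order(StubRobot(), ["mug", "soap", "tape"])
```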

Since the study focused on packing cube-shaped objects, a next step would be to explore packing objects of different shapes and sizes. Another step would be to explore automatic learning by the robotic system after it’s given a specific task.

Editor’s Note: This article was republished with permission from Rutgers University.

Stanford Doggo robot acrobatically traverses tough terrain

Putting their own twist on robots that amble through complicated landscapes, the Stanford Student Robotics club’s Extreme Mobility team at Stanford University has developed a four-legged robot that is not only capable of performing acrobatic tricks and traversing challenging terrain, but is also designed with reproducibility in mind. Anyone who wants their own version of the robot, dubbed Stanford Doggo, can consult comprehensive plans, code and a supply list that the students have made freely available online.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Nathan Kau, ’20, a mechanical engineering major and lead for Extreme Mobility. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

Whereas other similar robots can cost tens or hundreds of thousands of dollars and require customized parts, the Extreme Mobility students estimate the cost of Stanford Doggo at less than $3,000 — including manufacturing and shipping costs. Nearly all the components can be bought as-is online. The Stanford students said they hope the accessibility of these resources inspires a community of Stanford Doggo makers and researchers who develop innovative and meaningful spinoffs from their work.

Stanford Doggo can already walk, trot, dance, hop, jump, and perform the occasional backflip. The students are working on a larger version of their creation — which is currently about the size of a beagle — but they will take a short break to present Stanford Doggo at the International Conference on Robotics and Automation (ICRA) on May 21 in Montreal.


A hop, a jump and a backflip

In order to make Stanford Doggo replicable, the students built it from scratch. This meant spending a lot of time researching easily attainable supplies and testing each part as they made it, without relying on simulations.

“It’s been about two years since we first had the idea to make a quadruped. We’ve definitely made several prototypes before we actually started working on this iteration of the dog,” said Natalie Ferrante, Class of 2019, a mechanical engineering co-terminal student and Extreme Mobility Team member. “It was very exciting the first time we got him to walk.”

Stanford Doggo’s first steps were admittedly toddling, but now the robot can maintain a consistent gait and desired trajectory, even as it encounters different terrains. It does this with the help of motors that sense external forces on the robot and determine how much force and torque each leg should apply in response. The motors recompute these outputs 8,000 times a second and are essential to the robot’s signature dance: a bouncy boogie that hides the fact that it has no springs.

Instead, the motors act like a system of virtual springs, smoothly but perkily rebounding the robot into proper form whenever they sense it’s out of position.
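A minimal sketch of that “virtual spring” idea follows: a high-rate proportional-derivative torque law pulls each joint back toward its commanded angle, so the motor behaves like a spring with software-defined stiffness and damping. The gains, unit inertia, and 8 kHz loop below are illustrative assumptions, not Stanford Doggo’s firmware.

```python
# Minimal sketch of a "virtual spring": a high-rate proportional-derivative torque
# law pulls each joint toward its commanded angle. Gains, unit inertia, and the
# 8 kHz loop are illustrative assumptions, not Stanford Doggo's firmware.

def virtual_spring_torque(q, q_des, q_dot, k=25.0, d=10.0):
    """q, q_des: joint angle and target (rad); q_dot: joint velocity (rad/s).
    Returns a torque that emulates a spring of stiffness k with damping d."""
    return -k * (q - q_des) - d * q_dot

dt = 1.0 / 8000.0        # the control loop recomputes ~8,000 times per second
q, q_dot = 0.10, 0.0     # joint starts 0.1 rad away from its target of 0.0
for _ in range(8000):    # simulate one second with toy unit-inertia dynamics
    tau = virtual_spring_torque(q, 0.0, q_dot)
    q_dot += tau * dt
    q += q_dot * dt
print(f"joint pulled back toward target: q = {q:.4f} rad")
```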

Among the skills and tricks the team added to the robot’s repertoire, the students were exceptionally surprised at its jumping prowess. Running Stanford Doggo through its paces one (very) early morning in the lab, the team realized it was effortlessly popping up 2 feet in the air. By pushing the limits of the robot’s software, Stanford Doggo was able to jump 3, then 3½ feet off the ground.

“This was when we realized that the robot was, in some respects, higher performing than other quadruped robots used in research, even though it was really low cost,” recalled Kau.

Since then, the students have taught Stanford Doggo to do a backflip – but always on padding to allow for rapid trial and error experimentation.


Stanford students have developed Doggo, a relatively low-cost four-legged robot that can trot, jump and flip. (Image credit: Kurt Hickman)

What will Stanford Doggo do next?

If these students have their way, the future of Stanford Doggo will be in the hands of the masses.

“We’re hoping to provide a baseline system that anyone could build,” said Patrick Slade, graduate student in aeronautics and astronautics and mentor for Extreme Mobility. “Say, for example, you wanted to work on search and rescue; you could outfit it with sensors and write code on top of ours that would let it climb rock piles or excavate through caves. Or maybe it’s picking up stuff with an arm or carrying a package.”

That’s not to say they aren’t continuing their own work. Extreme Mobility is collaborating with the Robotic Exploration Lab of Zachary Manchester, assistant professor of aeronautics and astronautics at Stanford, to test new control systems on a second Stanford Doggo. The team has also finished constructing a robot twice the size of Stanford Doggo that can carry about 6 kilograms of equipment. Its name is Stanford Woofer.

Note: This article is republished from the Stanford University News Service.

Hank robot from Cambridge Consultants offers sensitive grip to industrial challenges

Robotics developers have taken a variety of approaches to try to equal human dexterity. Cambridge Consultants today unveiled Hank, a robot with flexible robotic fingers inspired by the human hand. Hank uses a pioneering sensory system embedded in its pneumatic fingers, providing a sophisticated sense of touch and slip. It is intended to emulate the human ability to hold and grip delicate objects using just the right amount of pressure.

Cambridge Consultants stated that Hank could have valuable applications in agriculture and warehouse automation, where the ability to pick small, irregular, and delicate items has been a “grand challenge” for those industries.

Picking under pressure

While warehouse automation has taken great strides in the past decade, today’s robots cannot emulate human dexterity at the point of picking diverse individual items from larger containers, said Cambridge Consultants. E‑commerce giants are under pressure to deliver more quickly and at a cheaper price, but still require human operators for tasks that can be both difficult and tedious.

“The logistics industry relies heavily on human labor to perform warehouse picking and packing and has to deal with issues of staff retention and shortages,” said Bruce Ackman, logistics commercial lead at Cambridge Consultants. “Automation of this part of the logistics chain lags behind the large-scale automation seen elsewhere.”

Given additional human-like senses, a robot can feel and orient its grip around an object, applying just enough force, and can adjust or abandon the grasp if the object slips. Other robots with articulated arms used in warehouse automation tend to require complex grasping algorithms, costly sensing devices, and vision sensors to accurately position the end effector (fingers) and grasp an object.


Hank uses sensors for a soft touch

Hank uses soft robotic fingers controlled by airflows that can flex the finger and apply force. The fingers are controlled individually in response to the touch sensors. This means that the end effector does not require millimeter-accurate positioning to grasp an object. Like human fingers, they close until they “feel” the object, said Cambridge Consultants.

With the ability to locate an object, adjust its overall position, and then grasp that object, Hank can apply increased force if a slip is detected and immediately recognize a mishandled pick if the object is dropped.
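The following hedged sketch illustrates that grip strategy in code: close the fingers until touch is sensed, add pressure only when slip is detected, and abandon the pick if the object is lost. The sensor stubs and pressure values are hypothetical, not Cambridge Consultants’ implementation.

```python
# Hedged sketch of the grip strategy: close until touch is felt, add pressure only
# when slip is detected, abandon if the object is lost. The sensor stubs and
# pressure numbers are hypothetical, not Cambridge Consultants' implementation.
import random

def read_touch(): return random.random() < 0.6       # stand-in touch sensor
def read_slip(): return random.random() < 0.3        # stand-in slip sensor
def object_present(): return random.random() < 0.95  # stand-in drop detection

def grasp(max_pressure=1.0, step=0.05, hold_cycles=20):
    pressure = 0.0
    # Close until the fingers "feel" the object, not to a pre-computed position.
    while pressure < max_pressure and not read_touch():
        pressure += step
    for _ in range(hold_cycles):
        if not object_present():
            print("pick mishandled: object dropped, abandoning")
            return None
        if read_slip():
            # Apply just enough extra force to arrest the slip.
            pressure = min(max_pressure, pressure + step)
    return pressure

print("final grip pressure:", grasp())
```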

Cambridge Consultants claimed that Hank moves a step beyond legacy approaches to this challenge, which tend to rely on pinchers and suction appendages to grasp items, limiting the number and type of objects they can pick and pack.

“Hank’s world-leading sensory system is a game changer for the logistics industry, making actions such as robotic bin picking and end-to-end automated order fulfillment possible,” said Ackman. “Adding a sense of touch and slip, generated by a single, low-cost sensor, means that Hank’s fingers could bring new efficiencies to giant distribution centers.”

Molded from silicone, Hank’s fingers are hollow and its novel sensors are embedded during molding, with an air chamber running up the center. The finger surface is flexible, food-safe, and cleanable. As a low-cost consumable, the fingers can simply be replaced if they become damaged or worn.

With offices in Cambridge in the U.K.; Boston, Mass.; and Singapore, Cambridge Consultants develops breakthrough products, creates and licenses intellectual property, and provides business and technology consulting services for clients worldwide. It is part of Altran, a global leader in engineering and research and development services. For more than 35 years, Altran has provided design expertise in the automotive, aerospace, defense, industrial, and electronics sectors, among others.

SwRI system tests GPS spoofing of autonomous vehicles


Southwest Research Institute has developed a cyber security system to test for vulnerabilities in automated vehicles and other technologies that use GPS receivers for positioning, navigation and timing.

“This is a legal way for us to improve the cyber resilience of autonomous vehicles by demonstrating a transmission of spoofed or manipulated GPS signals to allow for analysis of system responses,” said Victor Murray, head of SwRI’s Cyber Physical Systems Group in the Intelligent Systems Division.

GPS spoofing is a malicious attack that broadcasts incorrect signals to deceive GPS receivers, while GPS manipulation modifies a real GPS signal. GPS satellites orbiting the Earth pinpoint physical locations of GPS receivers embedded in everything from smartphones to ground vehicles and aircraft.

Illustration of a GPS spoofing attack. Credit: Simon Parkinson

SwRI designed the new tool to meet United States federal regulations. Testing for GPS vulnerabilities in a mobile environment had previously been difficult because federal law prohibits over-the-air re-transmission of GPS signals without prior authorization.

SwRI’s spoofing test system places a physical component on or in line with a vehicle’s GPS antenna and a ground station that remotely controls the GPS signal. The system receives the actual GPS signal from an on-vehicle antenna, processes it and inserts a spoofed signal, and then broadcasts the spoofed signal to the GPS receiver on the vehicle. This gives the spoofing system full control over a GPS receiver.
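The effect on a vehicle can be illustrated with a purely conceptual sketch: a falsified position that drifts slowly away from the true fix will walk a lane-keeping controller off its intended path, which is how a course deviation on the order of 10 meters can be induced. The drift rate and controller gain below are toy assumptions, not SwRI’s test system.

```python
# Conceptual illustration only: a falsified position that drifts slowly away from
# the true fix walks a lane-keeping controller off its intended path. The drift
# rate and controller gain are toy assumptions, not SwRI's test system.

def spoofed_fix(true_east_m, t, drift_rate=0.1):
    """Return a falsified east coordinate that drifts away at drift_rate m/s."""
    return true_east_m + drift_rate * t

true_east = 0.0                           # the vehicle actually starts on the lane center
for t in range(100):                      # 100 seconds of driving
    measured = spoofed_fix(true_east, t)  # the receiver only sees the spoofed fix
    correction = -0.5 * measured          # proportional lane-keeping on that fix
    true_east += correction               # ...which moves the *real* vehicle off course
print(f"actual lateral offset after spoofing: {true_east:.1f} m")
```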

Related: Watch SwRI engineers trick object detection system

While testing the system on an automated vehicle on a test track, engineers were able to alter the vehicle’s course by 10 meters, effectively causing it to drive off the road. The vehicle could also be forced to turn early or late.

“Most automated vehicles will not rely solely on GPS because they use a combination of sensors such as lidar, camera machine vision, GPS and other tools,” Murray said. “However, GPS is a basis for positioning in a lot of systems, so it is important for manufacturers to have the ability to design technology to address vulnerabilities.”

SwRI develops automotive cybersecurity solutions on embedded systems and internet of things (IoT) technology featuring networks and sensors. Connected and autonomous vehicles are vulnerable to cyber threats because they broadcast and receive signals for navigation and positioning.

The new system was developed through SwRI’s internal research program. Future related research will explore the role of GPS spoofing in drones and aircraft.

Editor’s Note: This article was republished from SwRI’s website.

Researchers back Tesla’s non-LiDAR approach to self-driving cars


 

If you haven’t heard, Tesla CEO Elon Musk is not a LiDAR fan. Most companies working on autonomous vehicles – including Ford, GM Cruise, Uber and Waymo – think LiDAR is an essential part of the sensor suite. But not Tesla. Its vehicles don’t have LiDAR and rely on radar, GPS, maps and other cameras and sensors.

“LiDAR is a fool’s errand,” Musk said at Tesla’s recent Autonomy Day. “Anyone relying on LiDAR is doomed. Doomed! [They are] expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.”

“LiDAR is lame,” Musk added. “They’re gonna dump LiDAR, mark my words. That’s my prediction.”

While not as anti-LiDAR as Musk, it appears researchers at Cornell University agree with his LiDAR-less approach. Using two inexpensive cameras on either side of a vehicle’s windshield, Cornell researchers have discovered they can detect objects with nearly LiDAR’s accuracy and at a fraction of the cost.

The researchers found that analyzing the captured images from a bird’s-eye view, rather than the more traditional frontal view, more than tripled their accuracy, making stereo cameras a viable and low-cost alternative to LiDAR.

Tesla’s Sr. Director of AI Andrej Karpathy outlined a nearly identical strategy during Autonomy Day.

“The common belief is that you couldn’t make self-driving cars without LiDARs,” said Kilian Weinberger, associate professor of computer science at Cornell and senior author of the paper Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving. “We’ve shown, at least in principle, that it’s possible.”

LiDAR uses lasers to create 3D point maps of its surroundings, measuring objects’ distances via the speed of light. Stereo cameras rely on two perspectives to establish depth, but critics say their accuracy in object detection is too low. However, the Cornell researchers say the data they captured from stereo cameras was nearly as precise as LiDAR data. The gap in accuracy emerged when the stereo cameras’ data was being analyzed, they say.

“When you have camera images, it’s so, so, so tempting to look at the frontal view, because that’s what the camera sees,” Weinberger says. “But there also lies the problem, because if you see objects from the front then the way they’re processed actually deforms them, and you blur objects into the background and deform their shapes.”

Cornell researchers compare AVOD with LiDAR, pseudo-LiDAR, and frontal-view (stereo). Ground-truth boxes are in red, predicted boxes in green; the observer in the pseudo-LiDAR plots (bottom row) is on the very left side looking to the right. The frontal-view approach (right) even miscalculates the depths of nearby objects and misses far-away objects entirely.

For most self-driving cars, the data captured by cameras or sensors is analyzed using convolutional neural networks (CNNs). The Cornell researchers say CNNs are very good at identifying objects in standard color photographs, but they can distort the 3D information if it’s represented from the front. Again, when Cornell researchers switched the representation from a frontal perspective to a bird’s-eye view, the accuracy more than tripled.
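The core transformation can be sketched as follows: back-project each pixel of a stereo depth map into a 3D point using the pinhole camera model, then rasterize the points into a top-down occupancy grid that a detector can consume instead of the frontal image. The camera intrinsics and grid ranges below are illustrative values, not the parameters used in the Cornell paper.

```python
# Hedged sketch of the pseudo-LiDAR idea: back-project a stereo depth map into a
# 3D point cloud with the pinhole camera model, then rasterize it into a top-down
# (bird's-eye) grid for the detector. Intrinsics and ranges are illustrative values.
import numpy as np

def depth_to_points(depth, fx=721.5, fy=721.5, cx=609.6, cy=172.8):
    """depth: HxW metric depth map (m). Returns Nx3 points (x right, y down, z forward)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def birds_eye_grid(points, x_range=(-40, 40), z_range=(0, 80), cell=0.1):
    """Occupancy grid over the ground plane (x, z): the representation that avoids
    the frontal-image distortions described above."""
    cols = int((x_range[1] - x_range[0]) / cell)
    rows = int((z_range[1] - z_range[0]) / cell)
    grid = np.zeros((rows, cols), dtype=np.uint8)
    xi = ((points[:, 0] - x_range[0]) / cell).astype(int)
    zi = ((points[:, 2] - z_range[0]) / cell).astype(int)
    keep = (xi >= 0) & (xi < cols) & (zi >= 0) & (zi < rows)
    grid[zi[keep], xi[keep]] = 1
    return grid

depth = np.full((375, 1242), 20.0)       # stand-in for a stereo depth estimate
bev = birds_eye_grid(depth_to_points(depth))
print("bird's-eye occupancy grid shape:", bev.shape)
```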

“There is a tendency in current practice to feed the data as-is to complex machine learning algorithms under the assumption that these algorithms can always extract the relevant information,” said co-author Bharath Hariharan, assistant professor of computer science. “Our results suggest that this is not necessarily true, and that we should give some thought to how the data is represented.”

“The self-driving car industry has been reluctant to move away from LiDAR, even with the high costs, given its excellent range accuracy – which is essential for safety around the car,” said Mark Campbell, the John A. Mellowes ’60 Professor and S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering and a co-author of the paper. “The dramatic improvement of range detection and accuracy, with the bird’s-eye representation of camera data, has the potential to revolutionize the industry.”

Giving robots a better feel for object manipulation


A new learning system developed by MIT researchers improves robots’ abilities to mold materials into target shapes and make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch – and it may have fun applications in personal robotics, such as modelling clay shapes or rolling sticky rice for sushi.

In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are “trained” using the models, to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators mainly focus on rigid objects and are unable to handle fluids or softer objects. Some more accurate physics-based simulators can handle diverse materials, but rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

In a paper being presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small portions of different materials – “particles” – interact when they’re poked and prodded. The model directly learns from data in cases where the underlying physics of the movements are uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of its touch. As the robot handles the objects, the model also helps to further refine the robot’s control.

In experiments, a robotic hand with two fingers, called “RiceGrip,” accurately shaped a deformable foam to a desired configuration – such as a “T” shape – that serves as a proxy for sushi rice. In short, the researchers’ model serves as a type of “intuitive physics” brain that robots can leverage to reconstruct three-dimensional objects somewhat similarly to how humans do.

“Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it. Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots,” says first author Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to build this type of intuitive model for robots to enable them to do what humans can do.”

“When children are 5 months old, they already have different expectations for solids and liquids,” adds co-author Jiajun Wu, a CSAIL graduate student. “That’s something we know at an early age, so maybe that’s something we should try to model for robots.”

Joining Li and Wu on the paper are: Russ Tedrake, a CSAIL researcher and a professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL and the Center for Brains, Minds, and Machines (CBMM); and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

A new “particle simulator” developed by MIT improves robots’ abilities to mold materials into simulated target shapes and interact with solid objects and liquids. This could give robots a refined touch for industrial applications or for personal robotics. | Credit: MIT

Dynamic graphs

A key innovation behind the model, called the “particle interaction network” (DPI-Nets), was creating dynamic interaction graphs, which consist of thousands of nodes and edges that can capture complex behaviors of so-called particles. In the graphs, each node represents a particle. Neighboring nodes are connected with each other using directed edges, which represent the interaction passing from one particle to the other. In the simulator, particles are hundreds of small spheres combined to make up some liquid or a deformable object.

The graphs are constructed as the basis for a machine-learning system called a graph neural network. In training, the model over time learns how particles in different materials react and reshape. It does so by implicitly calculating various properties for each particle — such as its mass and elasticity — to predict if and where the particle will move in the graph when perturbed.

The model then leverages a “propagation” technique, which instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material – rigid, deformable, and liquid – to shoot a signal that predicts particle positions at certain incremental time steps. At each step, it moves and reconnects particles, if needed.

For example, if a solid box is pushed, perturbed particles will be moved forward. Because all particles inside the box are rigidly connected with each other, every other particle in the object moves through the same calculated distance and rotation. Particle connections remain intact and the box moves as a single unit. But if an area of deformable foam is indented, the effect will be different: perturbed particles move forward a lot, surrounding particles move forward only slightly, and particles farther away won’t move at all. With liquids being sloshed around in a cup, particles may completely jump from one end of the graph to the other. The graph must learn to predict where and how much all affected particles move, which is computationally complex.
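An illustrative sketch of the graph-building step is shown below: each particle becomes a node, and directed edges connect particles that come within an interaction radius, so the edge set is rebuilt as particles move. This mirrors the idea described above in simplified form and is not the DPI-Nets implementation.

```python
# Illustrative sketch of building a dynamic interaction graph: each particle is a
# node, and directed edges connect particles within an interaction radius, so the
# edge set is rebuilt as particles move. Simplified; not the DPI-Nets implementation.
import numpy as np

def build_interaction_graph(positions, radius=0.08):
    """positions: Nx3 particle coordinates. Returns a list of directed edges (i, j)."""
    diffs = positions[:, None, :] - positions[None, :, :]   # pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)
    n = len(positions)
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and dists[i, j] < radius]

# As the material deforms, particles move and neighbors change, so the graph is
# reconstructed at every time step before the propagation (message-passing) step.
particles = np.random.rand(200, 3) * 0.3
print("edges this step:", len(build_interaction_graph(particles)))
```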

Shaping and adapting

In their paper, the researchers demonstrate the model by tasking the two-fingered RiceGrip robot with clamping target shapes out of deformable foam. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the position of the particles. Then, the model adds edges between particles and reconstructs the foam into a dynamic graph customized for deformable materials.

Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world position of the particles to the targeted position of the particles. Whenever the particles don’t align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.
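A hedged, one-dimensional toy version of that loop is sketched below: the robot picks the push its learned model predicts will best reduce the shape error, executes it, and corrects the model with the discrepancy it observes. The dynamics and update rule are stand-in assumptions, not the MIT system.

```python
# Hedged 1-D toy version of the shaping loop: choose the push the learned model
# predicts will best reduce the shape error, execute it, and correct the model with
# the observed discrepancy. Dynamics and update rule are stand-ins, not the MIT system.
import numpy as np

target = 0.0      # desired particle coordinate (a proxy for the target "T" shape)
state = 1.0       # currently observed particle coordinate
gain = 0.7        # the model's current belief about how effective a push is

for step in range(10):
    candidates = np.linspace(-0.5, 0.5, 21)
    predicted = state + gain * candidates            # model rollout for each candidate push
    push = candidates[np.argmin(np.abs(predicted - target))]
    new_state = state + 0.5 * push                   # the real foam responds differently (0.5)
    if abs(push) > 1e-6:
        gain += 0.5 * ((new_state - state) / push - gain)  # error signal tweaks the model
    state = new_state
print(f"final shape error: {abs(state - target):.3f}, learned push gain: {gain:.2f}")
```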

Next, the researchers aim to improve the model to help robots better predict interactions with partially observable scenarios, such as knowing how a pile of boxes will move when pushed, even if only the boxes at the surface are visible and most of the other boxes are hidden.

The researchers are also exploring ways to combine the model with an end-to-end perception module by operating directly on images. This will be a joint project with Dan Yamins’s group; Yamins recently completed his postdoc at MIT and is now an assistant professor at Stanford University. “You’re dealing with these cases all the time where there’s only partial information,” Wu says. “We’re extending our model to learn the dynamics of all particles, while only seeing a small portion.”

Editor’s Note: This article was republished with permission from MIT News.

Snake-inspired robot uses kirigami for swifter slithering

Bad news for ophiophobes: Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new and improved snake-inspired soft robot that is faster and more precise than its predecessor.

The robot is made using kirigami — a Japanese paper craft that relies on cuts to change the properties of a material. As the robot stretches, the kirigami surface “pops up” into a 3-D-textured surface, which grips the ground just like snake skin.

The first-generation robot used a flat kirigami sheet, which transformed uniformly when stretched. The new robot has a programmable shell, so the kirigami cuts can pop up as desired, improving the robot’s speed and accuracy.

The research was published in the Proceedings of the National Academy of Sciences.

“This is a first example of a kirigami structure with non-uniform pop-up deformations,” said Ahmad Rafsanjani, a postdoctoral fellow at SEAS and first author of the paper. “In flat kirigami, the pop-up is continuous, meaning everything pops at once. But in the kirigami shell, pop up is discontinuous. This kind of control of the shape transformation could be used to design responsive surfaces and smart skins with on-demand changes in their texture and morphology.”

The new research combined two properties of the material — the size of the cuts and the curvature of the sheet. By controlling these features, the researchers were able to program dynamic propagation of pop ups from one end to another, or control localized pop-ups.


This programmable kirigami metamaterial enables responsive surfaces and smart skins. Source: Harvard SEAS

In previous research, a flat kirigami sheet was wrapped around an elastomer actuator. In this research, the kirigami surface is rolled into a cylinder, with an actuator applying force at two ends. If the cuts are a consistent size, the deformation propagates from one end of the cylinder to the other. However, if the sizes of the cuts are chosen carefully, the skin can be programmed to deform in desired sequences.

“By borrowing ideas from phase-transforming materials and applying them to kirigami-inspired architected materials, we demonstrated that both popped and unpopped phases can coexist at the same time on the cylinder,” said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS and senior author of the paper. “By simply combining cuts and curvature, we can program remarkably different behavior.”

Related content: 10 biggest challenges in robotics

Next, the researchers aim to develop an inverse design model for more complex deformations.

“The idea is, if you know how you’d like the skin to transform, you can just cut, roll, and go,” said Lishuai Jin, a graduate student at SEAS and co-author of the article.

This research was supported in part by the National Science Foundation. It was co-authored by Bolei Deng.

Editor’s note: This article was republished from the Harvard John A. Paulson School of Engineering and Applied Sciences.

Understand.ai accelerates image annotation for self-driving cars


Using processed images, algorithms learn to recognize the real environment for autonomous driving. Source: understand.ai

Autonomous cars must perceive their environment accurately to move safely. The corresponding algorithms are trained using a large number of image and video recordings. Single image elements, such as a tree, a pedestrian, or a road sign must be labeled for the algorithm to recognize them. Understand.ai is working to improve and accelerate this labeling.

Understand.ai was founded in 2017 by computer scientist Philip Kessler, who studied at the Karlsruhe Institute of Technology (KIT), and Marc Mengler.

“An algorithm learns by examples, and the more examples exist, the better it learns,” stated Kessler. For this reason, the automotive industry needs a lot of video and image data to train machine learning for autonomous driving. So far, most of the objects in these images have been labeled manually by human staffers.

“Big companies, such as Tesla, employ thousands of workers in Nigeria or India for this purpose,” Kessler explained. “The process is troublesome and time-consuming.”

Accelerating training at understand.ai

“We at understand.ai use artificial intelligence to make labeling up to 10 times quicker and more precise,” he added. Although image processing is highly automated, final quality control is done by humans. Kessler noted that the “combination of technology and human care is particularly important for safety-critical activities, such as autonomous driving.”

The labels, also called annotations, in the image and video files have to match the real environment with pixel-level accuracy. The better the quality of the processed image data, the better the algorithm trained on it.
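As an illustration only, an annotation record for one image might look like the following hypothetical schema (not understand.ai’s actual format), with a class label, bounding box, and pixel-level polygon per object.

```python
# Hypothetical example of what one pixel-accurate annotation record could look like
# (illustrative schema, not understand.ai's format): a class label, bounding box,
# and polygon outline per object, in image coordinates.
annotation = {
    "image": "frame_000123.png",
    "objects": [
        {
            "label": "pedestrian",
            "bbox": [412, 188, 446, 290],   # x_min, y_min, x_max, y_max (pixels)
            "polygon": [[414, 190], [444, 191], [445, 288], [413, 289]],
        },
        {
            "label": "traffic_sign",
            "bbox": [702, 95, 730, 123],
            "polygon": [[702, 95], [730, 95], [730, 123], [702, 123]],
        },
    ],
}

# Training pipelines consume many such records; the tighter the polygons match the
# real object boundaries, the better the resulting detector.
for obj in annotation["objects"]:
    print(obj["label"], obj["bbox"])
```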

“As training images cannot be supplied for all situations, such as accidents, we now also offer simulations based on real data,” Kessler said.

Although understand.ai focuses on autonomous driving, it also plans to process image data for training algorithms to detect tumors or to evaluate aerial photos in the future. Leading car manufacturers and suppliers in Germany and the U.S. are among the startup’s clients.

The startup’s main office is in Karlsruhe, Germany, and some of its more than 50 employees work at offices in Berlin and San Francisco. Last year, understand.ai received $2.8 million (U.S.) in funding from a group of private investors.


Building interest in startups and partnerships

In 2012, Kessler started to study informatics at KIT, where he became interested in AI and autonomous driving when developing an autonomous model car in the KITCar students group. Kessler said his one-year tenure at Mercedes Research in Silicon Valley, where he focused on machine learning and data analysis, was “highly motivating” for establishing his own business.

“Nowhere else can you learn more in such a short period of time than in a startup,” said Kessler, who is 26 years old. “Recently, the interest of big companies in cooperating with startups has increased considerably.”

He said he thinks that Germany sleepwalked through the first wave of AI, in which it was used mainly in entertainment devices and consumer products.

“In the second wave, in which artificial intelligence is applied in industry and technology, Germany will be able to use its potential,” Kessler claimed.

Neural network helps autonomous car learn to handle the unknown



Shelley, Stanford’s autonomous Audi TTS, performs at Thunderhill Raceway Park. (Credit: Kurt Hickman)

Researchers at Stanford University have developed a new way of controlling autonomous cars that integrates prior driving experiences – a system that will help the cars perform more safely in extreme and unknown circumstances. Tested at the limits of friction on a racetrack using Niki, Stanford’s autonomous Volkswagen GTI, and Shelley, Stanford’s autonomous Audi TTS, the system performed about as well as an existing autonomous control system and an experienced racecar driver.

“Our work is motivated by safety, and we want autonomous vehicles to work in many scenarios, from normal driving on high-friction asphalt to fast, low-friction driving in ice and snow,” said Nathan Spielberg, a graduate student in mechanical engineering at Stanford and lead author of the paper about this research, published March 27 in Science Robotics. “We want our algorithms to be as good as the best skilled drivers—and, hopefully, better.”

While current autonomous cars might rely on in-the-moment evaluations of their environment, the control system these researchers designed incorporates data from recent maneuvers and past driving experiences – including trips Niki took around an icy test track near the Arctic Circle. Its ability to learn from the past could prove particularly powerful, given the abundance of autonomous car data researchers are producing in the process of developing these vehicles.

Physics and learning with a neural network

Control systems for autonomous cars need access to information about the available road-tire friction. This information dictates the limits of how hard the car can brake, accelerate and steer in order to stay on the road in critical emergency scenarios. If engineers want to safely push an autonomous car to its limits, such as having it plan an emergency maneuver on ice, they have to provide it with details, like the road-tire friction, in advance. This is difficult in the real world where friction is variable and often is difficult to predict.

To develop a more flexible, responsive control system, the researchers built a neural network that integrates data from past driving experiences at Thunderhill Raceway in Willows, California, and a winter test facility with foundational knowledge provided by 200,000 physics-based trajectories.
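One common way to blend the two, sketched below under toy assumptions, is to let a physics model make a first prediction and fit a learned correction to its residual error on logged driving data. This is only an illustration of the general idea, not the controller described in the paper.

```python
# Toy sketch of blending physics with learning: a physics prior (with an assumed,
# possibly wrong friction) makes a first prediction, and a correction is fitted to
# its residual error on logged data. Illustration only, not the Stanford controller.
import numpy as np

def physics_step(lateral_v, steer, friction_guess=1.0, dt=0.02):
    """Physics prior for lateral velocity with an assumed road-tire friction."""
    return lateral_v + dt * friction_guess * steer

# Logged driving data where the true friction is lower (think ice or snow).
rng = np.random.default_rng(0)
steer = rng.uniform(-1, 1, 2000)
v = rng.uniform(-2, 2, 2000)
v_next = v + 0.02 * 0.4 * steer                       # ground truth: friction = 0.4

# Fit a linear correction to the physics model's residual error.
residual = v_next - physics_step(v, steer)
features = np.stack([steer, v], axis=1)
w, *_ = np.linalg.lstsq(features, residual, rcond=None)

def blended_step(lateral_v, steer_cmd):
    correction = w[0] * steer_cmd + w[1] * lateral_v  # learned residual term
    return physics_step(lateral_v, steer_cmd) + correction

truth = 1.0 + 0.02 * 0.4 * 0.5
print("physics-only error:", abs(physics_step(1.0, 0.5) - truth))
print("blended error:     ", abs(blended_step(1.0, 0.5) - truth))
```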

The video above shows the neural network controller implemented on an autonomous Volkswagen GTI tested at the limits of handling (the ability of a vehicle to maneuver a track or road without skidding out of control) at Thunderhill Raceway.

“With the techniques available today, you often have to choose between data-driven methods and approaches grounded in fundamental physics,” said J. Christian Gerdes, professor of mechanical engineering and senior author of the paper. “We think the path forward is to blend these approaches in order to harness their individual strengths. Physics can provide insight into structuring and validating neural network models that, in turn, can leverage massive amounts of data.”

The group ran comparison tests for their new system at Thunderhill Raceway. First, Shelley sped around controlled by the physics-based autonomous system, pre-loaded with set information about the course and conditions. When compared on the same course during 10 consecutive trials, Shelley and a skilled amateur driver generated comparable lap times. Then, the researchers loaded Niki with their new neural network system. The car performed similarly running both the learned and physics-based systems, even though the neural network lacked explicit information about road friction.

In simulated tests, the neural network system outperformed the physics-based system in both high-friction and low-friction scenarios. It did particularly well in scenarios that mixed those two conditions.

Simple feedforward-feedback control structure used for path tracking on an automated vehicle. (Credit: Stanford University)

An abundance of data

The results were encouraging, but the researchers stress that their neural network system does not perform well in conditions outside the ones it has experienced. They say as autonomous cars generate additional data to train their network, the cars should be able to handle a wider range of conditions.

“With so many self-driving cars on the roads and in development, there is an abundance of data being generated from all kinds of driving scenarios,” Spielberg said. “We wanted to build a neural network because there should be some way to make use of that data. If we can develop vehicles that have seen thousands of times more interactions than we have, we can hopefully make them safer.”

Editor’s Note: This article was republished from Stanford University.
