Ultra-low power hybrid chips make small robots smarter


Robotics Summit & Expo 2019 logoKeynotes | Agenda | Speakers | Exhibitors | Register

An ultra-low power hybrid chip inspired by the brain could help give palm-sized robots the ability to collaborate and learn from their experiences. Combined with new generations of low-power motors and sensors, the new application-specific integrated circuit (ASIC) – which operates on milliwatts of power – could help intelligent swarm robots operate for hours instead of minutes.

To conserve power, the chips use a hybrid digital-analog time-domain processor in which the pulse-width of signals encodes information. The neural network IC accommodates both model-based programming and collaborative reinforcement learning, potentially providing the small robots greater capabilities for reconnaissance, search-and-rescue and other missions.

“We are trying to bring intelligence to these very small robots so they can learn about their environment and move around autonomously, without infrastructure,” said Arijit Raychowdhury, associate professor in Georgia Tech’s School of Electrical and Computer Engineering. “To accomplish that, we want to bring low-power circuit concepts to these very small devices so they can make decisions on their own. There is a huge demand for very small, but capable robots that do not require infrastructure.”

The cars demonstrated by Raychowdhury and graduate students Ningyuan Cao, Muya Chang and Anupam Golder navigate through an arena floored by rubber pads and surrounded by cardboard block walls. As they search for a target, the robots must avoid traffic cones and each other, learning from the environment as they go and continuously communicating with each other.

The cars use inertial and ultrasound sensors to determine their location and detect objects around them. Information from the sensors goes to the hybrid ASIC, which serves as the “brain” of the vehicles. Instructions then go to a Raspberry Pi controller, which sends instructions to the electric motors.

In palm-sized robots, three major systems consume power: the motors and controllers used to drive and steer the wheels, the processor, and the sensing system. In the cars built by Raychowdhury’s team, the low-power ASIC means that the motors consume the bulk of the power. “We have been able to push the compute power down to a level where the budget is dominated by the needs of the motors,” he said.

The team is working with collaborators on motors that use micro-electromechanical systems (MEMS) technology and are able to operate with much less power than conventional motors.

“We would want to build a system in which sensing power, communications and computer power, and actuation are at about the same level, on the order of hundreds of milliwatts,” said Raychowdhury, who is the ON Semiconductor Associate Professor in the School of Electrical and Computer Engineering. “If we can build these palm-sized robots with efficient motors and controllers, we should be able to provide runtimes of several hours on a couple of AA batteries. We now have a good idea what kind of computing platforms we need to deliver this, but we still need the other components to catch up.”

In time-domain computing, information is encoded in the width of pulses that switch between two voltage levels, rather than in the voltage level itself. That gives the circuits the energy-efficiency advantages of analog circuits with the robustness of digital devices.

“The size of the chip is reduced by half, and the power consumption is one-third what a traditional digital chip would need,” said Raychowdhury. “We used several techniques in both logic and memory designs for reducing power consumption to the milliwatt range while meeting target performance.”

With each pulse-width representing a different value, the system is slower than digital or analog devices, but Raychowdhury says the speed is sufficient for the small robots. (A milliwatt is a thousandth of a watt).
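To make the pulse-width idea concrete, here is a minimal software sketch (not the Georgia Tech ASIC design) of encoding a value as the duration of a pulse and recovering it by measuring that duration. The full-scale value and maximum width are illustrative assumptions.

```python
# Illustrative sketch of time-domain (pulse-width) encoding; parameters are placeholders.

def encode_pulse_width(value, full_scale=1.0, max_width_us=100.0):
    """Map a normalized value in [0, full_scale] to a pulse width in microseconds."""
    value = min(max(value, 0.0), full_scale)
    return (value / full_scale) * max_width_us

def decode_pulse_width(width_us, full_scale=1.0, max_width_us=100.0):
    """Recover the normalized value from a measured pulse width."""
    return (width_us / max_width_us) * full_scale

if __name__ == "__main__":
    w = encode_pulse_width(0.42)          # the signal stays at logic-high for w microseconds
    print(f"pulse width: {w:.1f} us")
    print(f"decoded value: {decode_pulse_width(w):.2f}")
```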

A robotic car controlled by an ultra-low power hybrid chip showed its ability to learn and collaborate with other robots. (Photo: Allison Carter/Georgia Tech)

“For these control systems, we don’t need circuits that operate at multiple gigahertz because the devices aren’t moving that quickly,” he said. “We are sacrificing a little performance to get extreme power efficiencies. Even if the compute operates at 10 or 100 megahertz, that will be enough for our target applications.”

The 65-nanometer CMOS chips accommodate both kinds of learning appropriate for a robot. The system can be programmed to follow model-based algorithms, and it can learn from its environment using a reinforcement system that encourages better and better performance over time – much like a child who learns to walk by bumping into things.

“You start the system out with a predetermined set of weights in the neural network so the robot can start from a good place and not crash immediately or give erroneous information,” Raychowdhury said. “When you deploy it in a new location, the environment will have some structures that it will recognize and some that the system will have to learn. The system will then make decisions on its own, and it will gauge the effectiveness of each decision to optimize its motion.”
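The learning loop Raychowdhury describes, starting from predetermined weights and then refining decisions based on their measured effectiveness, can be sketched with a standard tabular Q-learning update. The states, actions, and reward function below are placeholders, not the chip's actual algorithm.

```python
import numpy as np

N_STATES, N_ACTIONS = 16, 4
q_table = np.full((N_STATES, N_ACTIONS), 0.5)   # predetermined starting weights ("a good place")

def choose_action(state, epsilon=0.1):
    if np.random.rand() < epsilon:               # occasional exploration of the environment
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(q_table[state]))        # otherwise exploit what has been learned

def update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Gauge the effectiveness of each decision and adjust the policy."""
    target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (target - q_table[state, action])
```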

Communication between the robots allows them to collaborate in seeking a target.

“In a collaborative environment, the robot not only needs to understand what it is doing, but also what others in the same group are doing,” he said. “They will be working to maximize the total reward of the group as opposed to the reward of the individual.”
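A minimal sketch of that collaborative idea: each robot is credited with the team's total reward rather than its own, so every policy is optimized for the group. The reward values are made up for the example.

```python
def team_reward(individual_rewards):
    """Every robot receives the summed reward of the whole group."""
    total = sum(individual_rewards)
    return [total for _ in individual_rewards]

# Example: robot 2 found the target, but all three robots share the credit.
print(team_reward([0.0, 0.0, 1.0]))   # -> [1.0, 1.0, 1.0]
```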

With their ISSCC demonstration providing a proof-of-concept, the team is continuing to optimize designs and is working on a system-on-chip to integrate the computation and control circuitry.

“We want to enable more and more functionality in these small robots,” Raychowdhury added. “We have shown what is possible, and what we have done will now need to be augmented by other innovations.”

Editor’s Note: This article was republished from Georgia Tech Research Horizons.


How sea slugs could lead to more energy-efficient robots


Robotics Summit & Expo 2019 logoKeynotes | Agenda | Speakers | Exhibitors | Register

What do pizza slices, sea slugs and one possible design for future soft-bodied robots have in common? They all have frilly surfaces, and new insights about the surprising geometry of frilly surfaces may help a future generation of energy-efficient and extremely flexible soft-body robots move.

The complex folds of a frilly surface, like those of coral reefs or kale leaves, form what mathematicians call an "inflected nonsmooth surface": a surface that changes the direction in which it bends.

“People have looked at these hyperbolic surfaces for 200 years, but nobody has thought about the role of smoothness in relation to how these things move, their mechanics,” said University of Arizona mathematician Shankar Venkataramani. “Nobody saw a relevance to these things until now.”

Venkataramani will present his group’s research on nonsmooth surfaces, sea slugs and possible robotic applications at the 2019 American Physical Society March Meeting in Boston.

Until recently, Venkataramani said, physicists generally assumed that natural frills occur when the balanced forces between simultaneous bending and stretching of a sheet cause the surface to crumple. However, Venkataramani, in recent work with doctoral students John Gemmer and Toby Shearman and Hebrew University physicist Eran Sharon, showed that there can be nonsmooth surfaces that are simultaneously unstretched yet frilly.

“The idea that these frilly surfaces don’t have stretching in them, that was completely counterintuitive,” he said.

And, he noted, the research showed that changes from one form to another appear to require very little energy. This is key since the ability to change the geometry of surfaces has big implications for their strength and thus ability to act on the surroundings. Pick up a soggy slice of pizza and it creates a mess but “put a little curvature and it becomes stiff and you can eat it,” he said.

Having developed the mathematics to describe these surfaces, his group modeled nonsmooth thin films with six up-and-down portions and wondered how they would move.

“We realized that nature already solved the problem millions of years ago. Some sea slugs and marine worms use this geometry to get around,” Venkataramani said.

The challenge now, he said, is determining exactly how the distinctive swimming gait of these soft-bodied marine invertebrates, such as the Spanish dancer sea slug, is related to their nonsmooth geometry.

The answer may provide “a potential avenue for building soft robots that are energy-efficient and extremely flexible,” Venkataramani said.

Editor’s Note: This article was republished from the American Physical Society.


Fears of job-stealing robots are misplaced, say experts

Artificial intelligence will shift jobs, not replace them. | Reuters/Issei Kato

Some good news: The robots aren’t coming for your job. Experts at the Conference on the Future of Work at Stanford University last month said that fears that rapid advances in artificial intelligence, machine learning, and automation will leave all of us unemployed are vastly overstated.

But concerns over growing inequality and the lack of opportunity for many in the labor force — serious matters linked to a variety of structural changes in the economy — are well-founded and need to be addressed, four scholars on artificial intelligence and the economy told an audience at Stanford Graduate School of Business (GSB).

That’s not to say that AI isn’t having a profound effect on many areas of the economy. It is, of course. But understanding the link between the two trends is difficult, and it’s easy to make misleading assumptions about the kinds of jobs that are in danger of becoming obsolete.

“Most jobs are more complex than [many people] realize,” said Hal Varian, Google’s chief economist, during the forum, which was sponsored by the Stanford Institute for Human-Centered Artificial Intelligence.

Today’s workforce is sharply divided by levels of education, and those who have not gone beyond high school are affected the most by long-term changes in the economy, said David Autor, professor of economics at the Massachusetts Institute of Technology.

“It’s a great time to be young and educated. But there’s no clear land of opportunity” for adults who haven’t been to college, said Autor during his keynote presentation.

When predicting future labor market outcomes, it is important to consider both sides of the supply-and-demand equation, said Varian, founding dean of the School of Information at the University of California, Berkeley. Most popular discussion around technology focuses on factors that decrease demand for labor by replacing workers with machines.

However, demographic trends that point to a substantial decrease in the supply of labor are potentially larger in magnitude, he said. Demographic trends are also easier to predict, since we already know, aside from immigration and catastrophes, how many 40-year-olds will live in a country 30 years from now.

Comparing the most aggressive expert estimates about the impact of automation on labor supply with demographic trends that point to a workforce reduction, Varian said he found that the demographic effect on the labor market is 53% larger than the automation effect. Thus, real wages are more likely to increase than to decrease when both factors are considered.

Automation’s slow crawl

Why hasn’t automation had a more significant effect on the economy to date? The answer isn’t simple, but there’s one key factor: Jobs are made up of a myriad of tasks, many of which are not easily automated.

“Automation doesn’t generally eliminate jobs,” Varian said. “Automation generally eliminates dull, tedious, and repetitive tasks. If you remove all the tasks, you remove the job. But that’s rare.”

Consider the job of a gardener. Gardeners have to mow and water a lawn, prune rose bushes, rake leaves, eradicate pests, and perform a variety of other chores. Mowing and watering are easy tasks to automate, but other chores would cost too much to automate or would be beyond the capabilities of machines — so gardeners are still in demand.


Some jobs, including within the service industry, seem ripe for automation. However, a hotel in Nagasaki, Japan, was the subject of amused news reports when it was forced to “fire” its incompetent robot receptionists and room attendants.

Jobs, unlike repetitive tasks, tend not to disappear. In 1950, the U.S. Census Bureau listed 250 separate jobs. Since then, the only one to be completely eliminated is that of elevator operator, Varian observed. But some of the tasks carried out by elevator operators, such as greeting visitors and guiding them to the right office, have been distributed to receptionists and security guards.

Even the automotive industry, which accounts for roughly half of all robots used by industry, has found that automation has its limits.

“Excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated,” Elon Musk, the founder and chief executive of Tesla Motors, said last year.

Robotics Summit & Expo 2019 logoKeynotes | Speakers | Exhibitors | Register

The pace of jobs change

Technology has always changed rapidly, and that’s certainly the case today. However, there’s often a lag between the time a new machine or process is invented and when it reverberates in the workplace.

“The workplace isn’t evolving as fast as we thought it would,” Paul Oyer, a Stanford GSB professor of economics and senior fellow at the Stanford Institute for Economic Policy Research, said during a panel discussion at the forum. “I thought the gig economy would take over, but it hasn’t. And I thought that by now people would find their ideal mates and jobs online, but that was wrong too.”

Consider the leap from steam power to electric power. When electricity first became available, some factories replaced single large steam engines on the factory floor with a single electric motor. That didn’t make a significant change to the nature of factory work, says Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy. But when machinery throughout the factory was electrified, work changed radically.

The rise of the service sector

Employment in some sectors in which employees tend to have less education is still strong, particularly the service sector. As well-paid professionals settle in cities, they create a demand for services and new types of jobs. MIT’s Autor called these occupations “wealth work jobs,” which include employment for everything from baristas to horse exercisers.

The 10 most common occupations in the U.S. include such jobs as retail salespersons, office clerks, nurses, waiters, and other service-focused work. Notably, traditional occupations, such as factory and other blue-collar work, no longer make the list.

Looming over all of the changes to the labor force is the stark fact that birth rates in the U.S. are at an all-time low, said Varian. As has been widely reported, the aging of the baby boom generation creates demand for service jobs but leaves fewer workers actively contributing labor to the economy.

Even so, the U.S. workforce is in much better shape than those of other industrialized countries. The so-called dependency ratio, the number of people over 65 for every 100 people of working age, will be much higher in Japan, Spain, South Korea, Germany, and Italy by 2050. And not coincidentally, said Varian, countries with high dependency ratios are looking the hardest at automating jobs.
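The dependency ratio as defined above is a simple calculation; the population figures in this quick illustration are invented for the example, not real statistics.

```python
def dependency_ratio(over_65, working_age):
    """People over 65 per 100 people of working age."""
    return 100 * over_65 / working_age

# Hypothetical country: 36 million people over 65, 60 million of working age.
print(round(dependency_ratio(over_65=36_000_000, working_age=60_000_000), 1))  # -> 60.0
```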

As the country ages, society will have to find new, more efficient ways to train and expand the workforce, the panelists said. It will also have to better accommodate the growing number of women in the workforce, many of whom are still held back by family and household responsibilities.

The robots may not be taking over just yet, but advances in artificial intelligence and machine learning will eventually become more of a challenge to the workforce. Still, it’s heartening to be reminded that, for now, “humans are underrated.”

Editor’s note: This piece was originally published by Stanford Graduate School of Business.


Brain code can now be copied for AI, robots, say researchers

KAIST researchers exploring the human brain as a model for robots, from left: Ph.D. candidate Su Jin An, Dr. Jee Hang Lee, and Prof. Sang Wan Lee. Source: KAIST

Researchers at the Korea Advanced Institute of Science and Technology (KAIST), the University of Cambridge, Japan’s National Institute for Information and Communications Technology (NICT), and Google DeepMind have argued that our understanding of how humans make intelligent decisions has now reached a critical point. Robot intelligence can be significantly enhanced by mimicking strategies that the human brain uses when we make decisions in our everyday lives, they said last week.

In our rapidly changing world, both humans and autonomous robots constantly need to learn and adapt to new environments. The difference is that humans can make decisions suited to each unique situation, whereas robots still rely on predetermined data to make decisions.

Rapid progress has been made in strengthening the physical capability of robots. However, their central control systems, which govern how robots decide what to do at any one time, are still inferior to those of humans. In particular, they often rely on pre-programmed instructions to direct their behavior and lack the hallmark of human behavior: the flexibility and capacity to learn and adapt quickly.

Applying neuroscience to the robot brain

Applying neuroscience to robotics, Prof. Sang Wan Lee from the Department of Bio and Brain Engineering at KAIST and Prof. Ben Seymour from the University of Cambridge and NICT made the case that robots should be designed based on the principles of the human brain. They argue that robot intelligence can be significantly enhanced by mimicking strategies that the human brain uses during decision-making processes in everyday life.

Importing human-like intelligence into robots has always been difficult without knowing the computational principles by which the human brain makes decisions; in other words, without knowing how to translate brain activity into computer code for the robots' "brains."

Modeling robotic intelligence on the human brain

Brain-inspired solutions to robot learning. Neuroscientific views on various aspects of learning and cognition converge and create a new idea called “prefrontal metacontrol,” which can inspire researchers to design learning agents for key challenges in robotics such as performance-efficiency-speed, cooperation-competition, and exploration-exploitation trade-offs (Science Robotics)

However, researchers now argue that, following a series of recent discoveries in the field of computational neuroscience, there is enough of this code to effectively write it into robots. One of the examples discovered is the human brain’s “meta-controller.” It is a mechanism by which the brain decides how to switch between different subsystems to carry out complex tasks.
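A hedged sketch of that meta-controller idea: a supervisor that switches between two decision subsystems (for example, a fast habitual controller and a slower deliberative one) based on how reliable each has recently been. The reliability update and controllers below are placeholders, not the published prefrontal metacontrol model.

```python
class MetaController:
    def __init__(self, controllers):
        # controllers: dict mapping a name to a callable, e.g. {"habitual": f, "deliberative": g}
        self.controllers = controllers
        self.reliability = {name: 0.5 for name in controllers}

    def act(self, observation):
        # Delegate to whichever subsystem currently looks most reliable.
        best = max(self.reliability, key=self.reliability.get)
        return best, self.controllers[best](observation)

    def report_error(self, name, prediction_error, rate=0.1):
        # Subsystems that predict outcomes well gain control; ones that err lose it.
        self.reliability[name] += rate * ((1.0 - abs(prediction_error)) - self.reliability[name])
```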

Another example is the human pain system, which allows us to protect ourselves in potentially hazardous environments.

“Copying the brain’s code for these could greatly enhance the flexibility, efficiency, and safety of robots,” said Prof. Lee.

An interdisciplinary approach

The team argued that this interdisciplinary approach will provide just as many benefits to neuroscience as to robotics. The recent explosion of interest in what lies behind psychiatric disorders such as anxiety, depression, and addiction has given rise to a set of sophisticated theories that are complex and difficult to test without some sort of advanced simulation platform.


Overview of neuroscience-robotics approach for decision-making. The figure details key areas for interdisciplinary study (Current Opinion in Behavioral Sciences)

“We need a way of modeling the human brain to find how it interacts with the world in real-life to test whether and how different abnormalities in these models give rise to certain disorders,” explained Prof. Seymour. “For instance, if we could reproduce anxiety behavior or obsessive-compulsive disorder in a robot, we could then predict what we need to do to treat it in humans.”

The team expects that producing robot models of different psychiatric disorders, in a similar way to how researchers use animal models now, will become a key future technology in clinical research.

Sympathy for the robot

The team also noted that there may be other benefits to humans and intelligent robots learning, acting, and behaving in the same way. In future societies in which humans and robots live and work amongst each other, the ability to cooperate and empathize with robots might be much greater if we feel they think like us.

“We might think that having robots with the human traits of being a bit impulsive or overcautious would be a detriment, but these traits are an unavoidable by-product of human-like intelligence,” said Prof. Seymour. “And it turns out that this is helping us to understand human behavior as human.”

The framework for achieving this brain-inspired artificial intelligence was published in two journals, Science Robotics on Jan. 16 and Current Opinion in Behavioral Sciences on Feb. 6, 2019.


Robotics cluster in Odense, Denmark, offers metrics for growth

What makes a robotics cluster successful? Proximity to university research and talent, government support of entrepreneurship, and a focus on industry end users are all important. Around the world, regions have proclaimed initiatives to become “the next Silicon Valley.” However, there have been relatively few metrics to describe robotics hubs — until now.

This week, Odense Robotics in Denmark released a report on the economic returns generated by its member companies. Both the amount of exports and the number of employees have increased by about 50 percent, according to Mikkel Christoffersen, business manager at Odense Robotics.

At the same time, the report is realistic about the ongoing challenges facing every robotics cluster, including finding qualified job candidates. As locales from India to Israel and Canada to China look to stimulate innovation, they should look at their own mixes of people, partnerships, and economic performance.

Membership and money

The Odense robotics cluster currently has 129 member companies and more than 10 research and educational institutions. That’s up from 85 in 2015 and comparable with Massachusetts, which is home to more than 150 robotics companies. The Massachusetts Robotics Cluster said it had 122 members as of 2016.

Silicon Valley Robotics says it has supported 325 robot startups, and “Roboburgh” in Pittsburgh includes more than 50 organizations.

In terms of economic performance, the Odense robotics cluster had 763 million euros ($866.3 million U.S.) in turnover, or revenue, in 2017. It expects another 20 percent increase by 2021.

Odense has been friendly to startups, with 64 founded since 2010. The Odense Robotics StartUp Hub has helped to launch 15 companies. Seventy companies, or 54 percent of those in the Odense area, have fewer than 10 employees.

Total investments in the Danish robotics cluster have risen from 322 million euros ($365.6 million) in 2015 to 750 million euros ($851.7 million) last year, with 42 percent coming from investors rather than public funding or loans.

Funding for companies in the Odense robotics cluster continues to rise. Source: Odense Robotics

In addition, 71 local companies were robotics producers, up from 58 in 2017. The next largest category was integrators, at 23. The region also boasted 509 million euros ($577.9 million) in exports in 2017, and 66 percent of its members expect to begin exporting.

Market focus

The Odense Robotics report notes that a third of its member companies work with collaborative and mobile robots, representing its focus on manufacturing and supply chain customers. Those are both areas of especially rapid growth in the wider robotics ecosystem.

The global collaborative robotics market will experience a compound annual growth rate (CAGR) of 49.8 percent between 2016 and 2025, compared with a CAGR of 12.1 percent for industrial robots, predicts ABI Research. Demand from small and midsize enterprises will lead revenues to exceed $1.23 billion in 2025, said ABI.
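As a quick sanity check on what a 49.8 percent compound annual growth rate implies over that 2016-2025 window, the sketch below projects compounded growth. The base-year figure is back-solved for illustration only; the only figure from the source is ABI's $1.23 billion forecast for 2025.

```python
def project(base_value, cagr, years):
    """Compound a base value forward at a constant annual growth rate."""
    return base_value * (1 + cagr) ** years

# e.g., an illustrative market worth $33M in 2016 growing at 49.8% per year for 9 years:
print(round(project(33e6, 0.498, 9) / 1e9, 2), "billion")   # -> roughly 1.25 billion
```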

Odense-based Universal Robots A/S is the global market leader in cobot arms. Odense-based gripper maker OnRobot A/S was formed last year by the merger of three companies, and it has since acquired Purple Robotics and raised hundreds of millions in additional funding.


OnRobot’s lineup of robotic grippers. Source: OnRobot

Similarly, the market for autonomous mobile robots will have a 24 percent CAGR between 2018 and 2022, according to a Technavio forecast. Odense-based Mobile Industrial Robots ApS (MiR) has tripled its sales in each of the past two years.

Both Universal Robots and MiR have broadened their international reach, thanks to ownership by Teradyne Inc. in North Reading, Mass.

Robotics cluster must address talent shortage

Odense Robotics said that its robotics cluster employs 3,600 people today and expects that figure to rise to 4,900 by next year. In comparison, the Massachusetts robotics cluster employed about 4,700 people in 2016.


The Danish robotics cluster is a significant employer. Source: Odense Robotics

Even as the numbers of people grow at larger robotics companies (with 50 or more employees) or abroad, businesses in southern Denmark have to look far afield to meet their staffing needs. More than a third, or 39 percent, said they expect to hire from outside of Denmark, and 78 percent said that finding qualified recruits is the biggest barrier to growth.

The average age of employees in the Odense robotics cluster reflects experience, as well as difficulty recruiting. Fifty-five percent of them are age 40 to 60, while only 18 percent are under 30.

This reflects a larger problem for robotics developers and vendors. Even with STEM (science, technology, engineering, and mathematics) programs and attention paid to education, the demand for hardware and software engineers worldwide outstrips the available pool.

The University of Southern Denmark (SDU) is working to address this. It has increased admissions to its bachelor's degrees in engineering and science and its master of science programs from 930 in 2015 to 1,235 last year. The university also launched a bachelor's in engineering for robot systems, admitting 150 students since 2017.


The Danish Technological Institute is expanding its facilities in Odense this year. Source: DTI

Another lesson that other robotics clusters can learn from Odense: 41 percent of workers at robotics firms there went to vocational schools rather than universities.

Partnerships and prospects

Close collaboration with research institutions, fellow robotics cluster members, and international companies has helped the Odense hub grow. Seventy-eight percent of cluster members collaborate among themselves, according to the report. Also, 38 percent collaborate with more than 10 companies.

The Odense robotics cluster grew out of a partnership between shipping giant Maersk A/S and SDU. The Maersk Mc-Kinney Moller Institute at SDU continues to conduct research into robotics, artificial intelligence, and systems for healthcare and the energy industry. It recently added aerial drones, soft robotics, and virtual reality to its portfolio.

Last year, the institute invested 13.4 million euros ($15.22 million) in an Industry 4.0 laboratory, and an SDU team won in the industrial robot category at the World Robot Summit Challenge in Japan.

Examples such as Universal Robots and MiR, as well as Denmark’s central position in Northern Europe, are encouraging companies to look for partners. Collaborating with companies inside and outside the Odense robotics cluster is a top priority of members, with 98 percent planning to make it a strategic focus in the next three years.

Of course, the big opportunity and competitive challenge is China, which is potentially a much bigger market than the U.S. or Europe and is trying to build up its own base of more than 800 robotics companies.

It’s only through collective action around robotics clusters that smart regions, large and small, can find their niches, build talent, and maximize the returns on their investments.

Editor’s note: A panel at the Robotics Summit & Expo in Boston on June 5 and 6, 2019, will feature speakers from different robotics clusters. Register now to attend.


Inside NVIDIA’s new robotics research lab


NVIDIA CEO Jensen Huang (left) and Senior Director of Robotics Research Dieter Fox at NVIDIA’s robotics lab.

The Robot Report named NVIDIA a must-watch robotics company in 2019 due to its new Jetson AGX Xavier Module that it hopes will become the go-to brain for next-generation robots. Now there’s even more reason to keep an eye on NVIDIA’s robotics moves: the Santa Clara, Calif.-based chipmaker just opened its first full-blown robotics research lab.

Located in Seattle just a short walk from the University of Washington, NVIDIA’s robotics lab is tasked with driving breakthrough research to enable next-generation collaborative robots that operate robustly and safely among people. NVIDIA’s robotics lab is led by Dieter Fox, senior director of robotics research at NVIDIA and professor in the UW Paul G. Allen School of Computer Science and Engineering.

“All of this is working toward enabling the next generation of smart manipulators that can also operate in open-ended environments where not everything is designed specifically for them,” said Fox. “By pulling together recent advances in perception, control, learning and simulation, we can help the research community solve some of the greatest challenges in robotics.”

The 13,000-square-foot lab will be home to 50 roboticists, consisting of 20 NVIDIA researchers plus visiting faculty and interns from around the world. NVIDIA wants robots to be able to naturally perform tasks alongside people in real-world, unstructured environments. To do that, the robots need to be able to understand what a person wants to do and figure out how to help achieve a goal.

The idea for NVIDIA’s robotics lab came in the summer of 2017 in Hawaii. Fox and NVIDIA CEO Jensen Huang met at CVPR, an annual computer vision conference, and discussed the exciting areas and difficult problems ongoing in robotics.

“NVIDIA dedicates itself to solving the very difficult challenges that computing can solve. And robotics is unquestionably one of the final frontiers of artificial intelligence. It requires the convergence of so many types of technologies,” Huang told The Robot Report. “We wanted to dedicate ourselves to make a contribution to the field of robotics. Along the way it’s going to spin off all kinds of great computer science and AI knowledge. We really hope the technology that will be created will allow industries from healthcare to manufacturing to transportation and logistics to make a great advance.”

NVIDIA said there are about a dozen projects currently underway, and NVIDIA will open source its research papers. Fox said NVIDIA is primarily interested, early on at least, in sharing its software developments with the robotics community. “Some of the core techniques you see in the kitchen demo will be wrapped up into really robust components,” Fox said.

We attended the official opening of NVIDIA’s robotics research lab. Here’s a peek inside.

Mobile manipulator in the kitchen


NVIDIA’s mobile manipulator includes a Franka Emika Panda cobot on a Segway RMP 210 UGV. (Credit: NVIDIA)

The main test area inside NVIDIA’s robotics lab is a kitchen the company purchased from IKEA. A mobile manipulator, consisting of a Franka Emika Panda cobot arm on a Segway RMP 210 UGV, will try its hand at increasingly difficult tasks, ranging from retrieving objects from cabinets to learning how to clean the dining table to helping a person cook a meal.

During the open house, the mobile manipulator consistently fetched objects and put them in a drawer, opening and closing the drawer with its gripper. Fox admitted this first task is somewhat easy. The robot uses deep learning to detect specific objects solely based on its own simulation and doesn’t require any manual data labeling. The robot uses the NVIDIA Jetson platform for navigation and performs real-time inference for processing and manipulation on NVIDIA TITAN GPUs. The deep learning-based perception system was trained using the cuDNN-accelerated PyTorch deep learning framework.
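The article describes a PyTorch-based detection network running GPU inference. The sketch below shows what GPU-accelerated detection inference looks like in PyTorch; the pretrained COCO model is a stand-in assumption, since NVIDIA's actual network was trained on its own simulation-generated data.

```python
import torch
import torchvision

# Minimal detection-inference sketch in the spirit of the perception pipeline described above.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).to(device).eval()

def detect(image_tensor, score_threshold=0.8):
    """image_tensor: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        outputs = model([image_tensor.to(device)])[0]
    keep = outputs["scores"] > score_threshold
    return outputs["boxes"][keep].cpu(), outputs["labels"][keep].cpu()
```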

Fox also made it clear why NVIDIA chose to test a mobile manipulator in a kitchen. “The idea to choose the kitchen was not because we think the kitchen is going to be the killer app in the home,” said Fox. “It was really just a stand-in for these other domains.” A kitchen is a structured environment, but Fox said it is easy to introduce new variables to the robot in the form of more complex tasks, such as dealing with unknown objects or assisting a person who is cooking a meal.

Deep Object Pose Estimation


NVIDIA Deep Object Pose Estimation (DOPE) system. (Credit: NVIDIA)

NVIDIA introduced its Deep Object Pose Estimation (DOPE) system in October 2018 and it was on display in Seattle. With NVIDIA’s algorithm and a single image, a robot can infer the 3D pose of an object for the purpose of grasping and manipulation. DOPE was trained solely on synthetic data.

One of the key challenges of synthetic data is bridging the reality gap so that networks trained on synthetic data operate correctly with real-world data. NVIDIA said its one-shot deep neural network has accomplished that, albeit on a limited basis. The system estimates an object’s pose in two steps. First, the deep neural network estimates belief maps of 2D keypoints of all the objects in the image coordinate system. Next, peaks from these belief maps are fed to a standard perspective-n-point (PnP) algorithm to estimate the 6-DoF pose of each object instance.
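Here is a hedged sketch of that second stage only: once a network has produced 2D keypoints for an object's 3D bounding-box corners, a standard PnP solver recovers the 6-DoF pose. The keypoints, object dimensions, and camera intrinsics below are placeholders, not DOPE's actual values.

```python
import numpy as np
import cv2

object_points = np.array([                     # 3D corners of the object's bounding box (meters)
    [-0.05, -0.05, -0.05], [0.05, -0.05, -0.05], [0.05, 0.05, -0.05], [-0.05, 0.05, -0.05],
    [-0.05, -0.05,  0.05], [0.05, -0.05,  0.05], [0.05, 0.05,  0.05], [-0.05, 0.05,  0.05],
], dtype=np.float32)

image_points = np.array([                      # 2D keypoints predicted by the network (pixels)
    [310, 240], [350, 242], [352, 280], [312, 278],
    [308, 236], [348, 238], [350, 276], [310, 274],
], dtype=np.float32)

camera_matrix = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
print("rotation (Rodrigues):", rvec.ravel(), "translation (m):", tvec.ravel())
```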

Read our interview about the DOPE system with Stan Birchfield, a Principal Research Scientist at NVIDIA, here.

Tactile sensing

NVIDIA had two demos showcasing tactile sensing, which is a missing element for commercialized robotic grippers. One demo featured a ReFlex TakkTile 2 gripper from RightHand Robotics, which recently raised $23 million for its piece-picking technology. The ReFlex TakkTile 2 is a ROS-compatible robotic gripper with three fingers. The gripper has three bending degrees of freedom and one coupled rotational degree of freedom. Sensing capabilities include normal pressure sensors, rotational proximal joint encoders, and fingertip IMUs.

The other demo, run by NVIDIA senior robotics researcher Karl Van Wyk, featured SynTouch tactile sensors retrofitted onto an Allegro robotic hand from South Korea-based Wonik Robotics and a KUKA LBR iiwa cobot. “It almost feels like a pet!” said Huang as he gently touched the robotic fingers, causing them to pull back. “It’s surprisingly therapeutic. Can I have one?”

Van Wyk said tactile sensors are starting to trickle out of research labs and into the real world. “There is a lot of hardening and integration that needs to happen to get them to hold up in the real world, but we’re making a lot of progress there. The world we live in is designed for us, not robots.”

The KUKA LBR iiwa wasn’t using any vision to sense its environment. “The robot can’t see that we’re around it, but we want it to be constantly sensing and reacting to its environment. The arm has torque sensing in all of the joints, so it can feel that I’m pushing on it and react to that. It doesn’t need to see me to react to me.

“We have a 16-motor hand with three primary fingers and an opposable thumb, so it’s like our hands. The reason you want a more complicated gripper like this is you want to eventually be able to manipulate objects in your hands like we do on a daily basis. It is very useful and makes solving physical tasks more efficient. The SynTouch sensors measure what’s going on when we’re touching and manipulating something. Keying off those sensors is important for control. If we can feel the object, we can re-adjust the grip and the finger location.”
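A simple illustrative control loop for the idea in that quote: use fingertip pressure readings to servo the grip, tightening when contact is too light and easing off when it is too firm. The target pressure, gain, and sensor values are assumptions; real sensor reads and motor commands are stubbed out.

```python
TARGET_PRESSURE = 0.6       # normalized contact pressure we want at each fingertip
GAIN = 0.05                 # how aggressively to correct toward the target

def adjust_grip(finger_positions, pressures):
    """Return updated finger closure commands given current tactile pressures."""
    commands = []
    for pos, p in zip(finger_positions, pressures):
        error = TARGET_PRESSURE - p          # positive -> grip too light, close further
        commands.append(pos + GAIN * error)
    return commands

print(adjust_grip([0.30, 0.30, 0.30], [0.20, 0.65, 0.90]))
```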

Human-robot interaction


Huang tests a control system that enables a robot to mimic human movements. (Credit: NVIDIA)

Another interesting demo was NVIDIA’s “Proprioception Robot,” which is the work of Dr. Madeline Gannon, a multidisciplinary designer nicknamed the “Robot Whisperer” who is inventing better ways to communicate with robots. Using a two-armed ABB YuMi and a Microsoft Kinect on the floor underneath the robot, the system would mimic the movements of the human in front of it.

“With YuMi, you don’t need a roboticist to program a robot. Using NVIDIA’s motion generation algorithms, we can have engaging experiences with lifelike robots.”

You might have heard of Gannon’s recent work at the World Economic Forum in September 2018. She installed 10 industrial robot arms in a row, linking them through a single central controller. Using depth sensors at the bases of the robots, the arms tracked and responded to the movements of people passing by.
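A small sketch of the mimicry idea behind these installations: map a tracked human wrist position from a depth sensor into the robot's workspace and use it as an end-effector target. The scaling, offset, and workspace limits are illustrative placeholders, not Gannon's or NVIDIA's actual pipeline.

```python
import numpy as np

WORKSPACE_MIN = np.array([0.2, -0.4, 0.1])   # robot-frame bounds in meters (assumed)
WORKSPACE_MAX = np.array([0.8,  0.4, 0.7])

def wrist_to_target(wrist_xyz_m, scale=0.6, offset=np.array([0.5, 0.0, 0.4])):
    """Convert a sensed wrist position (sensor frame, meters) into a clamped robot target."""
    target = scale * np.asarray(wrist_xyz_m) + offset
    return np.clip(target, WORKSPACE_MIN, WORKSPACE_MAX)

print(wrist_to_target([0.1, -0.3, 0.2]))
```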

“There are so many interesting things that we could spin off in our pursuit of a general AI robot,” said Huang. “For example, it’s very likely that in the near future you’ll have ‘exo-vehicles’ around you, whether it’s an exoskeleton or an exo-something that helps people who are disabled, or helps us be stronger than we are.”


Foldable drone could aid search and rescue missions



This foldable drone can squeeze through gaps and then go back to its previous shape, all the while continuing to fly. (Credit: UZH)

Inspecting a damaged building after an earthquake or during a fire is exactly the kind of job that human rescuers would like drones to do for them. A flying robot could look for people trapped inside and guide the rescue team towards them. But the drone would often have to enter the building through a crack in a wall, a partially open window, or through bars – something the typical size of a drone does not allow.

To solve this problem, researchers from the Robotics and Perception Group at the University of Zurich and the Laboratory of Intelligent Systems at EPFL created a new kind of drone. Both groups are part of the National Centre of Competence in Research (NCCR) Robotics funded by the Swiss National Science Foundation. The researchers wrote a paper about the project called “The Foldable Drone: A Morphing Quadrotor that can Squeeze and Fly.”

Inspired by birds that fold their wings in mid-air to cross narrow passages, the new drone can squeeze itself to pass through gaps and then go back to its previous shape, all the while continuing to fly. And it can even hold and transport objects along the way.

Mobile arms can fold around the main frame

“Our solution is quite simple from a mechanical point of view, but it is very versatile and very autonomous, with onboard perception and control systems,” explains Davide Falanga, researcher at the University of Zurich and the paper’s first author. In comparison to other drones, this morphing drone can maneuver in tight spaces and guarantee a stable flight at all times.

The Zurich and Lausanne teams worked in collaboration and designed a quadrotor with four propellers that rotate independently, mounted on mobile arms that can fold around the main frame thanks to servo-motors. The ace in the hole is a control system that adapts in real time to any new position of the arms, adjusting the thrust of the propellers as the center of gravity shifts.

“The morphing drone can adopt different configurations according to what is needed in the field,” adds Stefano Mintchev, co-author and researcher at EPFL. The standard configuration is X-shaped, with the four arms stretched out and the propellers at the widest possible distance from each other. When faced with a narrow passage, the drone can switch to an “H” shape, with all arms lined up along one axis, or to an “O” shape, with all arms folded as close as possible to the body. A “T” shape can be used to bring the onboard camera mounted on the central frame as close as possible to objects that the drone needs to inspect.
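A hedged sketch of the adaptive-control idea described here: each named configuration sets the four arm angles, and the thrust-allocation (mixer) matrix is recomputed from the resulting propeller positions so control inputs stay consistent as the geometry changes. The arm length, angles, and coefficients are illustrative assumptions, not the published controller.

```python
import numpy as np

ARM_LENGTH = 0.15   # meters (assumed)
CONFIGS = {         # arm angles (radians) measured from the body x-axis; values are placeholders
    "X": np.deg2rad([45, 135, 225, 315]),
    "H": np.deg2rad([20, 160, 200, 340]),
    "O": np.deg2rad([80, 100, 260, 280]),
    "T": np.deg2rad([45, 135, 180, 0]),
}

def allocation_matrix(arm_angles, k_torque=0.02, spin=(1, -1, 1, -1)):
    """Rows map rotor thrusts to total thrust, roll torque, pitch torque, yaw torque."""
    x = ARM_LENGTH * np.cos(arm_angles)
    y = ARM_LENGTH * np.sin(arm_angles)
    return np.vstack([np.ones(4), y, -x, k_torque * np.array(spin)])

# Recompute the mixer whenever the drone switches shape, then solve for rotor thrusts
# that hover a 0.5 kg vehicle with zero commanded torque.
A = allocation_matrix(CONFIGS["H"])
rotor_thrusts = np.linalg.lstsq(A, np.array([9.81 * 0.5, 0, 0, 0]), rcond=None)[0]
print(rotor_thrusts)
```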

To guarantee stable flight at all times, the researchers exploit an optimal control strategy that adapts on the fly to the drone morphology. “We demonstrate the versatility of the proposed adaptive morphology in different tasks, such as negotiation of narrow gaps, close inspection of vertical surfaces, and object grasping and transportation.

“The experiments are performed on an actual, fully autonomous quadrotor relying solely on onboard visual-inertial sensors and compute. No external motion tracking systems and computers are used. This is the first work showing stable flight without requiring any symmetry of the morphology.”

Foldable drone first step to fully autonomous rescue searches

In the future, the researchers hope to further improve the drone structure so that it can fold in all three dimensions. Most importantly, they want to develop algorithms that will make the drone truly autonomous, allowing it to look for passages in a real disaster scenario and automatically choose the best way to pass through them.

“The final goal is to give the drone a high-level instruction such as ‘enter that building, inspect every room and come back’ and let it figure out by itself how to do it,” says Falanga.


A close-up picture of the foldable drone. (1) Qualcomm Snapdragon Flight onboard computer, provided with a quad-core ARM processor, 2 GB of RAM, an IMU and two cameras. (2) Qualcomm Snapdragon Flight ESCs. (3) Arduino Nano microcontroller. (4) The servo motors used to fold the arms. (Credit: UZH)

Editor’s Note: This article was republished from the University of Zurich.


Reinforcement learning, YouTube teaching robots new tricks

The sun may be setting on what David Letterman would call “Stupid Robot Tricks,” as intelligent machines are beginning to surpass humans in a wide variety of manual and intellectual pursuits. In March 2016, Google’s DeepMind software program AlphaGo defeated the reigning Go champion, Lee Sedol. Go, a Chinese game that originated more than 3,000…


MIT robots learn to manipulate objects they’ve never seen before

Humans have long been masters of dexterity, a skill that can largely be credited to the help of our eyes. Robots, meanwhile, are still catching up. Certainly there’s been some progress: For decades, robots in controlled environments like assembly lines have been able to pick up the same object over and over again. More recently,…


Bat-inspired Robat uses echolocation to map, navigate environment

The “Robat” is a fully autonomous, four-wheeled terrestrial robot with bat-like qualities that uses echolocation, also called bio sonar, to move through novel environments while mapping them based only on sound. It was developed at Tel Aviv University (TAU). Bats use echolocation to map novel environments, navigating them by emitting sound then extracting information from…
