Wenco, Hitachi Construction Machinery announce open ecosystem for autonomous mining

Autonomous mining haulage in Australia. Source: Wenco

TOKYO — Hitachi Construction Machinery Co. last week announced its vision for autonomous mining — an open, interoperable ecosystem of partners that integrate their systems alongside existing mine infrastructure.

Grounded in support for ISO standards and a drive to encourage new entrants into the mining industry, Hitachi Construction Machinery (HCM) said it is pioneering this approach to autonomy among global mining technology leaders. HCM has now publicly declared support for standards-based autonomy and is offering its technology to assist mining customers in integrating new vendors into their existing infrastructure. HCM’s support for open, interoperable autonomy is based on its philosophy for its partner-focused Solution Linkage platform.

“Open innovation is the guiding technological philosophy for Solution Linkage,” said Hideshi Fukumoto, vice president, executive officer, and chief technology officer at HCM. “Based on this philosophy, HCM is announcing its commitment to championing the customer enablement of autonomous mining through an open, interoperable ecosystem of partner solutions.”

“We believe this open approach provides customers the greatest flexibility and control for integrating new autonomous solutions into their existing operations while reducing the associated risks and costs of alternative approaches,” he said.

The HCM Group is developing this open autonomy approach under the Solution Linkage initiative, a platform already available to HCM’s customers in the construction industry now being made available to mining customers with support from HCM subsidiary Wenco International Mining Systems (Wenco).

Three development principles for Wenco, Hitachi

Solution Linkage is a standards-based platform grounded on three principles: open innovation, interoperability, and a partner ecosystem.

In this context, “open innovation” means the HCM Group’s support for open standards to enable the creation of multi-vendor solutions that reduce costs and increase value for customers.

By designing solutions in compliance with ANSI/ISA-95 and ISO standards for autonomous interoperability, Solution Linkage avoids vendor lock-in and offers customers the freedom to choose technologies from preferred vendors independent of their fleet management system, HCM said. The company claimed this approach future-proofs customer technology infrastructure, providing a phased path for incorporating new technologies as they emerge.

This approach also benefits autonomy vendors who are new to mining, since they will be able to leverage HCM’s technology and experience in meeting the requirements of mining customers.

The HCM Group’s key capability of interoperability creates simplified connectivity between systems to reduce operational silos, enabling end-to-end visibility and control across the mining value chain. HCM said that customers can use Solution Linkage to connect autonomous equipment from multiple vendors into existing fleet management and operations infrastructure.

The interoperability principle could also give mines a systems-level understanding of their pit-to-port operation, providing access to more robust data analytics and process management. This capability would enable mine managers to make superior decisions based on operation-wide insights that deliver end-to-end optimization, said HCM.

Wenco and Hitachi have set open interoperability as a goal for mining automation

Mining customers think about productivity and profitability throughout their entire operation, from geology to transportation — from pit to port. Source: Wenco

HCM said its partner ecosystem will allow customers and third-party partners to use its experience and open platform to successfully provide autonomous functionality and reduce the risk of technological adoption. The initiative is already working with a global mining leader to integrate non-mining OEM autonomous vehicles into its existing mining infrastructure.

Likewise, HCM is actively seeking customer and vendor partnerships to further extend the value of this open, interoperable platform. If autonomy vendors have already been selected by a customer and are struggling to integrate into the client’s existing fleet management system or mine operations, Hitachi may be able to help using the Solution Linkage platform.

The HCM Group will reveal further details of its approach to open autonomy and Solution Linkage in a presentation at the CIM 2019 Convention, running April 28 to May 1 at the Palais des congrès in Montreal, Canada. Fukumoto and other senior executives from Hitachi and Wenco will discuss this strategy and details of Hitachi’s plans for mining in several presentations throughout the event. The schedule of Hitachi-related events is as follows:

  • Sunday, April 28, 4:30 PM — A welcome speech at the event’s Opening Ceremonies by Wenco Board Member and HCM Executive Officer David Harvey;
  • Monday, April 29, 10:00 AM — An Innovation Stage presentation on the Solution Linkage vision for open autonomy by Wenco Board Member and HCM Vice President and Executive Officer, CTO Hideshi Fukumoto;
  • Monday, April 29, 12:00 PM — “Case Study: Accelerating Business Decisions and Mine Performance Through Operational Data Analysis at an Australian Coal Operation,” a technical breakout presentation by Wenco Executive Vice-President of Corporate Strategy Eric Winsborrow;
  • Monday, April 29, 2:00 PM — “Toward an Open Standard in Autonomous Control System Interfaces: Current Issues and Best Practices,” a technical breakout presentation by Wenco Director of Technology Martin Politick;
  • Tuesday, April 30, 10:00 AM — An Innovation Stage presentation on Hitachi’s vision for data and IoT in mining by Wenco Executive Vice-President of Corporate Strategy Eric Winsborrow;
  • Wednesday, May 1, 4:00 PM — A concluding speech at the event’s closing luncheon by Wenco Board Member and HCM General Manager of Solution Business Center Yoshinori Furuno.

These presentations further detail the ongoing work of HCM and support the core message about open, interoperable, partner ecosystems.

To learn more about the HCM announcement in support of open and interoperable mining autonomy, Solution Linkage, or other HCM solutions, please contact Hitachi Construction Machinery.

Anki shutdown: how the robotics world is reacting

Dealing yet another massive blow to the consumer robotics industry, Anki shut down after raising $200 million in funding since it was founded in 2010. A new round of funding reportedly fell through at the last minute, leaving the San Francisco-based company with no other option but to lay off its entire staff on Wednesday.

Other recent consumer robotics failures such as Jibo, Keecker, Laundroid and Mayfield Robotics pale in comparison to Anki going out of business. Anki said it had sold more than 1.5 million robots as of late 2018, generated nearly $100 million in revenue in 2017, and expected to exceed that figure in 2018.

As you can imagine, the robotics industry has been reacting to the Anki shutdown all over social media. Many were shocked, many were not, and some shared lessons to be learned.

Giving robots a better feel for object manipulation


A new learning system developed by MIT researchers improves robots’ abilities to mold materials into target shapes and make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch – and it may have fun applications in personal robotics, such as modelling clay shapes or rolling sticky rice for sushi.

In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are “trained” using the models, to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators mainly focus on rigid objects and are unable to handle fluids or softer objects. Some more accurate physics-based simulators can handle diverse materials, but rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

In a paper being presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small portions of different materials – “particles” – interact when they’re poked and prodded. The model directly learns from data in cases where the underlying physics of the movements is uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of their touch. As the robot handles the objects, the model also helps to further refine the robot’s control.

In experiments, a robotic hand with two fingers, called “RiceGrip,” accurately shaped a deformable foam to a desired configuration – such as a “T” shape – that serves as a proxy for sushi rice. In short, the researchers’ model serves as a type of “intuitive physics” brain that robots can leverage to reconstruct three-dimensional objects somewhat similarly to how humans do.

“Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it. Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots,” says first author Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to build this type of intuitive model for robots to enable them to do what humans can do.”

“When children are 5 months old, they already have different expectations for solids and liquids,” adds co-author Jiajun Wu, a CSAIL graduate student. “That’s something we know at an early age, so maybe that’s something we should try to model for robots.”

Joining Li and Wu on the paper are: Russ Tedrake, a CSAIL researcher and a professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL and the Center for Brains, Minds, and Machines (CBMM); and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

A new “particle simulator” developed by MIT improves robots’ abilities to mold materials into simulated target shapes and interact with solid objects and liquids. This could give robots a refined touch for industrial applications or for personal robotics. | Credit: MIT

Dynamic graphs

A key innovation behind the model, called “dynamic particle interaction networks” (DPI-Nets), was creating dynamic interaction graphs, which consist of thousands of nodes and edges that can capture complex behaviors of so-called particles. In the graphs, each node represents a particle. Neighboring nodes are connected with each other using directed edges, which represent the interaction passing from one particle to the other. In the simulator, particles are hundreds of small spheres combined to make up some liquid or a deformable object.

The graphs are constructed as the basis for a machine-learning system called a graph neural network. In training, the model over time learns how particles in different materials react and reshape. It does so by implicitly calculating various properties for each particle — such as its mass and elasticity — to predict if and where the particle will move in the graph when perturbed.

The model then leverages a “propagation” technique, which instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material – rigid, deformable, and liquid – to shoot a signal that predicts particle positions at certain incremental time steps. At each step, it moves and reconnects particles, if needed.

For example, if a solid box is pushed, perturbed particles will be moved forward. Because all particles inside the box are rigidly connected with each other, every other particle in the object undergoes the same calculated translation and rotation, so the particle connections remain intact and the box moves as a single unit. But if an area of deformable foam is indented, the effect will be different: perturbed particles move forward a lot, surrounding particles move forward only slightly, and particles farther away won’t move at all. With liquids being sloshed around in a cup, particles may completely jump from one end of the graph to the other. The graph must learn to predict where and how much all affected particles move, which is computationally complex.
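For readers who want a concrete picture of the data structure being described, here is a minimal sketch in plain Python (not the authors’ DPI-Nets code) of particles stored as graph nodes with directed edges to nearby neighbors, plus a single propagation step that spreads a poke along those edges. The Particle class, the neighbor radius, and the distance-based falloff are all illustrative assumptions standing in for what the trained graph network would predict.

```python
# Illustrative sketch only -- not the MIT DPI-Nets implementation.
# Particles are graph nodes; directed edges connect neighbors within a radius.
# One "propagation" step spreads a perturbation from a poked particle to its
# neighbors, with a simple falloff standing in for the learned prediction.
from dataclasses import dataclass, field
from math import dist

@dataclass
class Particle:
    pos: tuple                                      # (x, y, z) position
    delta: tuple = (0.0, 0.0, 0.0)                  # displacement applied this step
    neighbors: list = field(default_factory=list)   # indices of connected particles

def build_graph(particles, radius=0.1):
    """Connect each particle to every other particle within `radius`."""
    for i, p in enumerate(particles):
        p.neighbors = [j for j, q in enumerate(particles)
                       if j != i and dist(p.pos, q.pos) <= radius]

def propagate(particles, poked_index, push=(0.01, 0.0, 0.0), falloff=0.5):
    """Spread a poke from one particle to its neighbors (one message-passing step)."""
    particles[poked_index].delta = push
    for j in particles[poked_index].neighbors:
        particles[j].delta = tuple(falloff * d for d in push)

# Tiny example: three particles in a row; poking the first nudges only the second.
cloud = [Particle((0.0, 0, 0)), Particle((0.05, 0, 0)), Particle((0.5, 0, 0))]
build_graph(cloud)
propagate(cloud, poked_index=0)
print([p.delta for p in cloud])
```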

Shaping and adapting

In their paper, the researchers demonstrate the model by tasking the two-fingered RiceGrip robot with clamping target shapes out of deformable foam. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the position of the particles. Then, the model adds edges between particles and reconstructs the foam into a dynamic graph customized for deformable materials.

Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world position of the particles to the targeted position of the particles. Whenever the particles don’t align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.
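That compare-and-correct cycle can be pictured as a simple loop. The sketch below is schematic only: the robot and model objects, and methods such as perceive, plan_touch, predict, and update, are hypothetical stand-ins for the perception step, the learned particle simulator, and the error-driven refinement described above, not the researchers’ actual controller.

```python
# Schematic shaping loop only -- not the researchers' controller. The robot
# and model objects are hypothetical stand-ins for perception, the learned
# particle simulator, and the error-driven model update described above.
def shape_foam(robot, model, target, tolerance=1e-3, max_steps=100):
    for _ in range(max_steps):
        current = robot.perceive()                 # observed particle positions
        if model.position_error(current, target) < tolerance:
            return True                            # foam matches the target shape
        touch = model.plan_touch(current, target)  # choose the next indentation
        predicted = model.predict(current, touch)  # simulator's expected outcome
        robot.apply(touch)
        observed = robot.perceive()
        model.update(touch, predicted, observed)   # mismatch refines the simulator
    return False
```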

Next, the researchers aim to improve the model to help robots better predict interactions with partially observable scenarios, such as knowing how a pile of boxes will move when pushed, even if only the boxes at the surface are visible and most of the other boxes are hidden.

The researchers are also exploring ways to combine the model with an end-to-end perception module by operating directly on images. This will be a joint project with Dan Yamins’s group; Yamins recently completed his postdoc at MIT and is now an assistant professor at Stanford University. “You’re dealing with these cases all the time where there’s only partial information,” Wu says. “We’re extending our model to learn the dynamics of all particles, while only seeing a small portion.”

Editor’s Note: This article was republished with permission from MIT News.

Snake-inspired robot uses kirigami for swifter slithering

Bad news for ophiophobes: Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new and improved snake-inspired soft robot that is faster and more precise than its predecessor.

The robot is made using kirigami — a Japanese paper craft that relies on cuts to change the properties of a material. As the robot stretches, the kirigami surface “pops up” into a 3-D-textured surface, which grips the ground just like snake skin.

The first-generation robot used a flat kirigami sheet, which transformed uniformly when stretched. The new robot has a programmable shell, so the kirigami cuts can pop up as desired, improving the robot’s speed and accuracy.

The research was published in the Proceedings of the National Academy of Sciences.

“This is a first example of a kirigami structure with non-uniform pop-up deformations,” said Ahmad Rafsanjani, a postdoctoral fellow at SEAS and first author of the paper. “In flat kirigami, the pop-up is continuous, meaning everything pops at once. But in the kirigami shell, pop up is discontinuous. This kind of control of the shape transformation could be used to design responsive surfaces and smart skins with on-demand changes in their texture and morphology.”

The new research combined two properties of the material — the size of the cuts and the curvature of the sheet. By controlling these features, the researchers were able to program dynamic propagation of pop ups from one end to another, or control localized pop-ups.

Snake-inspired robot slithers even better than predecessor

This programmable kirigami metamaterial enables responsive surfaces and smart skins. Source: Harvard SEAS

In previous research, a flat kirigami sheet was wrapped around an elastomer actuator. In this research, the kirigami surface is rolled into a cylinder, with an actuator applying force at two ends. If the cuts are a consistent size, the deformation propagates from one end of the cylinder to the other. However, if the sizes of the cuts are chosen carefully, the skin can be programmed to deform in desired sequences.

“By borrowing ideas from phase-transforming materials and applying them to kirigami-inspired architected materials, we demonstrated that both popped and unpopped phases can coexist at the same time on the cylinder,” said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS and senior author of the paper. “By simply combining cuts and curvature, we can program remarkably different behavior.”

Next, the researchers aim to develop an inverse design model for more complex deformations.

“The idea is, if you know how you’d like the skin to transform, you can just cut, roll, and go,” said Lishuai Jin, a graduate student at SEAS and co-author of the article.

This research was supported in part by the National Science Foundation. It was co-authored by Bolei Deng.

Editor’s note: This article was republished from the Harvard John A. Paulson School of Engineering and Applied Sciences.

Understand.ai accelerates image annotation for self-driving cars

Using processed images, algorithms learn to recognize the real environment for autonomous driving. Source: understand.ai

Autonomous cars must perceive their environment accurately to move safely. The corresponding algorithms are trained using a large number of image and video recordings. Single image elements, such as a tree, a pedestrian, or a road sign, must be labeled for the algorithm to recognize them. Understand.ai is working to improve and accelerate this labeling.

Understand.ai was founded in 2017 by computer scientist Philip Kessler, who studied at the Karlsruhe Institute of Technology (KIT), and Marc Mengler.

“An algorithm learns by examples, and the more examples exist, the better it learns,” stated Kessler. For this reason, the automotive industry needs a lot of video and image data to train machine learning for autonomous driving. So far, most of the objects in these images have been labeled manually by human staffers.

“Big companies, such as Tesla, employ thousands of workers in Nigeria or India for this purpose,” Kessler explained. “The process is troublesome and time-consuming.”

Accelerating training at understand.ai

“We at understand.ai use artificial intelligence to make labeling up to 10 times quicker and more precise,” he added. Although image processing is highly automated, final quality control is done by humans. Kessler noted that the “combination of technology and human care is particularly important for safety-critical activities, such as autonomous driving.”

The labels, also called annotations, in the image and video files have to match the real environment with pixel-level accuracy. The better the quality of the processed image data, the better the algorithm that uses this data for training.
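As a rough sketch of what a “machine labels first, humans check after” workflow can look like in code (our own illustration with made-up class and function names, not understand.ai’s pipeline), candidate annotations below a confidence threshold are routed to a human reviewer while the rest are auto-accepted:

```python
# Schematic two-stage labeling workflow -- an illustration, not understand.ai's pipeline.
# Machine-generated candidate labels are kept if confident, otherwise sent to a human.
from dataclasses import dataclass

@dataclass
class Label:
    obj_class: str        # e.g. "pedestrian", "tree", "road_sign"
    bbox: tuple           # (x, y, width, height) in pixels
    confidence: float     # model's confidence in this candidate

def review_queue(candidates, threshold=0.9):
    """Split machine-made labels into auto-accepted and human-review piles."""
    accepted = [c for c in candidates if c.confidence >= threshold]
    needs_human = [c for c in candidates if c.confidence < threshold]
    return accepted, needs_human

auto, manual = review_queue([
    Label("pedestrian", (120, 80, 40, 90), 0.97),
    Label("road_sign", (300, 50, 25, 25), 0.62),
])
print(len(auto), "auto-accepted;", len(manual), "sent to human QC")
```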

“As training images cannot be supplied for all situations, such as accidents, we now also offer simulations based on real data,” Kessler said.

Although understand.ai focuses on autonomous driving, it also plans to process image data for training algorithms to detect tumors or to evaluate aerial photos in the future. Leading car manufacturers and suppliers in Germany and the U.S. are among the startup’s clients.

The startup’s main office is in Karlsruhe, Germany, and some of its more than 50 employees work at offices in Berlin and San Francisco. Last year, understand.ai received $2.8 million (U.S.) in funding from a group of private investors.

Building interest in startups and partnerships

In 2012, Kessler started to study informatics at KIT, where he became interested in AI and autonomous driving while developing an autonomous model car in the KITCar student group. Kessler said his one-year tenure at Mercedes Research in Silicon Valley, where he focused on machine learning and data analysis, was “highly motivating” for establishing his own business.

“Nowhere else can you learn more in such a short period of time than in a startup,” said Kessler, who is 26 years old. “Recently, the interest of big companies in cooperating with startups has increased considerably.”

He said he thinks that Germany sleepwalked through the first wave of AI, in which it was used mainly in entertainment devices and consumer products.

“In the second wave, in which artificial intelligence is applied in industry and technology, Germany will be able to use its potential,” Kessler claimed.

Drone delivery taking off from Alphabet’s Wing Aviation


A Wing Aviation drone delivers a package to a home during a demo in Blacksburg, Virginia. | Credit: Bloomberg

Alphabet Inc. subsidiary Wing Aviation on Tuesday became the first drone delivery company to be awarded air carrier certification from the Federal Aviation Administration (FAA). With the certification, Wing Aviation now has the same certifications as smaller airlines and can turn its tests into a commercial service that delivers goods from local businesses to homes.

The approval grants Wing permission to conduct flights beyond visual line of sight and over people, Wing says. The company will start commercial deliveries in Blacksburg, Virginia later in 2019. Wing spun out of Alphabet’s X research division in July 2018.

“This is an important step forward for the safe testing and integration of drones into our economy,” said U.S. Secretary of Transportation, Elaine L. Chao, who made the announcement. “Safety continues to be our Number One priority as this technology continues to develop and realize its full potential.”

Part of the approval process required Wing Aviation to submit evidence that its operations are safe. Wing’s drones have flown more than 70,000 test flights and made more than 3,000 deliveries. Wing says it submitted data that shows “delivery by Wing carries a lower risk to pedestrians than the same trip made by car.”

PwC estimates the total addressable market for commercial drones is $127.3 billion. That includes $45.2 billion in infrastructure, $32.4 billion in agriculture, $13 billion in transport and $10.5 billion in security.

Wing’s electric drones are powered by 14 propellers and can carry loads of up to 1.5 kilograms (3.3 pounds). Wing’s drones can fly up to 120 kilometers (about 74.5 miles) per hour and can fly up to 400 feet above the ground. Wing’s drones convert GPS signals into latitude and longitude to determine location and speed.

The drones also have a number of redundant systems on board for operation and navigation, among them a downward-facing camera used as a backup to GPS navigation. If GPS is unavailable for any reason, the drone uses data from the camera to estimate its speed, latitude and longitude. The camera is used exclusively for navigation; it doesn’t capture video, and its imagery is not available in real time.
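The redundancy described above boils down to a simple source-selection rule. The snippet below is our own illustration with hypothetical names, not Wing’s flight software: use the GPS fix when one is available, otherwise fall back to the camera-based estimate.

```python
# Our own illustration of a navigation fallback, not Wing's flight software.
# If a GPS fix is available, use it; otherwise fall back to a camera-based
# motion estimate, mirroring the redundancy described above.
def select_position(gps_fix, camera_estimate):
    """Return (lat, lon, speed, source) from whichever sensor is usable."""
    if gps_fix is not None:
        return (*gps_fix, "gps")
    return (*camera_estimate, "camera")

print(select_position(None, (37.2296, -80.4139, 8.5)))  # GPS lost -> camera estimate
```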

Drone regulations still don’t permit most flights over crowds and urban areas. This will, of course, limit where Wing can operate. But the company said it plans to start charging soon for deliveries in Blacksburg and eventually apply for permission to expand to other regions.

The first of Wing’s drone deliveries were completed in 2014 in Queensland, Australia, where everything from dog treats to a first-aid kit were delivered to farmers. Two years later, Wing’s drones delivered burritos to Virginia Tech students. “Goods like medicine or food can now be delivered faster by drone, giving families, shift workers, and other busy consumers more time to do the things that matter,” Wing Aviation writes in a blog. “Air delivery also provides greater autonomy to those who need assistance with mobility.”

Just a couple weeks prior to the FAA certification, Wing made its first drone delivery in Canberra, Australia, after receiving approval from the country’s Civil Aviation Safety Authority. To start, the Wing service will be available to 100 homes and will be slowly expanded to other customers.

Wing was first known as “Project Wing” when it was introduced in 2014. Google X announced the project with a video showing early test flights in Queensland.

Wing is also launching its first European drone delivery service in Finland this spring. In its tests in Australia, the average Wing delivery was completed in 7 minutes 36 seconds, according to a spokeswoman.

In June 2016, Wing worked with NASA and the FAA to explore how to manage drones. Wing demonstrated its Unmanned Traffic Management (UTM) platform, real-time route planning, and airspace notifications. The UTM platform is designed to support the growing drone industry by enabling a high volume of drones to share the skies and fly safely over people, around terrain and buildings, and near airports. These remain hurdles before drone deliveries become commonplace in the U.S.; a DJI drone, for example, was recently spotted illegally flying over Fenway Park despite a flight ban over the stadium.

Wing says on its website, “we’re working with the FAA on the Low Altitude Authorization and Notification Capability (LAANC) system in the United States and with the Civil Aviation Safety Authority (CASA) in Australia to develop federated, industry-led solutions to safely integrate and manage drones in low-altitude airspace.”

Robotic catheter brings autonomous navigation into human body

 

Concentric tube robot. In a recent demo, a robotic catheter autonomously found its way to a leaky heart valve. Source: Pediatric Cardiac Bioengineering Lab, Department of Cardiovascular Surgery, Boston Children’s Hospital, Harvard Medical School

BOSTON — Bioengineers at Boston Children’s Hospital said they successfully demonstrated for the first time a robot able to navigate autonomously inside the body. In a live pig, the team programmed a robotic catheter to find its way along the walls of a beating, blood-filled heart to a leaky valve — without a surgeon’s guidance. They reported their work today in Science Robotics.

Surgeons have used robots operated by joysticks for more than a decade, and teams have shown that tiny robots can be steered through the body by external forces such as magnetism. However, senior investigator Pierre Dupont, Ph.D., chief of Pediatric Cardiac Bioengineering at Boston Children’s, said that to his knowledge, this is the first report of the equivalent of a self-driving car navigating to a desired destination inside the body.

Pierre Dupont

Pierre Dupont, chief of Pediatric Cardiac Bioengineering at Boston Children’s Hospital

Dupont said he envisions autonomous robots assisting surgeons in complex operations, reducing fatigue and freeing surgeons to focus on the most difficult maneuvers, improving outcomes.

“The right way to think about this is through the analogy of a fighter pilot and a fighter plane,” he said. “The fighter plane takes on the routine tasks like flying the plane, so the pilot can focus on the higher-level tasks of the mission.”

Touch-guided vision, informed by AI

The team’s robotic catheter navigated using an optical touch sensor developed in Dupont’s lab, informed by a map of the cardiac anatomy and preoperative scans. The touch sensor uses artificial intelligence and image processing algorithms to enable the catheter to figure out where it is in the heart and where it needs to go.

For the demo, the team performed a highly technically demanding procedure known as paravalvular aortic leak closure, which repairs replacement heart valves that have begun leaking around the edges. (The team constructed its own valves for the experiments.) Once the robotic catheter reached the leak location, an experienced cardiac surgeon took control and inserted a plug to close the leak.

In repeated trials, the robotic catheter successfully navigated to heart valve leaks in roughly the same amount of time as the surgeon (using either a hand tool or a joystick-controlled robot).

Biologically inspired navigation

Through a navigational technique called “wall following,” the robotic catheter’s optical touch sensor sampled its environment at regular intervals, in much the way insects’ antennae or the whiskers of rodents sample their surroundings to build mental maps of unfamiliar, dark environments. The sensor told the catheter whether it was touching blood, the heart wall or a valve (through images from a tip-mounted camera) and how hard it was pressing (to keep it from damaging the beating heart).

Data from preoperative imaging and machine learning algorithms helped the catheter interpret visual features. In this way, the robotic catheter advanced by itself from the base of the heart, along the wall of the left ventricle and around the leaky valve until it reached the location of the leak.

“The algorithms help the catheter figure out what type of tissue it’s touching, where it is in the heart, and how it should choose its next motion to get where we want it to go,” Dupont explained.
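As a rough illustration of the wall-following logic described here (plain Python with invented names and thresholds, not the clinical navigation software), each touch-sensor sample might map to a small next motion like this:

```python
# Rough illustration of wall-following, not the clinical navigation software.
# Assumes a sensor reading with a tissue class and a contact-force estimate.
MAX_SAFE_FORCE = 0.2   # illustrative threshold, arbitrary units

def next_move(reading):
    """Pick the catheter's next small motion from one touch-sensor sample."""
    tissue, force = reading["tissue"], reading["force"]
    if force > MAX_SAFE_FORCE:
        return "back_off"              # pressing too hard on the beating heart
    if tissue == "blood":
        return "steer_toward_wall"     # lost contact; regain the wall
    if tissue == "wall":
        return "advance_along_wall"    # keep hugging the ventricle wall
    if tissue == "valve":
        return "circle_valve_rim"      # search the rim for the leak
    return "hold"

print(next_move({"tissue": "wall", "force": 0.05}))  # -> advance_along_wall
```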

Though the autonomous robot took a bit longer than the surgeon to reach the leaky valve, that was partly because its wall-following technique meant it took the longest path.

“The navigation time was statistically equivalent for all, which we think is pretty impressive given that you’re inside the blood-filled beating heart and trying to reach a millimeter-scale target on a specific valve,” said Dupont.

He added that the robot’s ability to visualize and sense its environment could eliminate the need for fluoroscopic imaging, which is typically used in this operation and exposes patients to ionizing radiation.

Robotic percutaneous access to the heart, from the Pediatric Cardiac Bioengineering Lab

Robotic catheter enters internal jugular vein and navigates through the vasculature into the right atrium. Source: Pediatric Cardiac Bioengineering Lab

A vision of the future?

Dupont said the project was the most challenging of his career. While the cardiac surgical fellow, who performed the operations on swine, was able to relax while the robot found the valve leaks, the project was taxing for Dupont’s engineering fellows, who sometimes had to reprogram the robot mid-operation as they perfected the technology.

“I remember times when the engineers on our team walked out of the OR completely exhausted, but we managed to pull it off,” said Dupont. “Now that we’ve demonstrated autonomous navigation, much more is possible.”

Some cardiac interventionalists who are aware of Dupont’s work envision using robots for more than navigation, performing routine heart-mapping tasks, for example. Some envision this technology providing guidance during particularly difficult or unusual cases or assisting in operations in parts of the world that lack highly experienced surgeons.

As the U.S. Food and Drug Administration begins to develop a regulatory framework for AI-enabled devices, Dupont said that autonomous surgical robots all over the world could pool their data to continuously improve performance over time — much like self-driving vehicles in the field send their data back to Tesla to refine its algorithms.

“This would not only level the playing field, it would raise it,” said Dupont. “Every clinician in the world would be operating at a level of skill and experience equivalent to the best in their field. This has always been the promise of medical robots. Autonomy may be what gets us there.”

Boston Children's Hospital

Boston Children’s Hospital in the Longwood Medical Area. Photo by Jenna Lang.

About the paper

Georgios Fagogenis, PhD, of Boston Children’s Hospital was first author on the paper. Coauthors were Margherita Mencattelli, PhD, Zurab Machaidze, MD, Karl Price, MASc, Viktoria Weixler, MD, Mossab Saeed, MB, BS, and John Mayer, MD, of Boston Children’s Hospital; Benoit Rosa, PhD, of ICube, Université de Strasbourg (Strasbourg, France); and Fei-Yi Wu, MD, of Taipei Veterans General Hospital, Taipei, Taiwan. For more on the technology, contact TIDO@childrenshospital.org.

The study was funded by the National Institutes of Health (R01HL124020), with partial support from the ANR/Investissement d’avenir program. Dupont and several of his coauthors are inventors on a U.S. patent application held by Boston Children’s Hospital that covers the optical imaging technique.

About Boston Children’s Hospital

Boston Children’s Hospital, the primary pediatric teaching affiliate of Harvard Medical School, said it is home to the world’s largest research enterprise based at a pediatric medical center. Its discoveries have benefited both children and adults since 1869. Today, more than 3,000 scientists, including 8 members of the National Academy of Sciences, 18 members of the National Academy of Medicine and 12 Howard Hughes Medical Institute investigators, make up Boston Children’s research community.

Founded as a 20-bed hospital for children, Boston Children’s is now a 415-bed comprehensive center for pediatric and adolescent health care. For more, visit the Vector and Thriving blogs and follow it on social media @BostonChildrens, @BCH_Innovation, Facebook and YouTube.

Neural network helps autonomous car learn to handle the unknown


Shelley, Stanford’s autonomous Audi TTS, performs at Thunderhill Raceway Park. (Credit: Kurt Hickman)

Researchers at Stanford University have developed a new way of controlling autonomous cars that integrates prior driving experiences – a system that will help the cars perform more safely in extreme and unknown circumstances. Tested at the limits of friction on a racetrack using Niki, Stanford’s autonomous Volkswagen GTI, and Shelley, Stanford’s autonomous Audi TTS, the system performed about as well as an existing autonomous control system and an experienced racecar driver.

“Our work is motivated by safety, and we want autonomous vehicles to work in many scenarios, from normal driving on high-friction asphalt to fast, low-friction driving in ice and snow,” said Nathan Spielberg, a graduate student in mechanical engineering at Stanford and lead author of the paper about this research, published March 27 in Science Robotics. “We want our algorithms to be as good as the best skilled drivers—and, hopefully, better.”

While current autonomous cars might rely on in-the-moment evaluations of their environment, the control system these researchers designed incorporates data from recent maneuvers and past driving experiences – including trips Niki took around an icy test track near the Arctic Circle. Its ability to learn from the past could prove particularly powerful, given the abundance of autonomous car data researchers are producing in the process of developing these vehicles.

Physics and learning with a neural network

Control systems for autonomous cars need access to information about the available road-tire friction. This information dictates the limits of how hard the car can brake, accelerate and steer in order to stay on the road in critical emergency scenarios. If engineers want to safely push an autonomous car to its limits, such as having it plan an emergency maneuver on ice, they have to provide it with details, like the road-tire friction, in advance. This is difficult in the real world where friction is variable and often is difficult to predict.

To develop a more flexible, responsive control system, the researchers built a neural network that integrates data from past driving experiences at Thunderhill Raceway in Willows, California, and a winter test facility with foundational knowledge provided by 200,000 physics-based trajectories.
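The idea of seeding a learned model with physics-generated examples can be shown with a toy example. The sketch below, which is not the Stanford system, generates samples from a crude friction-limited braking formula and fits a single coefficient to them by gradient descent; real driving experience could then be mixed into the same training set in place of or alongside the synthetic samples.

```python
# Toy illustration of seeding a learned model with physics-generated data.
# Not the Stanford controller: one learned coefficient, a crude braking model.
G = 9.81

def physics_stopping_distance(speed, friction):
    """Simple point-mass model: d = v^2 / (2 * mu * g)."""
    return speed ** 2 / (2 * friction * G)

# The "200,000 physics-based trajectories" stand in here for a short list of samples.
samples = [(v, mu, physics_stopping_distance(v, mu))
           for v in (10, 20, 30) for mu in (0.3, 0.6, 0.9)]

# Fit d ~ k * v^2 / mu by gradient descent on the single coefficient k.
k = 0.0
for _ in range(2000):
    grad = 0.0
    for v, mu, d in samples:
        pred = k * v ** 2 / mu
        grad += 2 * (pred - d) * v ** 2 / mu
    k -= 1e-7 * grad / len(samples)

print(round(k, 4), round(1 / (2 * G), 4))  # learned k should approach 1/(2g)
```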

A video accompanying the research shows the neural network controller implemented on an automated Volkswagen GTI tested at the limits of handling (the ability of a vehicle to maneuver a track or road without skidding out of control) at Thunderhill Raceway.

“With the techniques available today, you often have to choose between data-driven methods and approaches grounded in fundamental physics,” said J. Christian Gerdes, professor of mechanical engineering and senior author of the paper. “We think the path forward is to blend these approaches in order to harness their individual strengths. Physics can provide insight into structuring and validating neural network models that, in turn, can leverage massive amounts of data.”

The group ran comparison tests for their new system at Thunderhill Raceway. First, Shelley sped around controlled by the physics-based autonomous system, pre-loaded with set information about the course and conditions. When compared on the same course during 10 consecutive trials, Shelley and a skilled amateur driver generated comparable lap times. Then, the researchers loaded Niki with their new neural network system. The car performed similarly running both the learned and physics-based systems, even though the neural network lacked explicit information about road friction.

In simulated tests, the neural network system outperformed the physics-based system in both high-friction and low-friction scenarios. It did particularly well in scenarios that mixed those two conditions.

Simple feedforward-feedback control structure used for path tracking on an automated vehicle. (Credit: Stanford University)

An abundance of data

The results were encouraging, but the researchers stress that their neural network system does not perform well in conditions outside the ones it has experienced. They say as autonomous cars generate additional data to train their network, the cars should be able to handle a wider range of conditions.

“With so many self-driving cars on the roads and in development, there is an abundance of data being generated from all kinds of driving scenarios,” Spielberg said. “We wanted to build a neural network because there should be some way to make use of that data. If we can develop vehicles that have seen thousands of times more interactions than we have, we can hopefully make them safer.”

Editor’s Note: This article was republished from Stanford University.

Robotics investments recap: March 2019

CloudMinds was among the robotics companies receiving funding in March 2019. Source: CloudMinds

Investments in robots, autonomous vehicles, and related systems totaled at least $1.3 billion in March 2019, down from $4.3 billion in February. On the other hand, automation companies reported $7.8 billion in mergers and acquisitions last month. While that may represent a slowdown, note that many businesses did not specify the amounts involved in their transactions, of which there were at least 58 in March.

Self-driving cars and trucks — including machine learning and sensor technologies — continued to receive significant funding. Although Lyft’s initial public offering was not directly related to autonomous vehicles, it illustrates the investment flowing into transportation technology.

Other use cases represented in March 2019 included surgical robotics, industrial automation, and service robots. See the table below, which lists amounts in millions of dollars where they were available:

Company | Amt. (M$) | Type | Lead investor, partner, acquirer | Date | Technology
Airbiquity | 15 | investment | Denso Corp., Toyota Motor Corp., Toyota Tsusho Corp. | March 12, 2019 | connected vehicles
AROMA BIT Inc. | 2.2 | Series A | Sony Innovation Fund | March 3, 2019 | olfactory sensors
AtomRobot |  | Series B1 | Y&R Capital | March 5, 2019 | industrial automation
Automata | 7.4 | Series A | ABB | March 19, 2019 | robot arm
Avidbots | 23.6 | Series B | True Ventures | March 21, 2019 | commercial floor cleaning
Boranet |  | Series A | Gobi Partners | March 6, 2019 | IIoT, machine vision
Brodmann17 | 11 | Series A | OurCrowd | March 6, 2019 | deep learning, autonomous vehicles
CloudMinds | 300 | investment | SoftBank Vision Fund | March 26, 2019 | service robots
Corindus | 4.8 | private placement |  | March 12, 2019 | surgical robot
Determined AI | 11 | Series A | GV (Google Ventures) | March 13, 2019 | AI, deep learning
Emergen Group | 29 | Series B | Qiming Venture Partners | March 13, 2019 | industrial automation
Fabu Technology |  | pre-Series A | Qingsong Fund | March 1, 2019 | autonomous vehicles
Fortna |  | recapitalization | Thomas H. Lee Partners LP | March 27, 2019 | materials handling
ForwardX | 14.95 | Series B | Hupang Licheng Fund | March 21, 2019 | autonomous mobile robots
Gaussian Robotics | 14.9 | Series B | Grand Flight Investment | March 20, 2019 | cleaning
Hangzhou Guochen Robot Technology | 15 | Series A | Hongcheng Capital, Yingshi Fund (YS Investment) | March 13, 2019 | robotics R&D
Hangzhou Jimu Technology Co. |  | Series B | Flyfot Ventures | March 6, 2019 | autonomous vehicles
InnerSpace | 3.2 | seed | BDC Capital's Women in Technology Fund | March 26, 2019 | IoT
Innoviz Technologies | 132 | Series C | China Merchants Capital, Shenzhen Capital Group, New Alliance Capital | March 26, 2019 | lidar
Intelligent Marking |  | investment | Benjamin Capital | March 6, 2019 | autonomous robots for marking sports fields
Kaarta Inc. | 6.5 | Series A | GreenSoil Building Innovation Fund | March 21, 2019 | lidar mapping
Kolmostar Inc. | 10 | Series A |  | March 5, 2019 | positioning technology
Linear Labs | 4.5 | seed | Science Inc., Kindred Ventures | March 26, 2019 | motors
MELCO Factory Automation Philippines Inc. | 2.38 | new division | Mitsubishi Electric Corp. | March 12, 2019 | industrial automation
Monet Technologies | 4.51 | joint venture | Honda Motor Co., Hino Motors Ltd., SoftBank Corp., Toyota Motor Corp. | March 28, 2019 | self-driving cars
Ouster | 60 | investment | Runway Growth Capital, Silicon Valley Bank | March 25, 2019 | lidar
Pickle Robot Co. | 3.5 | equity sale |  | March 4, 2019 | loading robot
Preteckt | 2 | seed | Las Olas Venture Capital | March 26, 2019 | machine learning, automotive
Radar | 16 | investment | Sound Ventures, NTT Docomo Ventures, Align Ventures, Beanstalk Ventures, Colle Capital, Founders Fund Pathfinder, Novel TMT | March 28, 2019 | RFID inventory management
Revvo (IntelliTire) | 4 | Series A | Norwest Venture Partners | March 26, 2019 | smart tires
Shanghai Changren Information Technology | 14.89 | Series A |  | March 15, 2019 | Xiaobao healthcare robot
TakeOff Technologies Inc. |  | equity sale |  | March 26, 2019 | grocery robots
TartanSense | 2 | seed | Omnivore, Blume Ventures, BEENEXT | March 11, 2019 | weeding robot
Teraki | 2.3 | investment | Horizon Ventures, American Family Ventures | March 27, 2019 | AI, automotive electronics
Think Surgical | 134 | investment |  | March 11, 2019 | surgical robot
Titan Medical | 25 | IPO |  | March 22, 2019 | surgical robotics
TMiRob |  | Series B+ | Shanghai Zhangjiang Torch Venture Capital | March 26, 2019 | hospital robot
TOYO Automation Co. |  | investment | Yamaha Motor Co. | March 20, 2019 | actuators
Ubtech |  | investment | Liangjiang Capital | March 6, 2019 | humanoid
Vintra | 4.8 | investment | Bonfire Ventures, Vertex Ventures, London Venture Partners | March 11, 2019 | machine vision
Vtrus | 2.9 | investment |  | March 8, 2019 | drone inspection
Weltmeister Motor | 450 | Series C | Baidu Inc. | March 11, 2019 | self-driving cars

And here are the mergers and acquisitions:

March 2019 robotics acquisitions

Company | Amt. (M$) | Acquirer | Date | Technology
Accelerated Dynamics |  | Animal Dynamics | 3/8/2019 | AI, drone swarms
Astori AS |  | 4Subsea | 3/19/2019 | undersea control systems
Brainlab |  | Smith & Nephew | 3/12/2019 | surgical robot
Figure Eight | 175 | Appen Ltd. | 3/10/2019 | AI, machine learning
Floating Point FX |  | CycloMedia | 3/7/2019 | machine vision, 3D modeling
Florida Turbine Technologies | 60 | Kratos Defense and Security Solutions | 3/1/2019 | drones
Infinity Augmented Reality |  | Alibaba Group Holding Ltd. | 3/21/2019 | AR, machine vision
Integrated Device Technology Inc. | 6,700 | Renesas | 3/30/2019 | self-driving vehicle processors
Medineering |  | Brainlab | 3/20/2019 | surgical
Modern Robotics Inc. | 0.97 | Boxlight Corp. | 3/14/2019 | STEM
OMNI Orthopaedics Inc. |  | Corin Group | 3/6/2019 | surgical robotics
OrthoSpace Ltd. | 220 | Stryker Corp. | 3/14/2019 | surgical robotics
Osiris Therapeutics | 660 | Smith & Nephew | 3/12/2019 | surgical robotics
Restoration Robotics Inc. | 21 | Venus Concept Ltd. | 3/15/2019 | surgical robotics
Sofar Ocean Technologies | 7 | Spoondrift, OpenROV | 3/28/2019 | underwater drones, sensors
Torc Robotics Inc. |  | Daimler Trucks and Buses Holding Inc. | 3/29/2019 | driverless truck software

Surgical robots make the cut

One of the largest transactions reported in March 2019 was Smith & Nephew’s purchase of Osiris Therapeutics for $660 million. However, some Osiris shareholders are suing to block the acquisition because they believe the price that U.K.-based Smith & Nephew is offering is too low. The shareholders’ confidence reflects a hot healthcare robotics space, where capital, consolidation, and chasing new applications are driving factors.

In the meantime, Stryker Corp. bought sports medicine provider OrthoSpace Ltd. for $220 million. The market for sports medicine will experience a compound annual growth rate of 8.9% between now and 2023, predicts Market Research Future.

Fremont, Calif.-based Think Surgical raised $134 million for its robot-assisted orthopedic surgical device, and Titan Medical closed a $25 million public offering last month.

Venus Concept Ltd. merged with hair-implant provider Restoration Robotics for $21 million, and Shanghai Changren Information Technology raised Series A funding of $14.89 million for its Xiaobao healthcare robot.

Corindus Vascular Robotics Inc. added $5 million to the $15 million it had raised the month before. Brainlab acquired Medineering and was itself acquired by Smith & Nephew.

Driving toward automation in March 2019

Aside from Lyft, the biggest reported transportation robotics transaction in March 2019 was Renesas’ completion of its $6.7 billion purchase of Integrated Device Technology Inc. for its self-driving car chips.

The next biggest deal was Weltmeister Motor’s $450 million Series C, in which Baidu Inc. participated.

Lidar also got some support, with Innoviz Technologies raising $132 million in a Series C round, and Ouster raising $60 million. In a prime example of how driverless technology is “paying a peace dividend” to other applications, Google parent Alphabet’s Waymo unit offered its custom lidar sensors to robotics, security, and agricultural companies.

Automakers recognize the need for 3-D modeling, sensors, and software for autonomous vehicles to navigate safely and accurately. A Daimler unit acquired Torc Robotics Inc., which is working on driverless trucks, and CycloMedia acquired machine vision firm Floating Point FX. The amounts were not specified.

Speaking of machine learning, Appen Ltd. acquired dataset annotation company Figure Eight for $175 million, with a possible $125 million more based on 2019 performance. Denso Corp. and Toyota Motor Corp. contributed $15 million to Airbiquity, which is working on connected vehicles.

Service robots clean up

From retail to cleaning and customer service, the combination of improving human-machine interactions, ongoing staffing turnover and shortages, and companies with round-the-clock operations has contributed to investor interest.

The SoftBank Vision Fund participated in a $300 million round for CloudMinds. The Chinese AI and robotics company’s XR-1 is a humanoid service robot, and it also makes security robots and connects robots to the cloud.

According to its filing with the U.S. Securities and Exchange Commission, TakeOff Technologies Inc. raised an unspecified amount for its grocery robots, an area that many observers expect to grow as consumers become more accustomed to getting home deliveries.

On the cleaning side, Avidbots raised $23.6 million in Series B, led by True Ventures. Gaussian Robotics’ Series B was $14.9 million, with participation from Grand Flight Investment.

Wrapping up Q1 2019

China’s efforts to develop its domestic robotics industry continued, as Emergen Group’s $29 million Series B round was the largest reported investment in industrial automation last month.

Hangzhou Guochen Robot Technology raised $15 million in Series A funding for robotics research and development and integration.

That was followed by ABB’s participation in Series A funding of $7.4 million for Automata, which makes a small collaborative robot arm named Eva. Mitsubishi Electric Corp. said it’s spending $2.38 million to set up a new company, MELCO Factory Automation Philippines Inc., because it expects to grow its business there to $30 million by 2026.

Data startup Spoondrift and underwater drone maker OpenROV merged to form Sofar Ocean Technologies. The new San Francisco company also announced a Series A round of $7 million. Also, 4Subsea acquired underwater control systems maker Astori AS.

In the aerial drone space, Kratos Defense and Security Solutions acquired Florida Turbine Technologies for $60 million, and Vtrus raised $2.9 million for commercializing drone inspections. Kaarta Inc., which makes a lidar for indoor mapping, raised $6.5 million.

The Robot Report broke the news of Aria Insights, formerly known as CyPhy Works, shutting down in March 2019.


Editor’s Note: What defines robotics investments? The answer to this simple question is central to any attempt to quantify robotics investments with some degree of rigor. To make investment analyses consistent, repeatable, and valuable, it is critical to wring out as much subjectivity as possible during the evaluation process. This begins with a definition of terms and a description of assumptions.

Investors and Investing
Investment should come from venture capital firms, corporate investment groups, angel investors, and other sources. Friends-and-family investments, government/non-governmental agency grants, and crowd-sourced funding are excluded.

Robotics and Intelligent Systems Companies
Robotics companies must generate or expect to generate revenue from the production of robotics products (that sense, think, and act in the physical world), hardware or software subsystems and enabling technologies for robots, or services supporting robotics devices. For this analysis, autonomous vehicles (including technologies that support autonomous driving) and drones are considered robots, while 3D printers, CNC systems, and various types of “hard” automation are not.

Companies that are “robotic” in name only, or that use the term “robot” to describe products and services that do not enable or support devices acting in the physical world, are excluded. For example, this includes “software robots” and robotic process automation. Many firms have multiple locations in different countries. Company locations given in the analysis are based on the publicly listed headquarters in legal documents, press releases, etc.
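Read literally, the criteria above amount to a filter. The snippet below is our own rendering of those stated rules as a small Python check, not an official screening tool; the category strings are illustrative.

```python
# Our rendering of the stated screening criteria, not an official tool.
ALLOWED_SOURCES = {"venture capital", "corporate", "angel", "other institutional"}
EXCLUDED_SOURCES = {"friends and family", "government grant", "crowdfunding"}
EXCLUDED_TECH = {"software robot", "rpa", "3d printer", "cnc", "hard automation"}

def counts_as_robotics_investment(source, technology):
    """Apply the inclusion/exclusion rules described in this note."""
    source, technology = source.lower(), technology.lower()
    if source in EXCLUDED_SOURCES or source not in ALLOWED_SOURCES:
        return False
    return technology not in EXCLUDED_TECH   # must sense, think, and act physically

print(counts_as_robotics_investment("venture capital", "autonomous vehicles"))  # True
print(counts_as_robotics_investment("crowdfunding", "drone"))                   # False
```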

Verification
Funding information is collected from a number of public and private sources. These include press releases from corporations and investment groups, corporate briefings, and association and industry publications. In addition, information comes from sessions at conferences and seminars, as well as during private interviews with industry representatives, investors, and others. Unverifiable investments are excluded.

Ultra-low power hybrid chips make small robots smarter


An ultra-low power hybrid chip inspired by the brain could help give palm-sized robots the ability to collaborate and learn from their experiences. Combined with new generations of low-power motors and sensors, the new application-specific integrated circuit (ASIC) – which operates on milliwatts of power – could help intelligent swarm robots operate for hours instead of minutes.

To conserve power, the chips use a hybrid digital-analog time-domain processor in which the pulse-width of signals encodes information. The neural network IC accommodates both model-based programming and collaborative reinforcement learning, potentially providing the small robots larger capabilities for reconnaissance, search-and-rescue and other missions.

“We are trying to bring intelligence to these very small robots so they can learn about their environment and move around autonomously, without infrastructure,” said Arijit Raychowdhury, associate professor in Georgia Tech’s School of Electrical and Computer Engineering. “To accomplish that, we want to bring low-power circuit concepts to these very small devices so they can make decisions on their own. There is a huge demand for very small, but capable robots that do not require infrastructure.”

The cars demonstrated by Raychowdhury and graduate students Ningyuan Cao, Muya Chang and Anupam Golder navigate through an arena floored by rubber pads and surrounded by cardboard block walls. As they search for a target, the robots must avoid traffic cones and each other, learning from the environment as they go and continuously communicating with each other.

The cars use inertial and ultrasound sensors to determine their location and detect objects around them. Information from the sensors goes to the hybrid ASIC, which serves as the “brain” of the vehicles. Instructions then go to a Raspberry Pi controller, which sends instructions to the electric motors.

In palm-sized robots, three major systems consume power: the motors and controllers used to drive and steer the wheels, the processor, and the sensing system. In the cars built by Raychowdhury’s team, the low-power ASIC means that the motors consume the bulk of the power. “We have been able to push the compute power down to a level where the budget is dominated by the needs of the motors,” he said.

The team is working with collaborators on motors that use micro-electromechanical (MEMS) technology able to operate with much less power than conventional motors.

“We would want to build a system in which sensing power, communications and computer power, and actuation are at about the same level, on the order of hundreds of milliwatts,” said Raychowdhury, who is the ON Semiconductor Associate Professor in the School of Electrical and Computer Engineering. “If we can build these palm-sized robots with efficient motors and controllers, we should be able to provide runtimes of several hours on a couple of AA batteries. We now have a good idea what kind of computing platforms we need to deliver this, but we still need the other components to catch up.”

In time domain computing, information is carried on two different voltages, encoded in the width of the pulses. That gives the circuits the energy-efficiency advantages of analog circuits with the robustness of digital devices.
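A software analogy of the pulse-width idea may help (plain Python, not the ASIC’s circuitry): encode a value as the fraction of a fixed window that a signal stays high, then recover it by counting the high samples.

```python
# Software analogy of time-domain encoding -- not the ASIC's actual circuitry.
# A value in [0, 1] becomes the fraction of a fixed window the signal is high;
# decoding simply measures the duty cycle.
WINDOW = 100  # samples per pulse window

def encode(value):
    """Encode a value in [0, 1] as a pulse: 1s for the pulse width, then 0s."""
    high = round(max(0.0, min(1.0, value)) * WINDOW)
    return [1] * high + [0] * (WINDOW - high)

def decode(pulse):
    """Recover the value as the duty cycle of the window."""
    return sum(pulse) / len(pulse)

print(decode(encode(0.37)))  # -> 0.37
```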

“The size of the chip is reduced by half, and the power consumption is one-third what a traditional digital chip would need,” said Raychowdhury. “We used several techniques in both logic and memory designs for reducing power consumption to the milliwatt range while meeting target performance.”

With each pulse-width representing a different value, the system is slower than digital or analog devices, but Raychowdhury says the speed is sufficient for the small robots. (A milliwatt is a thousandth of a watt).

A robotic car controlled by an ultra-low power hybrid chip showed its ability to learn and collaborate with other robots. (Photo: Allison Carter/Georgia Tech)

“For these control systems, we don’t need circuits that operate at multiple gigahertz because the devices aren’t moving that quickly,” he said. “We are sacrificing a little performance to get extreme power efficiencies. Even if the compute operates at 10 or 100 megahertz, that will be enough for our target applications.”

The 65-nanometer CMOS chips accommodate both kinds of learning appropriate for a robot. The system can be programmed to follow model-based algorithms, and it can learn from its environment using a reinforcement system that encourages better and better performance over time – much like a child who learns to walk by bumping into things.

“You start the system out with a predetermined set of weights in the neural network so the robot can start from a good place and not crash immediately or give erroneous information,” Raychowdhury said. “When you deploy it in a new location, the environment will have some structures that it will recognize and some that the system will have to learn. The system will then make decisions on its own, and it will gauge the effectiveness of each decision to optimize its motion.”
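One way to picture “start from a good place, then learn from rewards” is an action-value table seeded with prior estimates and refined by feedback. The sketch below uses a generic reinforcement-learning-style update as an analogy; it is not the chip’s learning rule or the team’s algorithm, and the actions and rewards are invented.

```python
# Analogy only: a tiny reward-driven update over pre-seeded action values.
# This is generic reinforcement-learning bookkeeping, not the chip's algorithm.
import random

actions = ["forward", "left", "right"]
q = {a: 0.5 for a in actions}   # "predetermined weights": a neutral starting guess
alpha, epsilon = 0.2, 0.1       # learning rate and exploration probability

def choose_action():
    if random.random() < epsilon:          # occasionally explore
        return random.choice(actions)
    return max(q, key=q.get)               # otherwise exploit the best estimate

def learn(action, reward):
    """Nudge the value of the chosen action toward the observed reward."""
    q[action] += alpha * (reward - q[action])

# Simulated trials: moving forward is rewarded, bumping into a cone is not.
for _ in range(50):
    a = choose_action()
    learn(a, reward=1.0 if a == "forward" else 0.0)
print(q)
```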

Communication between the robots allows them to collaborate in seeking a target.

“In a collaborative environment, the robot not only needs to understand what it is doing, but also what others in the same group are doing,” he said. “They will be working to maximize the total reward of the group as opposed to the reward of the individual.”

With their ISSCC demonstration providing a proof-of-concept, the team is continuing to optimize designs and is working on a system-on-chip to integrate the computation and control circuitry.

“We want to enable more and more functionality in these small robots,” Raychowdhury added. “We have shown what is possible, and what we have done will now need to be augmented by other innovations.”

Editor’s Note: This article was republished from Georgia Tech Research Horizons.
