The U.S. National Robotics Roadmap was first created 10 years ago. Since then, government agencies, universities, and companies have used it as a reference for where robotics is going. The first roadmap was published in 2009 and then revised in 2013 and 2016. The objective is to publish the fourth version of the roadmap by summer 2020.
The team developing the U.S. National Robotics Roadmap has put out a call to engage about 150 to 200 people from academia and industry to ensure that it is representative of the robotics community’s view of the future. The roadmap will cover manufacturing, service, medical, first-responder, and space robotics.
The revised roadmap will also include considerations related to ethics and workforce. It will cover emerging applications, the key challenges to progress, and what research and development is needed.
Join community workshops
Three one-and-a-half-day workshops will be organized for community input to the roadmap. The workshops will take place as follows:
Sept. 11-12 in Chicago (organized by Nancy Amato, co-director of the Parasol Lab at Texas A&M University and head of the Department of Computer Science at the University of Illinois at Urbana-Champaign)
Oct. 17-18 in Los Angeles (organized by Maja Mataric, Chan Soon-Shiong distinguished professor of computer science, neuroscience, and pediatrics at the University of Southern California)
Nov. 15-16 in Lowell, Mass. (organized by Holly Yanco, director of the NERVE Center at the University of Massachusetts Lowell)
Participation in these workshops will be by invitation only. To participate, please submit a white paper/position statement of a maximum length of 1.5 pages. What are key use cases for robotics in a five-to-10-year perspective, what are key limitations, and what R&D is needed in that time frame? The white paper can address all three aspects or focus on one of them. The white paper must include the following information:
Name, affiliation, and e-mail address
A position statement (1.5 pages max)
Please submit the white paper as regular text or as a PDF file. Statements that are too long will be ignored. Position papers that only focus on current research are not appropriate. A white paper should present a future vision and not merely discuss state of the art.
White papers should be submitted by end of the day Aug. 15, 2019, to roadmapping@robotics-vo.org. Late submissions may not be considered. We will evaluate submitted white papers by Aug. 18 and select people for the workshops by Aug. 19.
Roadmap revision timeline
The workshop reports will be used as the basis for a synthesis of a new roadmap. The nominal timeline is:
August 2019: Call for white papers
September – November 2019: Workshops
December 2019: Workshop reports finalized
January 2020: Synthesis meeting at UC San Diego
February 2020: Publish draft roadmap for community feedback
April 2020: Revision of roadmap based on community feedback
May 2020: Finalize roadmap with graphics design
July 2020: Publish roadmap
If you have any questions about the process, the scope, etc., please send e-mail to Henrik I Christensen at hichristensen@eng.ucsd.edu.
Henrik I Christensen spoke at the Robotics Summit & Expo in Boston.
Keven Walgamott had a good “feeling” about picking up the egg without crushing it. What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of LUKE, a high-tech prosthetic arm with fingers that not only can move, they can move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.
That’s because the team, led by University of Utah biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (named after the robotic hand that Luke Skywalker got in The Empire Strikes Back) to mimic the way a human hand feels objects by sending the appropriate signals to the brain.
Their findings were published in a new paper co-authored by University of Utah biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark, and other colleagues in the latest edition of the journal Science Robotics.
Sending the right messages
“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”
That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up, and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.
“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”
Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the University of Utah, was able to pluck grapes without crushing them, pick up an egg without cracking it, and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.
“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”
Those feats are accomplished through a complex series of mathematical calculations and modeling.
Keven Walgamott wears the LUKE prosthetic arm. Credit: University of Utah Center for Neural Interfaces
The LUKE Arm
The LUKE Arm has been in development for some 15 years. The arm itself is made of mostly metal motors and parts with a clear silicone “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.
Meanwhile, the University of Utah team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by University of Utah biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array.
The Array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.
But it also works the other way. Performing tasks such as picking up objects requires more than just the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert, because you can’t figure that out just by looking at it.
First, the prosthetic arm has sensors in its hand that send signals to the nerves via the Array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how your brain deals with transitions in information when it first touches something. Upon first contact of an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this was a big step.
“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.
To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.
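The burst-then-taper encoding described above can be sketched as a simple rate model. The following Python sketch is illustrative only — the gains `k_static` and `k_dynamic` and the force trace are invented for the example, not the Utah team's published parameters. The idea: stimulation intensity tracks both the sustained contact force and its rectified rate of change, so a step in force produces an onset burst that settles to a plateau.

```python
# Illustrative sketch of a "biomimetic" touch-encoding scheme: stimulation
# rate combines a static term (sustained force) and a dynamic term (rectified
# rate of change of force). At contact onset the dynamic term dominates,
# producing a burst of impulses that tapers off, as the article describes.

def biomimetic_rate(forces, dt=0.01, k_static=1.0, k_dynamic=0.5):
    """Map a contact-force trace to a stimulation-rate trace."""
    rates = []
    prev = forces[0]
    for f in forces:
        df = max((f - prev) / dt, 0.0)          # rectified rate of change
        rates.append(k_static * f + k_dynamic * df)
        prev = f
    return rates

# A step in contact force: no contact for 5 samples, then steady contact.
force = [0.0] * 5 + [1.0] * 10
rates = biomimetic_rate(force)
# The rate bursts at the moment of contact, then tapers to a steady plateau.
```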
Future research
In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.
Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.
Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.
The research involves a number of institutions including the University of Utah’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation and Department of Orthopedics, the University of Chicago’s Department of Organismal Biology and Anatomy, the Cleveland Clinic’s Department of Biomedical Engineering, and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.
“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”
Editor’s note: Reposted from the University of Utah.
Among the challenges for developers of mobile manipulation and humanoid robots is the need for an affordable and flexible research platform. PAL Robotics last month announced its TIAGo++, a robot that includes two arms with seven degrees of freedom each.
As with PAL Robotics‘ one-armed TIAGo, the new model is based on the Robot Operating System (ROS) and can be expanded with additional sensors and end effectors. TIAGo++ is intended to enable engineers to create applications that include a touchscreen interface for human-robot interaction (HRI) and require simultaneous perception, bilateral manipulation, mobility, and artificial intelligence.
Jordi Pagès, product manager of the TIAGo robot at PAL Robotics, responded to the following questions about TIAGo++ from The Robot Report:
For the development of TIAGo++, how did you collect feedback from the robotics community?
Pagès: PAL Robotics has a long history in research and development. We have been creating service robotics platforms since 2004. When we started thinking about the TIAGo robot development, we asked researchers from academia and industry which features they would expect or value in a platform for research.
Our goal with TIAGo has always been the same: to deliver a robust platform for research that easily adapts to diverse robotics projects and use cases. That’s why it was key to be in touch with the robotics and AI developers from the start.
After delivering the robots, we usually ask for feedback and stay in touch with the research centers to learn about their activities and experiences, and the possible improvements or suggestions they would have. We do the same with the teams that use TIAGo for competitions like RoboCup or the European Robotics League [ERL].
At the same time, TIAGo is used in diverse European-funded projects where end users from different sectors, from healthcare to industry, are involved. This allows us to also learn from their feedback and keep finding new ways in which the platform could be of help in a user-centered way. That’s how we knew that adding a second arm to TIAGo’s portfolio of modular possibilities could be of help to the robotics community.
How long did it take PAL Robotics to develop the two-armed TIAGo++ in comparison with the original model?
Pagès: Our TIAGo platform is very modular and robust, so it took us just a few months from making the decision to having a working TIAGo++ ready to go. The modularity of all our robots and our wide experience developing humanoids usually help us a lot in reducing redesign and production time.
The software is also very modular, with extensive use of ROS, the de facto standard robotics middleware. Our customers are able to upgrade, modify, and substitute ROS packages. That way, they can focus their attention on their real research on perception, navigation, manipulation, HRI, and AI.
How high can TIAGo++ go, and what’s its reach?
Pagès: TIAGo++ can reach the floor and up to 1.75m [5.74 ft.] high with each arm, thanks to the combination of its 7 DoF [seven degrees of freedom] arms and its lifting torso. The maximum extension of each arm is 92cm [36.2 in.]. In our experience, this workspace allows TIAGo to work in several environments like domestic, healthcare, and industry.
The TIAGo can extend in height, and each arm has a reach of about 3 ft. Source: PAL Robotics
What’s the advantage of seven degrees of freedom for TIAGo’s arms over six degrees?
Pagès: A 7-DoF arm is much better in this sense for people who will be doing manipulation tasks. Adding more DoFs means that the robot can reach poses — positions and orientations — of its arm and end effector that it couldn’t reach before.
Also, this enables developers to avoid singularities and the undesired abrupt movements they cause. This means that TIAGo has more possibilities to move its arm and reach a certain pose in space, with a more optimal combination of movements.
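The redundancy Pagès describes can be illustrated with a toy planar arm. In this sketch the link lengths and joint configurations are invented for illustration, not TIAGo's actual kinematics: a 3-link arm positioning its end effector in 2-D has one redundant degree of freedom, so distinct joint configurations reach the same point, just as a 7-DoF arm in a 6-DoF task space can choose among arm poses.

```python
import math

# Toy planar 3-link arm with equal 0.3 m links (illustrative values).
# Three joints but only a 2-D end-effector position: one DoF is redundant,
# so different joint configurations can reach the same point.

LINKS = [0.3, 0.3, 0.3]

def fk(q):
    """Forward kinematics: end-effector (x, y) of the planar chain."""
    x = y = theta = 0.0
    for qi, li in zip(q, LINKS):
        theta += qi                     # cumulative joint angle
        x += li * math.cos(theta)
        y += li * math.sin(theta)
    return x, y

# Two different joint configurations that reach the same end-effector point,
# demonstrating the redundant degree of freedom.
q_a = [0.0, math.pi / 2, -math.pi / 2]   # bend in the middle of the chain
q_b = [math.pi / 2, -math.pi / 2, 0.0]   # bend at the base instead
```

Both configurations place the end effector at (0.6, 0.3) m while the intermediate links occupy different space, which is exactly the freedom a redundant arm exploits to route its elbow around obstacles or away from singular configurations.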
What sensors and motors are in the robot? Are they off-the-shelf or custom?
Pagès: All our mobile-based platforms, like the TIAGo robot, combine many sensors. TIAGo has a laser and sonars to move around and localize itself in space, an IMU [inertial measurement unit], and an RGB-D camera in the head. It can have a force/torque sensor on the wrist, especially useful to work in HRI scenarios. It also has a microphone and a speaker.
TIAGo has current sensing in every joint of the arm, enabling a very soft, effortless torque control on each of the arms. The possibility of having an expansion panel with diverse connectors makes it really easy for developers to add even more sensors to it, like a thermal camera or a gripper camera, once they have TIAGo in their labs.
As for the motors, TIAGo++ makes use of our custom joints, integrating high-quality commercial components with our own electronic power management and control. All motors also have encoders to measure the current motor position.
What’s the biggest challenge that a humanoid like TIAGo++ can help with?
Pagès: TIAGo++ can help with tasks that require bimanipulation in combination with navigation, perception, HRI, or AI. Even though a one-arm robot can already perform a wide range of tasks, there are many actions in our daily life that require two arms, or that are done more comfortably or quickly with two arms than with one.
For example, two arms are good for grasping and carrying a box, carrying a platter, serving liquids, opening a bottle or a jar, folding clothes, or opening a wardrobe while holding an object. In the end, our world and tools have been designed for the average human body, which has two arms, so TIAGo++ can adapt to that.
As a research platform based on ROS, is there anything that isn’t open-source? Are navigation and manipulation built in or modular?
Pagès: Most software is provided either open-sourced or with headers and dynamic libraries so that customers can develop applications making use of the given APIs or using the corresponding ROS interfaces at runtime.
For example, all the controllers in TIAGo++ are plugins of ros_control, so customers can implement their own controllers following our public tutorials and deploy them on the real robot or in the simulation.
Moreover, users can replace any ROS package by their own packages. This approach is very modular, and even if we provide navigation and manipulation built-in, developers can use their own navigation and manipulation instead of ours.
Did PAL work with NVIDIA on design and interoperability, or is that an example of the flexibility of ROS?
Pagès: It is an example both of how easy it is to expand TIAGo with external devices and of how easy it is to integrate those devices in ROS.
One example of applications that our clients have developed using the NVIDIA Jetson TX2 is the “Bring me a beer” task from the Homer Team [at RoboCup], at the University of Koblenz-Landau. They made a complete application in which TIAGo robot could understand a natural language request, navigate autonomously to the kitchen, open the fridge, recognize and select the requested beer, grasp it, and deliver it back to the person who asked for it.
As a company, we work with multiple partners, but we also believe that our users should be able to have a flexible platform that allows them to easily integrate off-the-shelf solutions they already have.
How much software support is there for human-machine interaction via a touchscreen?
Pagès: The idea behind integrating a touchscreen on TIAGo++ is to bring customers the possibility to implement their own graphical interface, so we provide full access to the device. We work intensively with researchers, and we provide platforms as open as our customers need, such as a haptic interface.
What do robotics developers need to know about safety and security?
Pagès: A list of safety measures and best practices is provided in the TIAGo robot handbook so that customers can ensure safety both around the robot and for the robot itself.
TIAGo also features some implicit control modes that help ensure safety during operation. For example, an effort control mode for the arms is provided so that collisions can be detected and the arm can be set in gravity-compensation mode.
Furthermore, the wrist can include a six-axis force/torque sensor providing more accurate feedback about collisions or interactions of the end effector with the environment. This sensor can also be used to increase the safety of the robot. We provide this information to our customers and developers so they are always aware of the safety measures.
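The effort-based collision detection described above can be sketched as a residual check between the joint torques a motion model predicts and those implied by the measured motor currents. This is an illustrative sketch, not PAL Robotics' implementation; the threshold and torque values are invented for the example.

```python
# Illustrative current/effort-based collision check: if a joint's measured
# torque deviates from the model-predicted torque by more than a threshold,
# treat it as an unexpected contact. Values and threshold are invented.

def detect_collision(measured_torques, expected_torques, threshold=2.0):
    """Return the indices of joints whose torque residual exceeds
    the threshold (N*m), i.e., likely collision points."""
    return [i for i, (m, e) in enumerate(zip(measured_torques, expected_torques))
            if abs(m - e) > threshold]

# Joint 2's large residual (5.0 vs. 1.2 N*m expected) suggests a contact;
# a real controller might then switch to gravity-compensation mode.
flags = detect_collision([0.1, 1.0, 5.0, 0.4], [0.0, 0.9, 1.2, 0.5])
```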
Have any TIAGo users moved toward commercialization based on what they’ve learned with PAL’s systems?
Pagès: At the moment, from the TIAGo family, we commercialize the TIAGo Base for intralogistics automation in indoor spaces such as factories or warehouses.
Some configurations of the TIAGo robot have been tested in pilots in healthcare applications. In the EnrichMe H2020 EU project, the robot autonomously assisted elderly people at home for up to approximately two months.
In robotics competitions such as the ERL, teams have shown TIAGo’s outstanding performance in accomplishing specific actions in a domestic environment. Two teams finished first and third in the RoboCup@Home OPL 2019 in Sydney, Australia. The Homer Team won for the third time in a row using TIAGo — see it clean a toilet here.
The CATIE Robotics Team finished third in the first world championship in which it participated. In one task, for instance, it took out the trash.
The TIAGo robot is also used for European Union Horizon 2020 experiments in which collaborative robots that combine mobility with manipulation are used in industrial scenarios. This includes projects such as MEMMO for motion generation, Co4Robots for coordination, and RobMoSys for open-source software development.
Besides this research aspect, we have industrial customers that are using TIAGo to improve their manufacturing procedures.
Why should developers choose TIAGo++ over other mobile manipulation platforms?
Pagès: With TIAGo++, besides the platform itself, you also get support, extra advanced software solutions, and assessment from a company that has been in the robotics sector for more than 15 years. Robots like the TIAGo++ also benefit from our know-how in both software and hardware, knowledge the team has been gathering through the development of cutting-edge biped humanoids like the torque-controlled TALOS.
From a technical point of view, TIAGo++ was made very compact to suit environments shared with people, such as homes. Baxter was a very nice entry-point platform, but it was originally designed as a fixed manipulator rather than a mobile one. TIAGo++ can use the same navigation as our commercial autonomous mobile robot for intralogistics tasks, the TIAGo Base.
Besides, TIAGo++ is a fully customizable robot in all aspects: You can select the options you want in hardware and software, so you get the ideal platform you want to have in your robotics lab. For a mobile manipulator with two 7-DoF arms, force/torque sensors, ROS-based, affordable, and with community support, we believe TIAGo++ should be a very good option.
The TIAGo community is growing around the world, and we are sure that we will see more and more robots helping people in different scenarios very soon.
What’s the price point for TIAGo++?
Pagès: The starting price is around €90,000 [$100,370 U.S.]. It really depends on the configuration, devices, computer power, sensors, and extras that each client can choose for their TIAGo robot, so the price can vary.
The Lucid Robotic System has received FDA clearance. Source: Neural Analytics
LOS ANGELES — Neural Analytics Inc., a medical robotics company developing and commercializing technologies to measure and track brain health, has announced a strategic partnership with NGK Spark Plug Co., a Japan-based company that specializes in comprehensive ceramics processing. Neural Analytics said the partnership will allow it to expand its manufacturing capabilities and global footprint.
Neural Analytics’ Lucid Robotic System (LRS) includes the Lucid M1 Transcranial Doppler Ultrasound System and NeuralBot system. The resulting autonomous robotic transcranial Doppler (rTCD) platform is designed to non-invasively search, measure, and display objective brain blood-flow information in real time.
The Los Angeles-based company’s technology integrates ultrasound and robotics to empower clinicians with critical information about brain health to make clinical decisions. Through its algorithm, analytics, and autonomous robotics, Neural Analytics provides valuable information that can identify pathologies such as Patent Foramen Ovale (PFO), a form of right-to-left shunt.
Nagoya, Japan-based NGK Spark Plug claims to be the world’s leading manufacturer of spark plugs and automotive sensors, as well as a broad lineup of packaging, cutting tools, bio ceramics, and industrial ceramics. The company has more than 15,000 employees and develops products related to the environment, energy, next-generation vehicles, and the medical device and diagnostic industries.
Neural Analytics and NGK to provide high-quality parts, global access
“This strategic partnership between Neural Analytics and NGK Spark Plug is built on a shared vision for the future of global healthcare and a foundation of common values,” said Leo Petrossian, Ph.D., co-founder and CEO of Neural Analytics. “We are honored with this opportunity and look forward to learning from our new partners how they have built a great global enterprise.”
NGK Spark Plug has vast manufacturing expertise in ultra-high-precision ceramics. With this partnership, both companies said they are committed to working together to build high-quality products at a reasonable cost to allow greater access to technologies like the Lucid Robotic System.
“I am very pleased with this strategic partnership with Neural Analytics,” said Toru Matsui, executive vice president of NGK Spark Plug. “This, combined with a shared vision, is an exciting opportunity for both companies. This alliance enables the acceleration of their great technology to the greater market.”
This follows Neural Analytics’ May announcement of its Series C round close, led by Alpha Edison. In total, the company has raised approximately $70 million in funding to date.
Neural Analytics said it remains “committed to advancing brain healthcare through transformative technology to empower clinicians with the critical information needed to make clinical decisions and improve patient outcomes.”
Targeting medical treatment to an ailing body part is a practice as old as medicine itself. Drops go into itchy eyes. A broken arm goes into a cast. But often what ails us is inside the body and is not so easy to reach. In such cases, a treatment like surgery or chemotherapy might be called for. A pair of researchers in Caltech’s Division of Engineering and Applied Science are working on an entirely new form of treatment — microrobots that can deliver drugs to specific spots inside the body while being monitored and controlled from outside the body.
“The microrobot concept is really cool because you can get micromachinery right to where you need it,” said Lihong Wang, Bren Professor of Medical Engineering and Electrical Engineering at the California Institute of Technology. “It could be drug delivery, or a predesigned microsurgery.”
The microrobots are a joint research project of Wang and Wei Gao, assistant professor of medical engineering, and are intended for treating tumors in the digestive tract.
Developing jet-powered microrobots
The microrobots consist of microscopic spheres of magnesium metal coated with thin layers of gold and parylene, a polymer that resists digestion. The layers leave a circular portion of the sphere uncovered, kind of like a porthole. The uncovered portion of the magnesium reacts with the fluids in the digestive tract, generating small bubbles. The stream of bubbles acts like a jet and propels the sphere forward until it collides with nearby tissue.
On their own, magnesium spherical microrobots that can zoom around might be interesting, but they are not especially useful. To turn them from a novelty into a vehicle for delivering medication, Wang and Gao made some modifications to them.
First, a layer of medication is sandwiched between an individual microsphere and its parylene coat. Then, to protect the microrobots from the harsh environment of the stomach, they are enveloped in microcapsules made of paraffin wax.
Laser-guided delivery
At this stage, the spheres are capable of carrying drugs, but still lack the crucial ability to deliver them to a desired location. For that, Wang and Gao use photoacoustic computed tomography (PACT), a technique developed by Wang that uses pulses of infrared laser light.
The infrared laser light diffuses through tissues and is absorbed by oxygen-carrying hemoglobin molecules in red blood cells, causing the molecules to vibrate ultrasonically. Those ultrasonic vibrations are picked up by sensors pressed against the skin. The data from those sensors is used to create images of the internal structures of the body.
Previously, Wang has shown that variations of PACT can be used to identify breast tumors, or even individual cancer cells. With respect to the microrobots, the technique has two jobs. The first is imaging. By using PACT, the researchers can find tumors in the digestive tract and also track the location of the microrobots, which show up strongly in the PACT images.
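The localization principle behind this kind of imaging can be sketched with a toy time-of-flight model, the idea underlying delay-and-sum reconstruction in photoacoustic tomography. This is an illustrative sketch, not Caltech's PACT pipeline: the detector positions, source location, and coherence score are all invented for the example.

```python
import math

# Toy time-of-flight localization: an absorber (e.g., a strongly absorbing
# microrobot) emits an ultrasonic pulse; each detector records it after a
# propagation delay. A candidate point that explains all the arrival times
# with a consistent delay must be (near) the true source.

C = 1.5  # speed of sound in tissue, mm/us (~1500 m/s)

detectors = [(-10.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # positions in mm
source = (2.0, 4.0)                                   # true emitter, mm

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Simulated arrival time of the pulse at each detector.
arrivals = [dist(d, source) / C for d in detectors]

def das_score(p):
    """Coherence score for candidate point p: the spread of the residuals
    (arrival time minus predicted delay). Zero spread means p explains
    every detector's arrival, i.e., p is the source."""
    residuals = [t - dist(d, p) / C for d, t in zip(detectors, arrivals)]
    mean = sum(residuals) / len(residuals)
    return sum((r - mean) ** 2 for r in residuals)

# Grid search over a 0.5 mm grid; the true source should score zero.
best = min(((x * 0.5, y * 0.5) for x in range(-20, 21) for y in range(0, 21)),
           key=das_score)
```

Real PACT reconstructs a full image by applying this delay-and-sum idea over many detector elements and pixels at once, but the core geometry is the same.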
Microrobots activated by lasers and powered by magnesium jets could deliver medicine within the human body. Source: Caltech
Once the microrobots arrive in the vicinity of the tumor, a high-power continuous-wave near-infrared laser beam is used to activate them. Because the microrobots absorb the infrared light so strongly, they briefly heat up, melting the wax capsule surrounding them, and exposing them to digestive fluids.
At that point, the microrobots’ bubble jets activate, and the microrobots begin swarming. The jets are not steerable, so the technique is sort of a shotgun approach — the microrobots will not all hit the targeted area, but many will. When they do, they stick to the surface and begin releasing their medication payload.
“These micromotors can penetrate the mucus of the digestive tract and stay there for a long time. This improves medicine delivery,” Gao says. “But because they’re made of magnesium, they’re biocompatible and biodegradable.”
Pushing the concept
Tests in animal models show that the microrobots perform as intended, but Gao and Wang say they are planning to continue pushing the research forward.
“We demonstrated the concept that you can reach the diseased area and activate the microrobots,” Gao says. “The next step is evaluating the therapeutic effect of them.”
Gao also says he would like to develop variations of the microrobots that can operate in other parts of the body, and with different types of propulsion systems.
Wang says his goal is to improve how his PACT system interacts with the microrobots. The infrared laser light it uses has some difficulty reaching into deeper parts of the body, but he says it should be possible to develop a system that can penetrate further.
The paper describing the microrobot research, titled, “A microrobotic system guided by photoacoustic tomography for targeted navigation in intestines in vivo,” appears in the July 24 issue of Science Robotics. Other co-authors include Zhiguang Wu, Lei Li, Yiran Yang (MS ’18), Yang Li, and So-Yoon Yang of Caltech; and Peng Hu of Washington University in St. Louis. Funding for the research was provided by the National Institutes of Health and Caltech’s Donna and Benjamin M. Rosen Bioengineering Center.
Editor’s note: This article republished from the California Institute of Technology.
One of the barriers to more widespread development and adoption of exoskeletons for industrial, medical, and military use has been a lack of standards. ASTM International this month proposed a guide to provide standardized tools to assess and improve the usability and usefulness of exoskeletons and exosuits.
“Exoskeletons and exosuits can open up a world of possibilities, from helping workers perform industrial tasks while not getting overstressed, to helping stroke victims learning to walk again, to helping soldiers carry heavier rucksacks longer distances,” said Kevin Purcell, an ergonomist at the U.S. Army Public Health Center’s Aberdeen Proving Ground. “But if it doesn’t help you perform your task and/or it’s hard to use, it won’t get used.”
He added that the guide will incorporate ways to understand the attributes of exoskeletons, as well as observation methods and questionnaires to help assess an exoskeleton’s performance and safety.
“The biggest challenge in creating this standard is that exoskeletons change greatly depending on the task the exoskeleton is designed to help,” said Purcell. “For instance, an industrial exoskeleton is a totally different design from one used for medical rehabilitation. The proposed standard will need to cover all types and industries.”
According to Purcell, industrial, medical rehabilitation, and defense users will benefit most from the proposed standard, as will exoskeleton manufacturers and regulatory bodies.
The F48 committee of ASTM International, previously known as the American Society for Testing and Materials, was formed in 2017. It is currently working on the proposed exoskeleton and exosuit standard, WK68719. Its six subcommittees include about 150 members, including startups, government agencies, and enterprises such as Boeing and BMW.
ASTM publishes first standards
In May, ASTM International published its first two standards documents, which are intended to provide consensus terminology (F3323) and set forth basic labeling and other informational requirements (F3358). The standards are available for purchase.
“Exoskeletons embody the technological promise of empowering humans to be all they can be,” said F48 committee member William Billotte, a physical scientist at the U.S. National Institute of Standards and Technology (NIST). “We want to make sure that labels and product information are clear, so that exoskeletons fit people properly, so that they function safely and effectively, and so that people can get the most from these innovative products.”
The committee is working on several proposed standards and welcomes more participation from members of the exoskeleton community. For example, Billotte noted that the committee seeks experts in cybersecurity due to the growing need to secure data, controls, and biometrics in many exoskeletons.
An exoskeleton vest at a BMW plant in Spartanburg, S.C. Source: BMW
Call for an exoskeleton center of excellence
Last month, ASTM International called for proposals for an “Exo Technologies Center of Excellence.” The winner would receive up to $250,000 per year for up to five years. Full proposals are due today, and the winner will be announced in September, said ASTM.
“Now is the right time to create a hub of collaboration among startups, companies, and other entities that are exploring how exoskeletons could support factory workers, patients, the military, and many other people,” stated ASTM International President Katharine Morgan. “We look forward to this new center serving as a catalyst for game-changing R&D, standardization, related training, partnerships, and other efforts that help the world benefit from this exciting new technology.”
The center of excellence is intended to fill knowledge gaps, provide a global hub for education and a neutral forum to discuss common challenges, and provide a library of community resources. It should also coordinate global links among stakeholders, said ASTM.
West Conshohocken, Pa.-based ASTM International said it meets World Trade Organization (WTO) principles for developing international standards. The organization’s standards are used globally in research and development, product testing, quality systems, commercial transactions, and more.
Minimally invasive surgery (MIS) is a modern technique that allows surgeons to perform operations through small incisions (usually 5-15 mm). Although it has numerous advantages over older surgical techniques, MIS can be more difficult to perform. Some inherent drawbacks are:
Limited motion due to straight laparoscopic instruments and fixation enforced by the small incision in the abdominal wall
Impaired vision due to two-dimensional imaging
Long instruments that amplify the effects of the surgeon’s tremor
Poor ergonomics imposed on the surgeon
Loss of haptic feedback, which is distorted by friction forces on the instrument and reactionary forces from the abdominal wall.
Minimally Invasive Robotic Surgery (MIRS) offers solutions that minimize or eliminate many of the pitfalls associated with traditional laparoscopic surgery. MIRS platforms such as Intuitive Surgical’s da Vinci, approved by the U.S. Food and Drug Administration in 2000, represent a historical milestone in surgical treatment. The ability to retain the advantages of laparoscopic surgery while augmenting surgeons’ dexterity and visualization and eliminating the ergonomic discomfort of long surgeries makes MIRS an essential technology for patients, surgeons, and hospitals.
However, despite all the improvements in commercially available MIRS, haptic feedback remains a major limitation reported by robot-assisted surgeons. Because the interventionist no longer manipulates the instrument directly, natural haptic feedback is eliminated. Haptics combines kinesthetic perception (the form and shape of muscles, tissues, and joints) with tactile perception (cutaneous texture and fine detail) and spans many physical variables, such as force, distributed pressure, temperature, and vibration.
Direct benefits of sensing interaction forces at the surgical end-effector are:
Improved organic tissue characterization and manipulation
Assessment of anatomical structures
Reduction of suture breakage
An overall improvement in the feel of robot-assisted surgery.
Haptic feedback also plays a fundamental role in shortening the learning curve for young surgeons in MIRS training. A tertiary benefit of accurate real-time direct force measurement is that the data collected from these sensors can be utilized to produce accurate tissue and organ models for surgical simulators used in MIS training. Futek Advanced Sensor Technology, an Irvine, Calif.-based sensor manufacturer, shared these tips on how to design and manufacture haptic sensors for surgical robotics platforms.
With a force, torque and pressure sensor enabling haptic feedback to the hands of the surgeon, robotic minimally invasive surgery can be performed with higher accuracy and dexterity while minimizing trauma to the patient. | Credit: Futek
Technical and economic challenges of haptic feedback
Adding to the inherent complexity of measuring haptics, engineers and neuroscientists face important issues that require consideration prior to the sensor design and manufacturing stages. The location of the sensing element, which significantly influences measurement consistency, presents MIRS designers with a dilemma: should they place the sensor outside the abdominal wall near the actuation mechanism driving the end-effector (a.k.a. indirect force sensing), or inside the patient at the instrument tip, embedded in the end-effector (a.k.a. direct force sensing)?
The pros and cons of these two approaches are associated with measurement accuracy, size restrictions and sterilization and biocompatibility requirements. Table 1 compares these two force measurement methods.
In MIRS applications, where very delicate instrument-tissue interaction forces must be fed back precisely to the surgeon, measurement accuracy is a sine qua non, which makes intra-abdominal direct sensing the ideal option.
However, this novel approach not only brings the design and manufacturing challenges described in Table 1 but also demands higher reusability. Commercially available MIRS systems that are modular in design allow the laparoscopic instrument to be reused approximately 12 to 20 times. Adding the sensing element near the end-effector invariably increases the cost of the instrument and demands further consideration during the design stage to enhance sensor reusability.
Appropriate electronic components, strain measurement methods, and electrical connections must withstand repeated autoclave cycles as well as survive high-pH washing. Coping with these special design requirements invariably increases the unit cost per sensor. However, the extended lifespan and cycle count reduce the cost per cycle, making the direct measurement method financially viable.
Hermeticity of high-precision, subminiature load sensing elements is an equal challenge for intra-abdominal direct force measurement. The conventional approach to sealing electronic components is conformal coating, which is extensively used in submersible devices. While this solution provides protection in low-pressure water-submersion environments for consumer electronics, coating protection is not sufficiently airtight and is not suitable for high-reliability, reusable, and sterilizable medical solutions.
Even under extreme process controls, conformal coatings have proven marginal, providing at most 20 to 30 autoclave cycles. The autoclave sterilization process presents a harsh physicochemical environment of high-pressure, high-temperature saturated steam. As in helium leak detection technology, saturated steam particles are much smaller than liquid water particles and are capable of penetrating and degrading the coating over time, causing the device to fail unpredictably.
An alternative, conventional approach to achieving hermeticity is to weld a header interface onto the sensor. Again, welding faces obstacles in miniaturized sensors due to size constraints. A more robust, novel approach is a monolithic sensor using custom-formulated, CTE-matched, chemically neutral, high-temperature fused-isolator technology to feed electrical conductors through the walls of the hermetically sealed active sensing element. Fused-isolator technology has demonstrated reliability over hundreds to thousands of autoclave cycles.
Other design considerations for haptic feedback
As mentioned, miniaturization, biocompatibility, autoclavability, and high reusability are some of the unique requirements imposed on a haptic sensor by the surgical environment. In addition, designers must also meet requirements inherent to any high-performance force measurement device.
Extraneous load (or crosstalk) compensation provides optimal resistance to off-axis loads, maximizing operating life and minimizing reading errors. Force and torque sensors are engineered to capture forces along the Cartesian axes, typically X, Y, and Z. From these three orthogonal axes, a sensor derives one to six measurement channels: three force channels (Fx, Fy, and Fz) and three torque or moment channels (Mx, My, and Mz). In theory, a load applied along one axis should not register on any other channel, but this is not always the case. For most force sensors, this undesired cross-channel interference is between 1% and 5%, and since one channel can pick up extraneous loads from five other channels, the total crosstalk can be as high as 5% to 25%.
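As a back-of-the-envelope illustration (not Futek's published figures), the worst-case bound above follows directly from assuming a uniform interference ratio per channel pair:

```python
# Worst-case crosstalk bound for a six-channel force/torque sensor,
# assuming every channel picks up the same fixed fraction of the load
# on each of the other channels (a simplification of real behavior).

def worst_case_crosstalk(per_pair_interference: float, num_channels: int = 6) -> float:
    """Fractional error on one channel when all other channels
    simultaneously carry full-scale extraneous loads."""
    return per_pair_interference * (num_channels - 1)

low = worst_case_crosstalk(0.01)   # 1% per pair -> 5% total
high = worst_case_crosstalk(0.05)  # 5% per pair -> 25% total
```

Real sensors rarely see full-scale loads on all five other channels at once, so this is an upper bound rather than a typical error.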
In robotic surgery, the sensor must be designed to negate extraneous or crosstalk loads, which include friction between the end-effector instrument and the trocar, reactionary forces from the abdominal wall, and the gravitational effect of mass along the instrument axis. In some cases, miniaturized sensors are so space-constrained that side loads must be compensated by other means, such as electronic or algorithmic compensation.
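One common algorithmic approach (a generic sketch, not necessarily Futek's method) is to characterize the cross-channel interference in a calibration matrix and apply its inverse to the raw readings:

```python
import numpy as np

# Sketch of algorithmic crosstalk compensation with a calibration matrix.
# C maps true loads to raw channel readings; applying its inverse to the
# readings recovers decoupled loads. The 3% interference value and the
# 3-channel example are illustrative assumptions.
C = np.array([[1.00, 0.03, 0.03],
              [0.03, 1.00, 0.03],
              [0.03, 0.03, 1.00]])
C_inv = np.linalg.inv(C)

true_load = np.array([10.0, 0.0, 0.0])  # pure Fx load
raw = C @ true_load                     # Fx leaks into the Fy and Fz channels
compensated = C_inv @ raw               # recovers ~[10, 0, 0]
```

In practice the matrix is identified during factory calibration by loading each axis in turn and recording the response on all channels.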
Calibration of a direct inline force sensor imposes restrictions as well. Calibration fixtures are optimized with SR buttons to direct the load precisely through the sensing element. If the calibration assembly is not equipped with such arrangements, the final calibration may be affected by parallel load paths.
Thermal effects are also a major challenge in strain measurement. Temperature variations cause material expansion, gage-factor variation, and other undesirable effects on the measurement. For this reason, temperature compensation is paramount to ensure accuracy and long-term stability even under severe ambient temperature swings.
The measures to counteract temperature effects on the readings are:
The use of high-quality, custom and self-compensated strain gages compatible with the thermal expansion coefficient of the sensing element material
Use of half or full Wheatstone bridge circuit configuration installed in both load directions (tension and compression) to correct for temperature drift
Fully internal temperature compensation of zero balance and output range, without the need for external conditioning circuitry.
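To see why a full Wheatstone bridge cancels a uniform thermal shift, consider this idealized sketch (the resistance values and excitation voltage are illustrative assumptions, not vendor specifications):

```python
# Idealized full Wheatstone bridge: two arms in tension (+ds), two in
# compression (-ds), plus a common thermal resistance shift dt on all
# four arms. The thermal term cancels in the differential output.

def bridge_output(r, ds, dt, v_exc=5.0):
    """Differential output voltage of a full bridge.
    r: nominal arm resistance (ohms), ds: strain-induced change,
    dt: uniform thermal change, v_exc: excitation voltage."""
    r1 = r + ds + dt  # tension arm
    r2 = r - ds + dt  # compression arm
    r3 = r - ds + dt  # compression arm
    r4 = r + ds + dt  # tension arm
    v_a = v_exc * r2 / (r1 + r2)  # left half-bridge divider
    v_b = v_exc * r4 / (r3 + r4)  # right half-bridge divider
    return v_b - v_a  # algebraically v_exc * ds / (r + dt)
```

With no applied strain (ds = 0), a purely thermal shift produces zero differential output; an applied strain yields v_exc * ds / (r + dt), where the thermal term enters only as a small second-order effect on sensitivity.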
In some special cases, custom strain gages with fewer solder connections help reduce the temperature impact of solder joints. A typical force sensor with four individual strain gages has upwards of 16 solder joints, while custom strain elements can reduce this to fewer than six. This improves reliability, since solder joints, a common point of failure, are significantly reduced in number.
During the design phase, it is also imperative to design such sensors for high reliability along with high-volume manufacturability, taking into consideration the equipment and processes that will be required should a device be designated for high-volume manufacturing. Automated high-volume processes can differ slightly or significantly from the benchtop or prototype equipment used at lower volumes. Scaling up must maintain focus on reducing failure points during the manufacturing process, along with failure points that could occur in the field.
Testing for medical applications concerns a measurement device's ability to withstand a high number of cycles more than its resistance to strenuous structural stress. For medical sensors in particular, overload and fatigue testing must be performed in conjunction with sterilization testing, in an intercalated process alternating cycles of fatigue and sterilization. The ability to survive hundreds of overload cycles while maintaining hermeticity translates into a failure-free, high-reliability sensor with a higher MTBF and a more competitive total cost of ownership.
Credit: Futek
Product development challenges
Understanding the inherent design challenges of a haptic autoclavable sensor is imperative, but the sensor manufacturer must also be equipped with a talented multidisciplinary engineering team, in-house manufacturing capabilities supported by fully developed quality processes, and the product/project management proficiency to handle a complex, resource-limited, fast-paced new product development environment.
A multidisciplinary approach will result in a sensor element that meets the specifications in terms of nonlinearity, hysteresis, repeatability and cross-talk, as well as an electronic instrument that delivers analog and digital output, high sampling rate and bandwidth, high noise-free resolution and low power consumption, both equally necessary for a reliable turnkey haptics measurement solution.
Strategic control of all manufacturing processes (machining, lamination, wiring, calibration), allows manufacturers to engineer sensors with a design for manufacturability (DFM) mentality. This strategic control of manufacturing boils down to methodically selecting the bill of material, defining the testing plans, complying with standards and protocols and ultimately strategizing the manufacturing phase based on economic constraints.
According to the Australian Centre for Robotic Vision's Nicole Robinson, research studies on the impact of social robot interventions have been few and unsophisticated. The good news: the results are encouraging.
As our world struggles with mental health and substance use disorders affecting 970 million people and counting (according to 2017 figures), the time is ripe for meaningful social robot ‘interventions’. That’s the call by Australian Centre for Robotic Vision Research Fellow Nicole Robinson – a roboticist with expertise in psychology and health – as detailed in the Journal of Medical Internet Research (JMIR).
Having led Australia’s first study into the positive impact of social robot interventions on eating habits (in 2017), Robinson and the Centre’s social robotics team believe it is time to focus on weightier health and wellbeing issues, including depression, drug and alcohol abuse, and eating disorders.
Global Trials To Date
In the recently published JMIR paper, A Systematic Review of Randomised Controlled Trials on Psychosocial Health Interventions by Social Robots, Robinson reveals that global trials to date are ‘very few and unsophisticated’. Only 27 global trials met the inclusion criteria for psychosocial health interventions; many lacked a follow-up period, targeted small sample groups (<100 participants), and limited their scope to child health, autism spectrum disorder (ASD), and older adults.
Of concern, no randomised controlled trials have yet involved adolescents or young adults at a time when the World Health Organisation (WHO) estimates one in six adolescents (aged 10-19) are affected by mental health disorders. According to the agency, half of all mental health conditions start by 14 years of age, but most cases are undetected and untreated.
WHO warns: “The consequences of not addressing adolescent mental health conditions extend to adulthood, impairing both physical and mental health and limiting opportunities to lead fulfilling lives…”
In good news for the Centre's research into social robot interventions, WHO pushes for the adoption of multi-level and varied prevention and promotion programs, including via digital platforms.
A Therapeutic Alliance
Despite the limited amount of global research on psychosocial health interventions by social robots, Robinson believes the results are nevertheless encouraging. They indicate a ‘therapeutic alliance’ between robots and humans could lead to positive effects similar to those of digital interventions for managing anxiety, depression, and alcohol use.
“The beauty of social robot interventions is that they could help to side-step potential negative effects of face-to-face therapy with a human health practitioner, such as perceived judgement or stigma,” said Robinson, who has used Nao and SoftBank’s Pepper robots in her research at the Centre.
“Robots can help support a self-guided program or health service by interacting with people to help keep them on track with their health goals.
“Our research is not about replacing healthcare professionals, but identifying treatment gaps where social robots can effectively assist by engaging patients to discuss sensitive topics and identify problems that may require the attention of a health practitioner.”
In the JMIR paper, published last month, Robinson puts out a timely global call for research on social robot interventions to transition from exploratory investigations to large-scale controlled trials with sophisticated methodology.
At the Australian Centre for Robotic Vision’s QUT headquarters, she’s helping to lay the groundwork. The Centre’s research, sponsored by the Queensland Government, is assessing the capabilities of social robots and using SoftBank Robotics’ Pepper robot to explore applications where social robots can deliver value beyond their novelty appeal.
Social Robot Trials
In 2018, the Centre’s social robotics team initiated a set of trials involving Pepper robots to measure the unique value of social robots in one-to-one interactions in healthcare. After supporting an Australia-first trial of a Pepper robot at Townsville Hospital and Health Service, the Centre’s team has placed Pepper into a QUT Health Clinic at Kelvin Grove Campus.
The three-month study to June 2019 involves Pepper delivering a brief health assessment and providing customised feedback that can be taken to a health practitioner to discuss issues around physical activity, dietary intake, alcohol use and smoking. Members of the public who are registered as patients at the QUT Health Clinic are invited to take part in this trial.
In a separate online trial, the Centre’s social robotics team is assessing people’s attitudes to social robots and their willingness to engage with and discuss different topics with a robot or human as the conversation partner.
For more information on the Australian Centre for Robotic Vision’s work creating robots able to see and understand like humans, download our 2018 Annual Report.
Editor’s Note: This article was republished with permission from The Australian Centre for Robotic Vision.
KIST’s research shows that robots can be intuitively taught to be flexible by humans rather than through numerical calculation or programming the robot’s movements. Credit: KIST
The Center for Intelligent & Interactive Robotics at the Korea Institute of Science and Technology, or KIST, said that a team led by Dr. Kee-hoon Kim has developed a way of teaching “impedance-controlled robots” through human demonstrations. It uses surface electromyograms of muscles and succeeded in teaching a robot to trap a dropped ball like a soccer player.
A surface electromyogram (sEMG) is an electric signal produced during muscle activation that can be picked up on the surface of the skin, said KIST, which is led by Pres. Byung-gwon Lee.
Recently developed impedance-controlled robots have opened up a new era of robotics based on the natural elasticity of human muscles and joints, which conventional rigid robots lack. Robots with flexible joints are expected to be able to run, jump hurdles and play sports like humans. However, the technology required to teach such robots to move in this manner has been unavailable until recently.
KIST uses human muscle signals to teach robots how to move
The KIST research team claimed to be the first in the world to develop a way of teaching new movements to impedance-controlled robots using human muscle signals. With this technology, which detects not only human movements but also muscle contractions through sEMG, it’s possible for robots to imitate movements based on human demonstrations.
Dr. Kee-hoon Kim’s team said it succeeded in using sEMG to teach a robot to quickly and adroitly trap a rapidly falling ball before it comes into contact with a solid surface or bounces too far to reach — similar to the skills employed by soccer players.
sEMG sensors were attached to a man’s arm, allowing him to simultaneously control the location and flexibility of the robot’s rapid upward and downward movements. The man then “taught” the robot how to trap a rapidly falling ball by giving a personal demonstration. After learning the movement, the robot was able to skillfully trap a dropped ball without any external assistance.
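KIST has not published its exact mapping, but the idea can be sketched with a toy example in which a rectified, smoothed sEMG envelope scales the commanded joint stiffness, so stronger muscle activation during a demonstration commands a stiffer robot (all constants here are hypothetical):

```python
# Toy sketch (not KIST's actual method): map an sEMG envelope to the
# stiffness of an impedance-controlled joint.

def semg_envelope(samples, alpha=0.1):
    """Rectify and exponentially smooth raw sEMG samples (arbitrary units)."""
    env, out = 0.0, []
    for s in samples:
        env = (1 - alpha) * env + alpha * abs(s)
        out.append(env)
    return out

def stiffness_from_emg(envelope, k_min=5.0, k_max=50.0, emg_max=1.0):
    """Linearly scale envelope values (0..emg_max) to joint stiffness
    commands between k_min and k_max [N*m/rad] (assumed range)."""
    return [k_min + (k_max - k_min) * min(e / emg_max, 1.0) for e in envelope]
```

A relaxed arm then commands a compliant joint (k_min), while a fully contracted muscle commands the stiffest response (k_max), mirroring how the demonstrator's arm behaves when trapping the ball.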
sEMG sensors attached to a man’s arm allowed him to control the location and flexibility of a robot’s rapid movements. Source: KIST
This research outcome, which shows that robots can be intuitively taught to be flexible by humans, has attracted much attention, as it was not accomplished through numerical calculation or programming of the robot’s movements. This study is expected to help advance the study of interactions between humans and robots, bringing us one step closer to a world in which robots are an integral part of our daily lives.
Kim said, “The outcome of this research, which focuses on teaching human skills to robots, is an important achievement in the study of interactions between humans and robots.”
Fleets of autonomous mobile robots have been growing in warehouses and the service industry. Singapore-based Techmetics has entered the U.S. market with ambitions to supply multiple markets, which it already does overseas.
The company last month launched two new lines of autonomous mobile robots. The Techi Butler is designed to serve hotel guests or hospital patients by interacting with them via a touchscreen or smartphone. It can deliver packages, room-service orders, and linens and towels.
The Techi Cart is intended to serve back-of-house services such as laundry rooms, kitchens, and housekeeping departments.
“Techmetics serves 10 different applications, including manufacturing, casinos, and small and midsize businesses,” said Mathan Muthupillai, founder and CEO of Techmetics. “We’re starting with just two in the U.S. — hospitality and healthcare.”
Building a base
Muthupillai founded Techmetics in Singapore in 2012. “We spent the first three years on research and development,” he told The Robot Report. “By the end of 2014, we started sending out solutions.”
“The R&D team didn’t just start with product development,” recalled Muthupillai. “We started with finding clients first, identified their pain points and expectations, and got feedback on what they needed.”
“A lot of other companies make a robotic base, but then they have to build a payload solution,” he said. “We started with a good robot base that we found and added our body, software layer, and interfaces. We didn’t want to build autonomous navigation from scratch.”
“Now, we’re just getting components — lasers, sensors, motors — and building everything ourselves,” he explained. “The navigation and flow-management software are created in-house. We’ve created our own proprietary software.”
“We have a range of products, all of which use 2-D SLAM [simultaneous localization and mapping], autonomous navigation, and many safety sensors,” Muthupillai added. “They come with three lasers — two vertical and one horizontal for path planning. We’re working on a 3-D-based navigation solution.”
“Our robots are based on ROS [the Robot Operating System],” said Muthupillai. “We’ve created a unique solution that comes with third-party interfaces.”
Source: Techmetics
Techmetics payloads vary
The payload capacity of Techmetics’ robots depends on the application and accessories and ranges from about 265 to 550 lb. (120 to 250 kg).
“The payload and software are based on the behavior patterns in an industry,” said Muthupillai. “In manufacturing or warehousing, people are used to working around robots, but in the service sector, there are new people all the time. The robot must respond to them — they may stay in its path or try to stop it.”
“When we started this company, there were few mobile robots for the manufacturing industry. They looked industrial and had relatively few safety features because they weren’t near people,” he said. “We changed the form factor for hospitality to be good-looking and safer.”
“When we talk with hotels about the Butler robots, they needed something that could go to multiple rooms,” Muthupillai explained. “Usually, staffers take two to three items in a single trip, so if a robot went to only one room and then returned, that would be a waste of time. Our robots have three compartment levels based on this feedback.”
Elevators posed a challenge for the Techi Butler and Techi Cart — not just for interoperability, but also for human-machine interaction, he said.
“Again, people working with robots didn’t share elevators with robots, but in hospitals and hotels, the robot needs to complete its job alongside people,” Muthupillai said. “After three years, we’re still modifying or adding functionalities, and the robots can take an elevator or go across to different buildings.”
“We’re not currently focusing on the supply chain industry, but we will license and launch the base into the market so that third parties can create their own solutions,” he said.
Techi Cart transports linens and towels in a hotel or hospital. Source: Techmetics
Differentiators for Techi Butler and Cart
“We provide 10 robot models for four industries — no single company is a competitor for all our markets,” said Muthupillai. “We have three key differentiators.”
“First, customers can engage one vendor for multiple needs, and all of our robots can interact with one another,” he said. “Second, we talk with our clients and are always open to customization — for example, about compartment size — that others can’t do.”
“Third, we work across industries and can share our advantages across them,” Muthupillai claimed. “Since we already work with the healthcare industry, we already comply with safety and other regulations.”
“In hospitals or hotels, it’s not just about delivering a product from one point to another,” he said. “We’re adding camera and voice-recognition capabilities. If a robot sees a person who’s lost, it can help them.”
Techmetics’ mobile robots are manufactured in Thailand. According to Muthupillai, 80% of its robots are deployed in hotels and hospitals, and 20% are in manufacturing. The company already has distributors in Australia, Taiwan, and Thailand, and it is leveraging existing international clients for its expansion.
“We have many corporate clients in Singapore,” Muthupillai said. “The Las Vegas Sands Singapore has deployed 10 robots, and their headquarters in Las Vegas is considering deploying our products.”
“Also, U.K.-based Yotel has two hotels in Singapore, and its London branch is also interested,” he added. “The Miami Yotel is already using our robots, and soon they will be in San Francisco.”
Techmetics has three models for customers to choose from. The first is outright purchase, and the second is a two- or three-year lease. “The third model is innovative — they can try the robots for three to six months or one year and then buy,” Muthupillai said.
Muthupillai said he has moved to Techmetics’ branch office in the U.S. to manage its expansion. “We’ll be doing direct marketing in California, and we’re in the process of identifying partners, especially on the East Coast.”
“Only the theme, colors, or logos changed. No special modifications were necessary for the U.S. market,” he said. “We followed safety regulations overseas, but they were tied to U.S. regulations.”
“We will target the retail industry with a robot concierge, probably by the end of this year,” said Muthupillai. “We will eventually offer all 10 models in the U.S.”