Prescribing a Robot ‘Intervention’
According to the Australian Centre for Robotic Vision’s Nicole Robinson, research studies on the impact of social robot interventions have so far been few and unsophisticated. The good news: the results are encouraging.
As our world struggles with mental health and substance use disorders affecting 970 million people and counting (according to 2017 figures), the time is ripe for meaningful social robot ‘interventions’. That’s the call by Australian Centre for Robotic Vision Research Fellow Nicole Robinson – a roboticist with expertise in psychology and health – as detailed in the Journal of Medical Internet Research (JMIR).
Having led Australia’s first study into the positive impact of social robot interventions on eating habits (in 2017), Robinson and the Centre’s social robotics team believe it is time to focus on weightier health and wellbeing issues, including depression, drug and alcohol abuse, and eating disorders.
Global Trials To Date
In the recently published JMIR paper, A Systematic Review of Randomised Controlled Trials on Psychosocial Health Interventions by Social Robots, Robinson reveals global trials to date are ‘very few and unsophisticated’. Only 27 global trials met the inclusion criteria for psychosocial health interventions; many lacked a follow-up period, targeted small sample groups (fewer than 100 participants), and were limited to the contexts of child health, autism spectrum disorder (ASD) and older adults.
Of concern, no randomised controlled trials have yet involved adolescents or young adults, at a time when the World Health Organisation (WHO) estimates one in six adolescents (aged 10-19) is affected by a mental health disorder. According to the agency, half of all mental health conditions start by 14 years of age, but most cases are undetected and untreated.
WHO warns: “The consequences of not addressing adolescent mental health conditions extend to adulthood, impairing both physical and mental health and limiting opportunities to lead fulfilling lives…”
In good news for the Centre’s research into social robot interventions, WHO pushes for the adoption of multi-level and varied prevention and promotion programs, including via digital platforms.
A Therapeutic Alliance
Despite the limited amount of global research conducted on psychosocial health interventions by social robots, Robinson believes the results are nevertheless encouraging. They indicate a ‘therapeutic alliance’ between robots and humans could lead to positive effects similar to the use of digital interventions for managing anxiety, depression and alcohol use.
“The beauty of social robot interventions is that they could help to side-step potential negative effects of face-to-face therapy with a human health practitioner, such as perceived judgement or stigma,” said Robinson, who has used SoftBank’s Nao and Pepper robots in her research at the Centre.
“Robots can help support a self-guided program or health service by interacting with people to help keep them on track with their health goals.
“Our research is not about replacing healthcare professionals, but identifying treatment gaps where social robots can effectively assist by engaging patients to discuss sensitive topics and identify problems that may require the attention of a health practitioner.”
In the JMIR paper, published last month, Robinson puts out a timely global call for research on social robot interventions to transition from exploratory investigations to large-scale controlled trials with sophisticated methodology.
At the Australian Centre for Robotic Vision’s QUT headquarters, she’s helping to lay the groundwork. The Centre’s research, sponsored by the Queensland Government, is assessing the capabilities of social robots and using SoftBank Robotics’ Pepper robot to explore applications where social robots can deliver value beyond their novelty appeal.
Social Robot Trials
In 2018, the Centre’s social robotics team initiated a set of trials involving Pepper robots to measure the unique value of social robots in one-to-one interactions in healthcare. After supporting an Australia-first trial of a Pepper robot at Townsville Hospital and Health Service, the Centre’s team has placed Pepper into a QUT Health Clinic at Kelvin Grove Campus.
The three-month study to June 2019 involves Pepper delivering a brief health assessment and providing customised feedback that can be taken to a health practitioner to discuss issues around physical activity, dietary intake, alcohol use and smoking. Members of the public who are registered as patients at the QUT Health Clinic are invited to take part in this trial.
In a separate online trial, the Centre’s social robotics team is assessing people’s attitudes to social robots and their willingness to engage with and discuss different topics with a robot or human as the conversation partner.
For more information on the Australian Centre for Robotic Vision’s work creating robots able to see and understand like humans, download our 2018 Annual Report.
Editor’s Note: This article was republished with permission from The Australian Centre for Robotic Vision.
The post Prescribing a Robot ‘Intervention’ appeared first on The Robot Report.
Self-driving cars may not be best for older drivers, says Newcastle University study
With more people living longer, driving is becoming increasingly important in later life, helping older drivers to stay independent, socially connected and mobile.
But driving is also one of the biggest challenges facing older people. Age-related problems with eyesight, motor skills, reflexes, and cognitive ability increase the risk of an accident or collision, and the increased frailty of older drivers means they are more likely to be seriously injured or killed as a result.
“In the U.K., older drivers are tending to drive more often and over longer distances, but as the task of driving becomes more demanding we see them adjust their driving to avoid difficult situations,” explained Dr Shuo Li, an expert in intelligent transport systems at Newcastle University.
“Not driving in bad weather when visibility is poor, avoiding unfamiliar cities or routes and even planning journeys that avoid right-hand turns are some of the strategies we’ve seen older drivers take to minimize risk. But this can be quite limiting for people.”
Potential game-changer
Self-driving cars are seen as a potential game-changer for this age group, Li noted. Fully automated, they are unlikely to require a license and could negotiate bad weather and unfamiliar cities in all situations without input from the driver.
But it’s not as clear-cut as it seems, said Li.
“There are several levels of automation, ranging from zero, where the driver has complete control, through to Level 5, where the car is in charge,” he explained. “We’re some way off Level 5, but Level 3 may be just around the corner. This will allow the driver to be completely disengaged — they can sit back and watch a film, eat, even talk on the phone.”
“But, unlike level four or five, there are still some situations where the car would ask the driver to take back control and at that point, they need to be switched on and back in driving mode within a few seconds,” he added. “For younger people that switch between tasks is quite easy, but as we age, it becomes increasingly more difficult and this is further complicated if the conditions on the road are poor.”
Newcastle University DriveLAB tests older drivers
Led by Newcastle University’s Professor Phil Blythe and Dr Li, the Newcastle University team have been researching the time it takes for older drivers to take back control of an automated car in different scenarios and also the quality of their driving in these different situations.
Using the University’s state-of-the-art DriveLAB simulator, 76 volunteers were divided into two different age groups (20-35 and 60-81).
They experienced automated driving for a short period and were then asked to “take back” control of a highly automated car and avoid a stationary vehicle on a motorway, a city road, and in bad weather conditions when visibility was poor.
The starting point in all situations was “total disengagement” — turned away from the steering wheel, feet out of the foot well, reading aloud from an iPad.
The time taken to regain control of the vehicle was measured at three points: when the driver was back in the correct position (reaction time); “active input” such as braking and taking the steering wheel (take-over time); and finally the point at which they registered the obstruction and indicated to move out and avoid it (indicator time).
“In clear conditions, the quality of driving was good but the reaction time of our older volunteers was significantly slower than the younger drivers,” said Li. “Even taking into account the fact that the older volunteers in this study were a really active group, it took about 8.3 seconds for them to negotiate the obstacle compared to around 7 seconds for the younger age group. At 60mph, that means our older drivers would have needed an extra 35m warning distance — that’s equivalent to the length of 10 cars.
“But we also found older drivers tended to exhibit worse takeover quality in terms of operating the steering wheel, the accelerator and the brake, increasing the risk of an accident,” he said.
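The 35 m figure in the quote above follows directly from the 1.3-second takeover gap. As a quick back-of-envelope check, using only the speed and takeover times reported in the study:

```python
MPH_TO_MS = 0.44704  # metres per second in one mile per hour

def distance_covered(speed_mph: float, seconds: float) -> float:
    """Distance travelled while the driver is still taking back control."""
    return speed_mph * MPH_TO_MS * seconds

older = distance_covered(60, 8.3)    # older group's takeover distance
younger = distance_covered(60, 7.0)  # younger group's takeover distance
extra_warning = older - younger      # ~35 m, about 10 car lengths
```

At 60 mph (about 26.8 m/s), the extra 1.3 seconds translates to roughly 35 metres of additional warning distance, matching the figure Li cites.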
In bad weather, the team saw the younger drivers slow down more, bringing their reaction times more in line with the older drivers, while driving quality dropped across both age groups.
In the city scenario, this resulted in 20 collisions and critical encounters among the older participants compared to 12 among the younger drivers.
Designing automated cars of the future
The research team also explored older drivers’ opinions of and requirements for the design of automated vehicles, after the drivers gained first-hand experience with the technologies on the driving simulator.
Older drivers were generally positive towards automated vehicles but said they would want to retain some level of control over their automated cars. They also felt they required regular updates from the car, similar to a SatNav, so the driver has an awareness of what’s happening on the road and where they are even when they are busy with another activity.
The research team are now looking at how the vehicles can be improved to overcome some of these problems and better support older drivers when the automated cars hit our roads.
“I believe it is critical that we understand how new technology can support the mobility of older people and, more importantly, that new transport systems are designed to be age friendly and accessible,” said Newcastle University Prof. Phil Blythe, who led the study and is chief scientific advisor for the U.K. Department for Transport. “The research here on older people and the use of automated vehicles is only one of many questions we need to address regarding older people and mobility.”
“Two pillars of the Government’s Industrial strategy are the Future of Mobility Grand Challenge and the Ageing Society Grand Challenge,” he added. “Newcastle University is at the forefront of ensuring that these challenges are fused together to ensure we shape future mobility systems for the older traveller, who will be expecting to travel well into their eighties and nineties.”
Case studies of older drivers
Pat Wilkinson, who lives in Rowland’s Gill, County Durham, has been supporting the DriveLAB research for almost nine years.
Now 74, the former Magistrate said it’s interesting to see how technology is changing and gradually taking the control – and responsibility – away from the driver.
“I’m not really a fan of the cars you don’t have to drive,” she said. “As we get older, our reactions slow, but I think for the young ones, chatting on their phones or looking at the iPad, you just couldn’t react quickly if you needed to either. I think it’s an accident waiting to happen, whatever age you are.”
“And I enjoy driving – I think I’d miss that,” Wilkinson said. “I’ve driven since I first passed my test in my 20s, and I hope I can keep on doing so for a long time.
“I don’t think fully driverless cars will become the norm, but I do think the technology will take over more,” she said. “I think studies like this that help to make it as safe as possible are really important.”
Ian Fairclough, 77 from Gateshead, added: “When you’re older and the body starts to give up on you, a car means you can still have adventures and keep yourself active.”
“I passed my test at 22 and was in the army for 25 years, driving all sorts of vehicles in all terrains and climates,” he recalled. “Now I avoid bad weather, early mornings when the roads are busy and late at night when it’s dark, so it was really interesting to take part in this study and see how the technology is developing and what cars might be like a few years from now.”
Fairclough took part in two of the studies in the VR simulator and said he found it difficult to switch attention quickly from one task to another.
“It feels very strange to be a passenger one minute and the driver the next,” he said. “But I do like my Toyota Yaris. It’s simple, clear and practical. I think perhaps you can have too many buttons.”
Wilkinson and Fairclough became involved in the project through VOICE, a group of volunteers working together with researchers and businesses to identify the needs of older people and develop solutions for a healthier, longer life.
4 Overheating solutions for commercial robotics
Overheating can become a severe problem for robots. Excessive temperatures can damage internal systems or, in the most extreme cases, cause fires. Commercial robots that regularly get too hot can also cost precious time, as operators are forced to shut down and restart the machines during a given shift.
Fortunately, robotics designers have several options for keeping industrial robots cool and enabling workflows to progress smoothly. Here are four examples of technologies that could keep robots at the right temperature.
1. Lithium-ion batteries that automatically shut off and restart
Many robots, especially mobile platforms for factories or warehouses, have lithium-ion battery packs. Such batteries are popular and widely available, but they’re also prone to overheating and potentially exploding.
Researchers at Stanford University engineered a battery with a special coating that stops it from conducting electricity if it gets too hot. As the heat level climbs, the coating expands, causing a change that makes the battery no longer conductive. Once cool, it starts providing power as usual.
The research team did not specifically test their battery coating in robots powered by lithium-ion batteries. However, it noted that the work has practical merit for a variety of use cases, because the temperature at which the battery shuts down can be tuned.
For example, if a robot has extremely sensitive internal parts, users would likely want it to shut down at a lower temperature than when using it in a more tolerant machine.
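The shut-off-and-resume behaviour described above amounts to a threshold with hysteresis: cut power above a shutdown temperature, and only resume once the cell has cooled well below it, so it doesn’t rapidly toggle. A toy sketch of that logic (the temperature values are illustrative, not from the Stanford work):

```python
class ThermalCutoff:
    """Toy model of a battery that stops conducting above a shutdown
    temperature and resumes once it has cooled. Thresholds are
    configurable, mirroring the tunable shutdown point described above."""

    def __init__(self, shutdown_c: float = 70.0, resume_c: float = 50.0):
        self.shutdown_c = shutdown_c  # lower for sensitive internals
        self.resume_c = resume_c      # hysteresis gap avoids rapid toggling
        self.conducting = True

    def update(self, temp_c: float) -> bool:
        """Feed in the latest temperature; returns whether power flows."""
        if self.conducting and temp_c >= self.shutdown_c:
            self.conducting = False
        elif not self.conducting and temp_c <= self.resume_c:
            self.conducting = True
        return self.conducting
```

A robot with extremely sensitive internal parts would simply be given a lower `shutdown_c` than a more tolerant machine.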
2. Sensors that measure a robot’s ‘health’ to avoid overheating
Commercial robots often allow corporations to achieve higher, more consistent performance levels than would be possible with human effort alone. Industrial-grade robots don’t need rest breaks, but unlike humans who might speak up if they feel unwell and can’t complete a shift, robots can’t necessarily notify operators that something’s wrong.
However, University of Saarland researchers have devised a method that subjects industrial machines to the equivalent of a continuous medical checkup. Similar to how consumer health trackers measure things like a person’s heart rate and activity levels and give them opportunities to share these metrics with a physician, the team aims to do the same with industrial machinery.
It should be possible to see numerous warning signs before a robot gets too hot. The scientists explained that they use special sensors that fit inside the machines and can interact with one another as well as a robot’s existing process sensors. The sensors collect baseline data. They can also recognize patterns that could indicate a failing part — such as that the machine gets hot after only a few minutes of operating.
That means the sensors could warn plant operators of immediate issues, like when a robot requires an emergency shutdown because of overheating. It could also help managers understand if certain processes make the robots more likely to overheat than others. Thanks to the constant data these sensors provide, human workers overseeing the robots should have the knowledge they need to intervene before a catastrophe occurs.
Manufacturers already use predictive analytics to determine when to perform maintenance. This approach could provide even more benefits because it goes beyond maintenance alerts and warns if robots stray from their usual operating conditions because of overheating or other issues that need further investigation.
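The article does not describe the Saarland sensors’ algorithms, but the core idea — learn a baseline from normal operation, then flag readings that stray from it — can be sketched roughly as follows (all numbers invented for illustration):

```python
from statistics import mean, stdev

# Illustrative only: the real system fuses many sensors; here a single
# temperature stream stands in for a robot's "health" signature.

def learn_baseline(samples):
    """Summarise temperatures logged while the robot runs normally."""
    return mean(samples), stdev(samples)

def check_reading(temp_c, baseline, sigmas=3.0):
    """Return True if a live reading sits within the normal band."""
    mu, sd = baseline
    return abs(temp_c - mu) <= sigmas * sd

baseline = learn_baseline([41.0, 42.5, 40.8, 43.1, 42.0])
check_reading(42.6, baseline)   # within the normal band
check_reading(55.0, baseline)   # drifted upward: warn the operator
```

A reading drifting outside the learned band is exactly the kind of early warning sign — “the machine gets hot after only a few minutes of operating” — that lets workers intervene before a catastrophe occurs.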
3. Thermally conductive rubber
When engineers design robots or power electronics, heat dissipation is almost always among the considerations before the product becomes functional. For example, even in a device that’s 95% efficient, the remaining 5% gets converted into heat that needs to escape.
Pumped liquid, extruded heatsinks, and vapor chambers are some of the available methods for keeping power electronics cool. Returning to commercial robotics specifically, Carnegie Mellon University scientists have developed a material that aids in heat management for soft robots. They said their creation — nicknamed “thubber” — combines elasticity with high heat conductivity.
The material stretches to more than six times its initial length, which is impressive in itself. However, the CMU researchers also noted that the combination of high heat conductivity and flexibility is crucial for facilitating dissipation. They pointed out that past technologies required attaching high-powered devices to inflexible mounts, but they now envision creating these mounts from thubber.
Then, the respective devices, whether bendable robots or folding electronics, could be more versatile and stay cool as they function.
4. Liquid cooling and fan systems
Many of the cooling technologies used in industrial robots happen internally, so users don’t see them working, but they know everything is functioning as it should since the machine stays at a desirable temperature. Plus, there are some robots for which heat reduction is exceptionally important due to the tasks they assume. Firefighting robots are prime examples.
One of them, called Colossus, recently helped put out the Notre Dame fire in Paris. It has an onboard smoke ventilation system that likely has a heat-management component, too. Purchasers can also pay more to get a smoke-extracting fan. It’s an example of a mobile robot that uses lithium-ion batteries, making it a potential candidate for the first technology on the list.
There’s another firefighting robot called the Thermite, and it uses both water and fans to stay cool. For example, the robot can pump out 500 gallons of water per minute to control a blaze, but a portion of that liquid goes through the machine’s internal “veins” first to keep it from overheating.
In addition, part of Thermite converts into a sprinkler system, and onboard fans help recycle the associated mist and cool the machine’s components.
An array of overheating options
Robots are increasingly tackling jobs that are too dangerous for humans. As these examples show, they’re up to the task as long as the engineers working to develop those robots remain aware of internal cooling needs during the design phase.
This list shows that engineers aren’t afraid to pursue creative solutions as they look for ways to avoid overheating. Although many of the technologies described here are not yet available for people to purchase, it’s worthwhile for developers to stay abreast of the ongoing work. The attempts seem promising, and even cooling efforts that aren’t ready for mainstream use could lead to overall progress.
Vegebot robot applies machine learning to harvest lettuce
Vegebot, a vegetable-picking robot, uses machine learning to identify and harvest a commonplace, but challenging, agricultural crop.
A team at the University of Cambridge initially trained Vegebot to recognize and harvest iceberg lettuce in the laboratory. It has now been successfully tested in a variety of field conditions in cooperation with G’s Growers, a local fruit and vegetable co-operative.
Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are particularly challenging to harvest mechanically. The researchers published their results in The Journal of Field Robotics.
Crops such as potatoes and wheat have been harvested mechanically at scale for decades, but many other crops have to date resisted automation. Iceberg lettuce is one such crop. Although it is the most common type of lettuce grown in the U.K., iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.
“Every field is different, every lettuce is different,” said co-author Simon Birrell from Cambridge’s Department of Engineering. “But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops.”
“For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot.” — Josie Hughes, University of Cambridge report co-author
“At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it’s very physically demanding,” said co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr Fumiya Iida.
The Vegebot first identifies the “target” crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested. Finally, it cuts the lettuce from the rest of the plant without crushing it so that it is “supermarket ready.”
“For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot,” said co-author Josie Hughes.
Vegebot designed for lettuce-picking challenge
The Vegebot has two main components: a computer vision system and a cutting system. The overhead camera on the Vegebot takes an image of the lettuce field and first identifies all the lettuces in the image. Then for each lettuce, the robot classifies whether it should be harvested or not. A lettuce might be rejected because it’s not yet mature, or it might have a disease that could spread to other lettuces in the harvest.
The researchers developed and trained a machine learning algorithm on example images of lettuces. Once the Vegebot could recognize healthy lettuce in the lab, the team then trained it in the field, in a variety of weather conditions, on thousands of real lettuce heads.
A second camera on the Vegebot is positioned near the cutting blade, and helps ensure a smooth cut. The researchers were also able to adjust the pressure in the robot’s gripping arm so that it held the lettuce firmly enough not to drop it, but not so firm as to crush it. The force of the grip can be adjusted for other crops.
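The two vision stages described above amount to a detect-then-classify loop: find every head in the overhead image, then keep only the mature, healthy ones. A hedged sketch of that logic (the class and function names are ours; the real system uses a trained machine learning model, not hand-written rules):

```python
from dataclasses import dataclass

@dataclass
class Lettuce:
    x: int          # position in the overhead image
    y: int
    mature: bool    # stage two, check 1: ready to harvest?
    diseased: bool  # stage two, check 2: could it infect the harvest?

def should_harvest(head: Lettuce) -> bool:
    """Classify one detected head: keep only mature, healthy lettuces."""
    return head.mature and not head.diseased

def harvest_pass(detections: list) -> list:
    """Stage one supplies candidate heads; filter to 'supermarket ready'."""
    return [head for head in detections if should_harvest(head)]

field = [
    Lettuce(10, 4, mature=True, diseased=False),   # cut this one
    Lettuce(22, 9, mature=False, diseased=False),  # leave for a later pass
    Lettuce(35, 2, mature=True, diseased=True),    # reject
]
ready = harvest_pass(field)
```

Heads rejected as immature are simply left in place, which is what makes the multiple-passes idea discussed below possible.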
“We wanted to develop approaches that weren’t necessarily specific to iceberg lettuce, so that they can be used for other types of above-ground crops,” said Iida, who leads the team behind the research.
In the future, robotic harvesters could help address problems with labor shortages in agriculture. They could also help reduce food waste. At the moment, each field is typically harvested once, and any unripe vegetables or fruits are discarded.
However, a robotic harvester could be trained to pick only ripe vegetables, and since it could harvest around the clock, it could perform multiple passes on the same field, returning at a later date to harvest the vegetables that were unripe during previous passes.
“We’re also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields,” said Hughes. “We’ve still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech.”
Iida’s group at Cambridge is also part of the world’s first Centre for Doctoral Training (CDT) in agri-food robotics. In collaboration with researchers at the University of Lincoln and the University of East Anglia, the Cambridge researchers will train the next generation of specialists in robotics and autonomous systems for application in the agri-tech sector. The Engineering and Physical Sciences Research Council (EPSRC) has awarded £6.6 million ($8.26 million U.S.) for the new CDT, which will support at least 50 Ph.D. students.
Cowen, MassRobotics collaborating on robotics & AI research
Cowen Inc. and MassRobotics today announced a collaboration to bring together their extensive market knowledge to advance research into the emerging robotics and artificial intelligence industry. Based in the Boston area, MassRobotics is a global hub for robotics, and the collective work of a group of engineers, rocket scientists, and entrepreneurs focused on the needs of the robotics community.
MassRobotics is the strategic partner of the Robotics Summit & Expo, which is produced by The Robot Report.
“The robotics and artificial intelligence industry is a rapidly expanding market, and one that will define the advancement of manufacturing and services on a global basis. We are thrilled to be partnering with such an innovative collective in MassRobotics, which was established through a shared vision of advancing the robotics industry,” said Jeffrey M. Solomon, Chief Executive Officer of Cowen. “Cowen has dedicated substantial time into the research of robotics and AI and we look forward to sharing our knowledge and capital markets expertise to support the emerging growth companies associated with MassRobotics.”
Related: MassRobotics, SICK partner to assist robotics startups
Fady Saad, Co-founder and Director of Partnerships of MassRobotics, added, “Cowen has a proven track record of delivering in-depth research across sectors, which allows them to understand the dynamic flow of the markets and provide capital to support emerging companies. Collectively we bring together the best of market research and industry knowledge in an effort to advance robotics and provide companies with opportunities for growth.”
About Cowen Inc.
Cowen Inc. is a diversified financial services firm that operates through two business segments: a broker dealer and an investment management division. The Company’s broker dealer division offers investment banking services, equity and credit research, sales and trading, prime brokerage, global clearing and commission management services. Cowen’s investment management segment offers actively managed alternative investment products. Cowen Inc. focuses on delivering value-added capabilities to its clients in order to help them outperform. Founded in 1918, the firm is headquartered in New York and has offices worldwide. Learn more at Cowen.com.
About MassRobotics
MassRobotics is the collective work of a group of Boston-area engineers, rocket scientists, and entrepreneurs. With a shared vision to create an innovation hub and startup cluster focused on the needs of the robotics community, MassRobotics was born. MassRobotics’ mission is to help create and scale the next generation of successful robotics and connected device companies by providing entrepreneurs and innovative robotics/automation startups with the workspace and resources they need to develop, prototype, test, and commercialize their products and solutions.
Top 10 robotics stories during 1st half of 2019
We’re more than halfway through 2019, and there’s been a lot to talk about. Here are The Robot Report‘s picks for the top 10 robotics stories during the first half of 2019. Please share your thoughts below via the survey or the comments section.
1. Consumer robotics company Anki shuts down
The struggles of consumer robotics companies are well documented – see Jibo, Keecker, Laundroid, Mayfield Robotics – but it still came as a major blow to the industry when Anki shut down on April 29.
Anki raised more than $200 million since it was founded in 2010 and claimed nearly $100 million in revenue in 2017. And according to Anki Co-Founder and CEO Boris Sofman, who was hired by Waymo to lead its autonomous trucking efforts, the company “shipped over 3.5 million devices and robots around the world.”
Anki’s intellectual property is controlled by Silicon Valley Bank, which has had a security interest in Anki’s copyrights, patents and trademarks since March 30, 2018. Sources told The Robot Report that Anki already had a prototype of its next consumer robot. Anki also had a strategic partnership in place that “fell through at the last minute,” according to a former Anki employee.
2. Boston Dynamics enters logistics market
Another major surprise occurred April 2 when Boston Dynamics acquired Kinema Systems, a Menlo Park, Calif.-based startup that uses vision sensors and deep-learning software to help robots manipulate boxes. Essentially, this was Boston Dynamics’ entrance into the logistics market.
This is another sign of Boston Dynamics being more application-conscious since it was acquired by SoftBank in mid-2017. The development of Handle and SpotMini, and the Kinema acquisition, point directly to that.
“I think Google planted the seed,” said Marc Raibert, CEO and Founder of Boston Dynamics. “And all of the other robotics companies near us were much more focused on applications and product than we were. So we’ve been turning that corner. It’s been a consistent thing. It’s not like we got to SoftBank and they hit us with a hammer and suddenly said, ‘make products.’ They’ve been extremely enthusiastic about our R&D work, too. It feels good to do both.”
3. Robust AI wants to give robots common sense
Giving robots the ability to think with common sense is a lofty goal, but an all-star team at Robust AI is trying to do just that. The Palo Alto, Calif.-based startup was announced by co-founder Henrik Christensen during his keynote at the Robotics Summit & Expo, produced by The Robot Report. The company has office space at Playground Global, its main investor, for the next 12 months.
Robust AI is trying to build an industrial-grade cognitive platform for robots. The company’s argument is that deep learning alone is not enough to move the needle. To build its cognitive platform, Robust AI will take a hybrid approach that combines multiple techniques, including deep learning and symbolic AI, which was the dominant paradigm of AI research from the mid-1950s until the late 1980s.
4. Amazon launches new logistics robots
Kiva Systems, now known as Amazon Robotics after it was acquired by Amazon for $775 million in 2012, essentially created the mobile logistics robotics market we know today. The so-called Amazon effect prompted other startups to develop and offer automated guided vehicles (AGVs) and autonomous mobile robots (AMRs) to retailers and third-party logistics (3PL) companies.
It’s major news when Amazon makes a move in this space, and Amazon has made several in 2019. On April 11, Amazon acquired Boulder, Colo.-based Canvas Technology for an unspecified amount. Canvas uses “spatial AI” to enable mobile robots to navigate safely around people in dynamic environments. It claimed that its combination of sensors and simultaneous localization and mapping (SLAM) software can enable AMRs to operate without relying on a prior map. The robots can continuously update a shared map, according to the company.
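Canvas has not published how its “spatial AI” works, but a shared, continuously updated map of the kind it describes is commonly represented as an occupancy grid, where each robot’s observations are fused as log-odds evidence per cell. A minimal sketch of that general idea (the increment values are arbitrary, and this stands in for the concept only, not Canvas’ actual method):

```python
import math

# Toy log-odds occupancy grid cell: every robot folds its range observations
# into a shared map, so the map improves continuously instead of being built
# once from a prior survey.

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments for a "hit" vs. "pass-through"

def update_cell(log_odds, observed_occupied):
    """Fuse one observation of a cell into its running log-odds estimate."""
    return log_odds + (L_OCC if observed_occupied else L_FREE)

def occupancy_prob(log_odds):
    """Convert log-odds back to an occupancy probability for planning."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# Two robots observe the same cell; their evidence accumulates in one map.
cell = 0.0                       # prior: unknown (p = 0.5)
for hit in [True, True, False]:  # seen occupied twice, free once
    cell = update_cell(cell, hit)

p = occupancy_prob(cell)         # p > 0.5: the cell is probably occupied
```

Because the update is a simple sum of evidence, observations from any number of robots can be merged into the same map in any order, which is what makes a continuously shared map practical.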
Amazon also developed new warehouse robots designed to accelerate automation in its fulfillment centers. Amazon said the new robots represent a major redesign of the Kiva Systems robots. Amazon warehouses already have 800 units of one of the new robots, Pegasus, up and running.
5. ROS for Windows 10 official
Last fall, Microsoft introduced an experimental release of the Robot Operating System (ROS) for Windows 10. During its 2019 Build conference in Seattle, Microsoft announced that ROS is now generally available on Windows 10 IoT Enterprise.
ROS is an open-source platform that provides robotics developers with a variety of libraries and tools to build robots. ROS for Windows 10 is an opportunity for Microsoft to expose its Azure cloud platform, and associated products, to ROS developers around the world.
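ROS’s core abstraction is topic-based publish/subscribe messaging between nodes. As a rough illustration of that pattern only (this is not the ROS API; real nodes use roscpp or rclpy, and the topic name below is invented), a minimal in-process message bus looks like this:

```python
from collections import defaultdict

# Toy publish/subscribe bus illustrating the messaging pattern ROS is built
# around: nodes publish messages to named topics, and any node subscribed to
# a topic receives them via a callback.

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to all subscribers of a topic."""
        for callback in self.subscribers[topic]:
            callback(message)

bus = Bus()
received = []
bus.subscribe("/cmd_vel", received.append)   # e.g., a motor-driver node listening
bus.publish("/cmd_vel", {"linear": 0.2, "angular": 0.0})
```

The decoupling shown here is why ROS tooling ports cleanly across operating systems: publishers and subscribers only agree on topic names and message types, not on each other’s implementations.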
6. iRobot introduces Terra t7 robot lawn mower
An iRobot robotic lawn mower was one of the worst-kept secrets in robotics. In January 2019, the Terra t7 was finally unveiled. It will be available for sale in Germany, and as a beta program in the US, in 2019.
Specs and pricing aren’t known at this point, but iRobot says ease of use is the main differentiator. Instead of burying boundary wires, users place wireless beacons around the yard and manually drive the Terra t7 around once to teach it the layout. The beacons must remain in place throughout the mowing season, since Terra uses them to calculate its position in the yard. After the initial training run, the robot operates autonomously.
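iRobot has not detailed Terra’s positioning algorithm, but computing a position from ranges to fixed beacons is classically done by trilateration: each beacon range defines a circle, and subtracting the circle equations from one another yields a small linear system. A minimal sketch of that standard technique (the beacon layout and coordinates below are invented for illustration):

```python
import math

def trilaterate(beacons, dists):
    """Estimate (x, y) from distances to three beacons at known positions.

    Subtracting the first circle equation from the other two cancels the
    quadratic terms, leaving a 2x2 linear system in x and y.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1          # nonzero if beacons aren't collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical yard: beacons at three corners, mower at (3, 4).
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, b) for b in beacons]
x, y = trilaterate(beacons, dists)   # recovers (3.0, 4.0)
```

This also shows why the beacons must stay put for the season: the solved position is only meaningful relative to beacon coordinates fixed during the training run.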
7. Big tech companies working on development tools
Add Facebook and Microsoft to the list of major technology companies working on robotics development tools. Facebook in late June open-sourced its PyRobot framework for robotics research and benchmarking. PyRobot, which Facebook developed with Carnegie Mellon University, is designed to allow AI researchers and students to get robots working in just a few hours without specialized knowledge of device drivers, controls, or planning.
On top of its ROS work, Microsoft is building an end-to-end toolchain that makes it easier for developers to create autonomous systems. The platform uses Microsoft AI, Azure tools and simulation technologies, such as Microsoft’s AirSim or industry simulators, that allow machines to learn in safe, realistic environments. The platform also uses what Microsoft is calling “machine teaching,” which relies on a developer’s or subject matter expert’s knowledge to break a large problem into smaller chunks.
In November 2018, Amazon Web Services released its RoboMaker cloud robotics platform to give developers a centralized environment to build, test, and deploy robots with the cloud. Google also has a cloud robotics platform that was announced last year.
8. Aria Insights shuts down
Drone maker Aria Insights abruptly shut down on March 21. Formerly known as CyPhy Works, the company was primarily known for its Persistent Aerial Reconnaissance and Communications (PARC) platform, a tethered drone that provided secure communication and continuous flight to customers.
CyPhy Works rebranded as Aria Insights in January 2019 to focus more on using artificial intelligence and machine learning to help analyze data collected by drones. But it was too little too late.
CyPhy Works was founded in 2008 by Helen Greiner, who also co-founded iRobot in 1990. Greiner left CyPhy Works in 2017 and in June 2018 was named an advisor to the US Army for robotics, autonomous systems and AI.
Robotics Investments for First 6 Months of 2019
Month | Investment Amount |
---|---|
January | $644M |
February | $4.3B |
March | $1.3B |
April | $6.5B |
May | $1.5B |
June | $1.4B |
Yearly Total | $15.64B |
9. Robotics investments
Investments into robotics companies have totaled more than $15.64 billion in the first half of 2019. Some of the leading markets investment-wise include healthcare robotics, logistics and manufacturing. But autonomous vehicles take the cake thus far. In June, for example, autonomous vehicles accounted for $717 million of the $1.4 billion that was invested into robotics companies.
Check out the table above for a month-by-month breakdown of robotics investments and follow our Investments Section for the latest news and analysis.
10. Johnson & Johnson acquired Auris Health
Johnson & Johnson (J&J) subsidiary Ethicon acquired Auris Health and its FDA-cleared Monarch platform for $3.4 billion. Auris is surgical robotics pioneer Dr. Fred Moll’s newest robotic surgical play. The acquisition is one of the 10 largest VC-backed, private M&A transactions of all time and will be both the largest robotics and the largest medtech private M&A deal in history. Kiva Systems previously held the title of largest robotics acquisition when Amazon purchased it for $775 million.
Auris’ robotic Monarch platform has FDA clearance for diagnostic and therapeutic bronchoscopic procedures. The system features a controller interface for navigating the integrated flexible robotic endoscope into the periphery of the lung and combines traditional endoscopic views with computer-assisted navigation based on 3D patient models. Auris said J&J’s global distribution will broaden access to the Monarch Platform.
The post Top 10 robotics stories during 1st half of 2019 appeared first on The Robot Report.
20 largest robotics investments during 1st half of 2019
Robotics companies raised more than $15.6 billion during the first half of 2019. According to the robotics investments tracked and verified by The Robot Report, more than $2.6 billion was raised on average per month. The year started slowly with $644 million raised in January, but there was at least $1.3 billion raised each month thereafter.
For The Robot Report‘s investment analysis, autonomous vehicles, including technologies that support autonomous driving, and drones are considered robots. On the other hand, 3D printers, CNC systems, and various types of “hard” automation are not.
Robotics Investments for First 6 Months of 2019
Month | Investment Amount |
---|---|
January | $644M |
February | $4.3B |
March | $1.3B |
April | $6.5B |
May | $1.5B |
June | $1.4B |
Yearly Total | $15.64B |
As you can see in the table below, autonomous vehicle investments made up a significant percentage of overall funding. Ten of the top 20 robotics investments tracked by The Robot Report belonged to companies producing autonomous vehicles or autonomous vehicle enabling technologies. Autonomous vehicle companies raised 55% ($4.6 billion) of the total $8.2 billion raised in the 20 investments. The top three autonomous vehicle investments belonged to Cruise ($1.15 billion), Uber ($1 billion) and Nuro ($940 million), which raised a combined $3.1 billion.
Healthcare robotics companies have also fared well in 2019. Intuitive Surgical raised $2 billion via a stock repurchase in February, while Think Surgical and Ekso Bionics raised $134 million and $100 million, respectively. HistoSonics raised $54 million in April for its medical robotics platform that destroys cancerous tumors without affecting surrounding tissue.
The Robot Report will have a detailed breakdown of investments by sector in a follow-up article.
To stay updated about the latest robotics investments and acquisitions, check out The Robot Report‘s Investment Section.
20 Largest Robotics Investments During 1st Half of 2019
Company | Funding ($M) | Lead Investor | Date | Technology |
---|---|---|---|---|
Intuitive Surgical | 2000 | Stock Repurchase | 2/1/19 | Surgical Robots |
Cruise | 1150 | Honda Motor Corp. | 5/7/19 | Autonomous Vehicles |
Uber ATG | 1000 | SoftBank Vision Fund | 4/18/19 | Autonomous Vehicles |
Nuro.ai | 940 | SoftBank Vision Fund | 2/11/19 | Autonomous Vehicles |
Horizon Robotics | 600 | SK China | 2/27/19 | AI/IOT |
Aurora Innovation | 600 | Amazon | 2/7/19 | Autonomous Vehicles |
Weltmeister Motor | 450 | Baidu Inc. | 3/11/19 | Autonomous Vehicles |
Cloudminds | 300 | SoftBank Vision Fund | 3/26/19 | Service Robots |
Zipline | 190 | TPG | 5/17/19 | Drone Delivery |
Innoviz Technologies | 170 | China Merchants Capital | 3/26/19 | LiDAR |
Think Surgical | 134 | — | 3/11/19 | Surgical Robots |
Beijing Auto AI Technology | 104 | Robert Bosch Venture Capital | 1/24/19 | AI |
Black Sesame Technologies | 100 | Legend Capital | 4/15/19 | Machine Learning |
Ekso Bionics Holdings | 100 | Zhejiang Youchuang Venture Capital Investment Co. | 1/30/19 | Exoskeletons |
TUSimple | 95 | Sina Corp | 2/13/19 | Autonomous Vehicles |
Ouster | 60 | Runway Growth Capital | 3/25/19 | LiDAR |
NASN Automotive | 59.6 | Matrix Partners China | 1/30/19 | Autonomous Vehicles |
HistoSonics | 54 | Varian Medical | 4/8/19 | Medical Robots |
Ike | 52 | Bain Capital Ventures | 2/5/19 | Autonomous Vehicles |
Enflame | 43.4 | Redpoint China Ventures | 6/6/19 | AI Chipmaker |
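The headline figures can be sanity-checked by summing the table above (values in $M; the split below follows the ten autonomous-vehicle and LiDAR rows identified in the analysis):

```python
# Top-20 amounts from the table, grouped into AV-related rows (Cruise, Uber ATG,
# Nuro, Aurora, Weltmeister, Innoviz, TUSimple, Ouster, NASN, Ike) and the rest.
av_related = [1150, 1000, 940, 600, 450, 170, 95, 60, 59.6, 52]
others = [2000, 600, 300, 190, 134, 104, 100, 100, 54, 43.4]

total = sum(av_related) + sum(others)  # ~8202 => the ~$8.2B cited for the top 20
av_total = sum(av_related)             # ~4576.6 => the ~$4.6B for AV companies
share = av_total / total               # ~0.56, in line with the roughly 55% cited
```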
Editor’s note: What defines a robotics investment? The answer to this simple question is central to any attempt to quantify robotics investments with some degree of rigor. To make investment analyses consistent, repeatable, and valuable, it is critical to wring out as much subjectivity as possible during the evaluation process. This begins with a definition of terms and a description of assumptions.
Investors and investing
To qualify, an investment must come from venture capital firms, corporate investment groups, angel investors, or other institutional sources. Friends-and-family investments, government/non-governmental agency grants, and crowd-sourced funding are excluded.
Robotics and intelligent systems companies
Robotics companies must generate or expect to generate revenue from the production of robotics products (that sense, think, and act in the physical world), hardware or software subsystems and enabling technologies for robots, or services supporting robotics devices. For this analysis, autonomous vehicles (including technologies that support autonomous driving) and drones are considered robots, while 3D printers, CNC systems, and various types of “hard” automation are not.
Companies that are “robotic” in name only, or that use the term “robot” to describe products and services that do not enable or support devices acting in the physical world, are excluded. This includes, for example, “software robots” and robotic process automation. Many firms have multiple locations in different countries; company locations given in the analysis are based on the publicly listed headquarters in legal documents, press releases, etc.
Verification
Funding information is collected from a number of public and private sources. These include press releases from corporations and investment groups, corporate briefings, and association and industry publications. In addition, information comes from sessions at conferences and seminars, as well as during private interviews with industry representatives, investors, and others. Unverifiable investments are excluded.
The post 20 largest robotics investments during 1st half of 2019 appeared first on The Robot Report.