R-Series actuator from Hebi Robotics is ready for outdoor rigors

PITTSBURGH — What do both summer vacationers and field robots need to do? Get into the water. Hebi Robotics this week announced the availability of its R-Series actuators, which it said can enable engineers “to quickly create custom robots that can be deployed directly in wet, dirty, or outdoor environments.”

Hebi Robotics was founded in 2014 by Carnegie Mellon University professor and robotics pioneer Howie Choset. It makes hardware and software for developers to build robots for their specific applications. It also offers custom development services to make robots “simple, useful, and safe.”

Hebi’s team includes experts in robotics, particularly in motion control. The company has developed robotics tools for academic, aerospace, military, sewer inspection, and spaceflight users.

Robots can get wet and dirty with R-Series actuators

The R-Series actuator is built on Hebi’s X-Series platform. It is sealed to IP67 and is designed to be lightweight, compact, and energy-efficient. The series includes three models: the R8-3, which has continuous torque of 3 N-m and weighs 670g; the R8-9, which has continuous torque of 8 N-m and weighs 685g; and the R8-16, which has continuous torque of 16 N-m and weighs 715g.
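Choosing among models from a torque/mass list like this is easy to script. Below is a minimal Python sketch; the model figures come from the list above, while the selection function and its 25% safety margin are illustrative assumptions, not anything Hebi publishes:

```python
# Continuous torque (N-m) and mass (g) for the three R-Series models, as listed above.
R_SERIES = {
    "R8-3": {"torque_nm": 3, "mass_g": 670},
    "R8-9": {"torque_nm": 8, "mass_g": 685},
    "R8-16": {"torque_nm": 16, "mass_g": 715},
}

def pick_actuator(required_torque_nm, margin=1.25):
    """Return the lightest model whose continuous torque covers the
    requirement with a safety margin (the 25% margin is an assumed
    design choice, not a vendor recommendation)."""
    candidates = [
        (spec["mass_g"], name)
        for name, spec in R_SERIES.items()
        if spec["torque_nm"] >= required_torque_nm * margin
    ]
    if not candidates:
        return None  # no single actuator is strong enough
    return min(candidates)[1]

print(pick_actuator(6))   # needs 7.5 N-m with margin -> R8-9
print(pick_actuator(14))  # needs 17.5 N-m -> None
```

In a real arm design the margin would come from the load analysis for each joint rather than a fixed constant.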


The R-Series actuator is sealed for wet and dirty environments. Source: Hebi Robotics

The actuators also include sensors that Hebi said “enable simultaneous control of position, velocity, and torque, as well as three-axis inertial measurement.”

In addition, the R-Series integrates a brushless motor, gear reduction, force sensing, encoders, and controls in a compact package, said Hebi. The actuators can run on 24-48V DC, include internal pressure sensors, and communicate via 100Mbps Ethernet.

On the software side, the R-Series has application programming interfaces (APIs) for MATLAB, the Robot Operating System (ROS), Python, C and C++, and C#, as well as support for Windows, Linux, and OS X.
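Actuator APIs of this kind are typically fed matched position/velocity setpoints at a fixed control rate. As a hedged illustration in pure Python (no vendor library; the function name and the 100 Hz tick are assumptions), here is the sort of setpoint stream such an API would consume:

```python
import math

def sine_setpoints(amplitude_rad, period_s, dt_s, duration_s):
    """Generate matched position/velocity setpoints for one joint.
    An actuator API like those described above would typically be sent
    one (position, velocity) pair per control tick."""
    n = int(duration_s / dt_s)
    omega = 2 * math.pi / period_s
    for i in range(n):
        t = i * dt_s
        pos = amplitude_rad * math.sin(omega * t)
        vel = amplitude_rad * omega * math.cos(omega * t)  # d(pos)/dt
        yield t, pos, vel

# First tick of a 0.5 rad, 2 s sine at 100 Hz: position 0, velocity at its peak.
t0, p0, v0 = next(sine_setpoints(0.5, 2.0, 0.01, 1.0))
```

Keeping the velocity term analytically consistent with the position term (rather than differencing positions) is what lets a combined position/velocity/torque controller track smoothly.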

According to Hebi Robotics, the R-Series actuators will be available this autumn, and it is accepting pre-orders at 10% off the list prices. The actuator costs $4,500, and kits range from $20,000 to $36,170, depending on the number of degrees of freedom of the robotic arm. Customers should inquire about pricing for the hexapod kit.

The post R-Series actuator from Hebi Robotics is ready for outdoor rigors appeared first on The Robot Report.

Cassie bipedal robot a platform for tackling locomotion challenges

Working in the Dynamic Autonomy and Intelligent Robotics lab at the University of Pennsylvania, Michael Posa (right) and graduate student Yu-Ming Chen use Cassie to help develop better algorithms that can help robots move more like people. | Credit: Eric Sucar

What has two legs, no torso, and hangs out in the basement of the University of Pennsylvania’s Towne Building?

It’s Cassie, a dynamic bipedal robot, a recent addition to Michael Posa’s Dynamic Autonomy and Intelligent Robotics (DAIR) Lab. Built by Agility Robotics, a company in Albany, Oregon, Cassie offers Posa and his students the chance to create and test the locomotion algorithms they’re developing on a piece of equipment that’s just as cutting-edge as their ideas.

“We’re really excited to have it. It offers us capabilities that are really unlike anything else on the commercial market,” says Posa, a mechanical engineer in the School of Engineering and Applied Science. “There aren’t many options that exist, and this means that every single lab that wants to do walking research doesn’t have to spend three years building its own robot.”

Having Cassie lets Posa’s lab members spend all their time working to solve the huge challenge of designing algorithms so that robots can walk and navigate across all kinds of terrain and circumstances.

“What we have is a system really designed for dynamic locomotion,” he says. “We get very natural speed in terms of leg motions, like picking up a foot and putting it down somewhere else. For us, it’s a really great system.”

“It offers us capabilities that are really unlike anything else on the commercial market,” Posa says about Cassie. | Credit: Eric Sucar

Why do the legs matter? Because they dramatically expand the possibilities of what a robot can do. “You can imagine how legged robots have a key advantage over wheeled robots in that they are able to go into unstructured environments. They can go over relatively rough terrain, into houses, up a flight of stairs. That’s where a legged robot excels,” Posa says. “This is useful in all kinds of applications, including basic exploration, but also things like disaster recovery and inspection tasks. That’s what’s drawing a lot of industry attention these days.”

Of course, walking over different terrain or up a curb, step, or other incline dramatically increases what a robot has to do to stay upright. Consider what happens when you walk: Bump into something with your elbow, and your body has to reverse itself to avoid knocking it over, as well as stabilize itself to avoid falling in the opposite direction.

Related: Ford package delivery tests combine autonomous vehicles, bipedal robots

A robot has to be told to do all of that – which is where Posa’s algorithms come in, starting from where Cassie’s feet go down as it takes each step.

“Even with just legs, you have to make all these decisions about where you’re going to put your feet,” he says. “It’s one of those decisions that’s really very difficult to handle because everything depends on where and when you’re going to put your feet down, and putting that foot down creates an impact: You shift your weight, which changes your balance, and so on.



“This is a discrete event that happens quickly. From a computational standpoint, that’s one of the things we really struggle with—how do we handle these contact events?”

Then there’s the issue of how to model what you want to tell the robot to do. Simple modeling considers the robot as a point moving in space rather than, for example, a machine with six joints in its leg. But of course, the robot isn’t a point, and working with those models means sacrificing capability. Posa’s lab is trying to build more sophisticated models that, in turn, make the robot move more smoothly.

“We’re interested in the sort of middle ground, this Goldilocks regime between ‘this robot has 12 different motors’ and ‘this robot is a point in space,'” he says.

Related: 2019 the Year of Legged Robots

Cassie’s predecessor was called ATRIAS, an acronym for “assume the robot is a sphere.” ATRIAS allowed for more sophisticated models and more ability to command the robot, but was still too simple, Posa says. “The real robot is always different than a point or sphere. The question is where should our models live on this spectrum, from very simple to very complicated?”

Two graduate students in the DAIR Lab have been working on the algorithms, testing them in simulation and then, finally, on Cassie. Most of the work is virtual, since Cassie is really for testing the pieces that pass the simulation test.

“You write the code there,” says Posa, gesturing at a computer across the lab, “and then you flip a switch and you’re running it with the real robot. In general, if it doesn’t work in the simulator, it’s not going to work in the real world.”

Graduate students, including Chen (left), work on designing new algorithms and running computer simulations before testing them on Cassie. | Credit: Eric Sucar

On the computer, the researchers can take more risks, says graduate student Yu-Ming Chen. “We don’t break the robot in simulation,” he says, chuckling.

So what happens when you take these legs for a spin? The basic operation involves a marching type of step, as Cassie’s metal feet clang against the floor. But even as the robot makes these simple motions, it’s easy to see how the joints and parts work together to make a realistic-looking facsimile of a legged body from the waist down.

With Cassie as a platform, Posa says he’s excited to see how his team can push locomotion research forward.

“We want to design algorithms to enable robots to interact with the world in a safe and productive fashion,” he says. “We want [the robot] to walk in a way that is efficient, energetically, so it can travel long distances, and walk in a way that’s safe for both the robot and the environment.”

Editor’s Note: This article was republished from the University of Pennsylvania.

MIT ‘walking motor’ could help robots assemble complex structures


Years ago, MIT Professor Neil Gershenfeld had an audacious thought. Struck by the fact that all the world’s living things are built out of combinations of just 20 amino acids, he wondered: Might it be possible to create a kit of just 20 fundamental parts that could be used to assemble all of the different technological products in the world?

Gershenfeld and his students have been making steady progress in that direction ever since. Their latest achievement, presented this week at an international robotics conference, consists of a set of five tiny fundamental parts that can be assembled into a wide variety of functional devices, including a tiny “walking” motor that can move back and forth across a surface or turn the gears of a machine.

Previously, Gershenfeld and his students showed that structures assembled from many small, identical subunits can have numerous mechanical properties. Next, they demonstrated that a combination of rigid and flexible part types can be used to create morphing airplane wings, a longstanding goal in aerospace engineering. Their latest work adds components for movement and logic, and will be presented at the International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS) in Helsinki, Finland, in a paper by Gershenfeld and MIT graduate student Will Langford.

New approach to building robots

Their work offers an alternative to today’s approaches to constructing robots, which largely fall into one of two types: custom machines that work well but are relatively expensive and inflexible, and reconfigurable ones that sacrifice performance for versatility. In the new approach, Langford came up with a set of five millimeter-scale components, all of which can be attached to each other by a standard connector. These parts include the previous rigid and flexible types, along with electromagnetic parts, a coil, and a magnet. In the future, the team plans to make these out of still smaller basic part types.

Using this simple kit of tiny parts, Langford assembled them into a novel kind of motor that moves an appendage in discrete mechanical steps, which can be used to turn a gear wheel, and a mobile form of the motor that turns those steps into locomotion, allowing it to “walk” across a surface in a way that is reminiscent of the molecular motors that move muscles. These parts could also be assembled into hands for gripping, or legs for walking, as needed for a particular task, and then later reassembled as those needs change. Gershenfeld refers to them as “digital materials,” discrete parts that can be reversibly joined, forming a kind of functional micro-LEGO.

The new system is a significant step toward creating a standardized kit of parts that could be used to assemble robots with specific capabilities adapted to a particular task or set of tasks. Such purpose-built robots could then be disassembled and reassembled as needed in a variety of forms, without the need to design and manufacture new robots from scratch for each application.

Robots working in confined spaces

Langford’s initial motor has an ant-like ability to lift seven times its own weight. But if greater forces are required, many of these parts can be added to provide more oomph. Or if the robot needs to move in more complex ways, these parts could be distributed throughout the structure. The size of the building blocks can be chosen to match their application; the team has made nanometer-sized parts to make nanorobots, and meter-sized parts to make megarobots. Previously, specialized techniques were needed at each of these length scale extremes.

“One emerging application is to make tiny robots that can work in confined spaces,” Gershenfeld says. Some of the devices assembled in this project, for example, are smaller than a penny yet can carry out useful tasks.

To build in the “brains,” Langford has added part types that contain millimeter-sized integrated circuits, along with a few other part types to take care of connecting electrical signals in three dimensions.

The simplicity and regularity of these structures makes it relatively easy for their assembly to be automated. To do that, Langford has developed a novel machine that’s like a cross between a 3-D printer and the pick-and-place machines that manufacture electronic circuits, but unlike either of those, this one can produce complete robotic systems directly from digital designs. Gershenfeld says this machine is a first step toward the project’s ultimate goal of “making an assembler that can assemble itself out of the parts that it’s assembling.”


Editor’s Note: This article was republished from MIT News.



Waymo self-driving cars OK’d to carry passengers in California



Waymo’s self-driving cars can now carry passengers in California. | Credit: Waymo

Waymo has been testing its self-driving cars in California for years. Now Alphabet’s self-driving car division has been granted a permit to carry passengers in the Golden State. Waymo is now part of California’s Autonomous Vehicle Passenger Service pilot program, joining AutoX Technologies, Pony.ai, and Zoox.

The permit, which was granted by the California Public Utilities Commission (CPUC), requires a Waymo safety operator to be behind the wheel at all times and doesn’t allow Waymo to charge riders. The permit is good for three years.

“The CPUC allows us to participate in their pilot program, giving Waymo employees the ability to hail our vehicles and bring guests on rides within our South Bay territory,” Waymo said in a statement. “This is the next step in our path to eventually expand and offer more Californians opportunities to access our self-driving technology, just as we have gradually done with Waymo One in Metro Phoenix.”

Waymo also received an exemption from the CPUC that allows it to use a third-party company to contract out safety operators. Waymo said all safety operators go through a proprietary driver training program. In a letter requesting the exemption, Waymo said that while its “team of test drivers will include some full-time Waymo employees, operating and scaling a meaningful pilot requires a large group of drivers who are more efficiently engaged through Waymo’s experienced and specialized third-party staffing providers.”

Waymo self-driving taxi service coming to California?

Of course, this permit opens the door for Waymo to eventually offer an autonomous taxi service in California. But a Waymo spokesperson said there was no timetable for rolling out a self-driving taxi-like service in California. For now, the Waymo service will be limited to its employees and their guests in the Silicon Valley area.

Waymo One, a commercial self-driving service, launched in December 2018 in Phoenix, Ariz. It has been offering rides to more than 400 volunteer testers. Waymo recently announced a partnership with Lyft. It will deploy 10 autonomous vehicles in the coming months that will be available through the Lyft app. There will be safety drivers behind the wheel in this partnership, too.

Calif. Autonomous Vehicle Disengagements 2018

| Company | Disengagements per 1,000 miles (2018) | Miles per disengagement (2018) | Miles driven (2018) | Miles per disengagement (2017) |
| --- | --- | --- | --- | --- |
| Waymo | 0.09 | 11,017 | 1,271,587 | 5,595.95 |
| GM Cruise | 0.19 | 5,204.9 | 447,621 | 1,254.06 |
| Zoox | 0.52 | 1,922.8 | 30,764 | 282.96 |
| Nuro | 0.97 | 1,028.3 | 24,680 | -- |
| Pony.ai | 0.98 | 1,022.3 | 16,356 | -- |
| Nissan | 4.75 | 210.5 | 5,473 | 208.36 |
| Baidu | 4.86 | 205.6 | 18,093 | 41.06 |
| AIMotive | 4.96 | 201.6 | 3,428 | -- |
| AutoX | 5.24 | 190.8 | 22,710 | -- |
| Roadstar.AI | 5.70 | 175.3 | 7,539 | -- |
| WeRide/JingChi | 5.71 | 173.5 | 15,440.80 | -- |
| Aurora | 10.01 | 99.9 | 32,858 | -- |
| Drive.ai | 11.91 | 83.9 | 4,616.69 | 43.59 |
| PlusAI | 18.40 | 54.4 | 10,816 | -- |
| Nullmax | 22.40 | 44.6 | 3,036 | -- |
| Phantom AI | 48.20 | 20.7 | 4,149 | -- |
| NVIDIA | 49.73 | 20.1 | 4,142 | 4.63 |
| SF Motors | 90.56 | 11 | 2,561 | -- |
| Telenav | 166.67 | 6.0 | 30 | 32 |
| BMW | 219.51 | 4.6 | 41 | -- |
| CarOne/Udelv | 260.27 | 3.8 | 219 | -- |
| Toyota | 393.70 | 2.5 | 381 | -- |
| Qualcomm | 416.63 | 2.4 | 240.02 | -- |
| Honda | 458.33 | 2.2 | 168 | -- |
| Mercedes Benz | 682.52 | 1.5 | 1,749.39 | 1.29 |
| SAIC | 829.61 | 1.2 | 634.03 | -- |
| Apple | 871.65 | 1.1 | 79,745 | -- |
| Uber | 2,608.46 | 0.4 | 26,899 | -- |
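The second and third columns in the table above are two views of the same quantity: miles per disengagement is roughly 1,000 divided by disengagements per 1,000 miles (rounding in the reported rates means they match only approximately). A quick sanity check in Python, using the Waymo row:

```python
def implied_rate(miles_per_disengagement):
    """Disengagements per 1,000 miles implied by the miles-per-disengagement column."""
    return 1000.0 / miles_per_disengagement

def disengagement_count(miles_driven, miles_per_disengagement):
    """Approximate number of disengagements behind one row of the table."""
    return miles_driven / miles_per_disengagement

# Waymo's 2018 row: 11,017 miles per disengagement over 1,271,587 miles.
print(round(implied_rate(11_017), 2))                  # 0.09, matching the table
print(round(disengagement_count(1_271_587, 11_017)))   # roughly 115 disengagements
```

The same check works on any row, which is a useful way to spot transcription errors in reports like these.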

Waymo’s track record in California

According to the California Department of Motor Vehicles (DMV), Waymo had the best-performing autonomous vehicles in the state for the second consecutive year. Some have said the DMV’s tracking method is too vague and has allowed companies to avoid reporting certain events.

Nonetheless, Waymo’s self-driving cars experienced one disengagement every 11,017 miles. That performance marks a 50 percent reduction in the rate and a 96 percent increase in the average miles traveled between disengagements compared to the 2017 numbers. In 2016, Waymo had one disengagement every 5,128 miles. Waymo also drove significantly more miles, up from 352,000 miles in 2017 to 1.2 million miles in 2018, which makes the performance even more impressive.
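Those percentages follow directly from the reported miles-per-disengagement figures; a short Python check reproduces them to within a point (the article's "50 percent" and "96 percent" reflect rounding):

```python
mpd_2018 = 11_017       # miles per disengagement, 2018
mpd_2017 = 5_595.95     # miles per disengagement, 2017

increase = (mpd_2018 - mpd_2017) / mpd_2017 * 100
print(f"{increase:.0f}% more miles between disengagements")  # close to the reported 96%

rate_2018 = 1000 / mpd_2018   # disengagements per 1,000 miles
rate_2017 = 1000 / mpd_2017
reduction = (rate_2017 - rate_2018) / rate_2017 * 100
print(f"{reduction:.0f}% lower disengagement rate")  # close to the reported 50%
```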

Waymo is also working on autonomous trucks. Waymo has hired 13 former employees from Anki, the once-popular consumer robotics company that closed down. Anki Co-Founder and CEO Boris Sofman was hired as Director of Engineering, Head of Trucking, Waymo.


Self-driving cars may not be best for older drivers, says Newcastle University study


VOICE member Ian Fairclough and study lead Dr. Shuo Li in test of older drivers. Source: Newcastle University

With more people living longer, driving is becoming increasingly important in later life, helping older drivers to stay independent, socially connected and mobile.

But driving is also one of the biggest challenges facing older people. Age-related problems with eyesight, motor skills, reflexes, and cognitive ability increase the risk of an accident or collision, and the increased frailty of older drivers means they are more likely to be seriously injured or killed as a result.

“In the U.K., older drivers are tending to drive more often and over longer distances, but as the task of driving becomes more demanding we see them adjust their driving to avoid difficult situations,” explained Dr. Shuo Li, an expert in intelligent transport systems at Newcastle University.

“Not driving in bad weather when visibility is poor, avoiding unfamiliar cities or routes and even planning journeys that avoid right-hand turns are some of the strategies we’ve seen older drivers take to minimize risk. But this can be quite limiting for people.”

Potential game-changer

Self-driving cars are seen as a potential game-changer for this age group, Li noted. Fully automated, they are unlikely to require a license and could negotiate bad weather and unfamiliar cities in any situation, without input from the driver.

But it’s not as clear-cut as it seems, said Li.

“There are several levels of automation, ranging from zero, where the driver has complete control, through to Level 5, where the car is in charge,” he explained. “We’re some way off Level 5, but Level 3 may be just around the corner. This will allow the driver to be completely disengaged — they can sit back and watch a film, eat, even talk on the phone.”

“But, unlike Levels 4 or 5, there are still some situations where the car would ask the driver to take back control, and at that point they need to be switched on and back in driving mode within a few seconds,” he added. “For younger people, that switch between tasks is quite easy, but as we age it becomes increasingly difficult, and this is further complicated if the conditions on the road are poor.”

Newcastle University DriveLAB tests older drivers

Led by Newcastle University’s Professor Phil Blythe and Dr Li, the Newcastle University team have been researching the time it takes for older drivers to take back control of an automated car in different scenarios and also the quality of their driving in these different situations.

Using the University’s state-of-the-art DriveLAB simulator, 76 volunteers were divided into two different age groups (20-35 and 60-81).

They experienced automated driving for a short period and were then asked to “take back” control of a highly automated car and avoid a stationary vehicle on a motorway, a city road, and in bad weather conditions when visibility was poor.

The starting point in all situations was “total disengagement” — turned away from the steering wheel, feet out of the foot well, reading aloud from an iPad.

The time taken to regain control of the vehicle was measured at three points: when the driver was back in the correct position (reaction time); “active input,” such as braking and taking the steering wheel (take-over time); and finally the point at which they registered the obstruction and indicated to move out and avoid it (indicator time).

“In clear conditions, the quality of driving was good but the reaction time of our older volunteers was significantly slower than the younger drivers,” said Li. “Even taking into account the fact that the older volunteers in this study were a really active group, it took about 8.3 seconds for them to negotiate the obstacle compared to around 7 seconds for the younger age group. At 60mph, that means our older drivers would have needed an extra 35m warning distance — that’s equivalent to the length of 10 cars.

“But we also found older drivers tended to exhibit worse takeover quality in terms of operating the steering wheel, the accelerator and the brake, increasing the risk of an accident,” he said.
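The "extra 35m" figure above follows from the 1.3-second gap between the two groups at 60 mph. A quick check in Python, assuming roughly 3.5 m per car for the "10 cars" comparison (that per-car length is our assumption, not stated in the study):

```python
MPH_TO_MS = 0.44704  # metres per second in one mile per hour

older_s, younger_s = 8.3, 7.0   # time to negotiate the obstacle, per age group
speed_ms = 60 * MPH_TO_MS       # 60 mph is about 26.8 m/s

extra_distance_m = (older_s - younger_s) * speed_ms
print(round(extra_distance_m, 1))   # about 34.9 m -- the "extra 35m" in the text

car_length_m = 3.5                  # assumed typical car length
print(round(extra_distance_m / car_length_m))   # about 10 car lengths
```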

In bad weather, the team saw the younger drivers slow down more, bringing their reaction times more in line with the older drivers, while driving quality dropped across both age groups.

In the city scenario, this resulted in 20 collisions and critical encounters among the older participants compared to 12 among the younger drivers.


VOICE member Pat Wilkinson. Source: Newcastle University

Designing automated cars of the future

The research team also explored older drivers’ opinions and requirements towards the design of automated vehicles after gaining first-hand experience with the technologies on the driving simulator.

Older drivers were generally positive towards automated vehicles but said they would want to retain some level of control over their automated cars. They also felt they required regular updates from the car, similar to a SatNav, so the driver has an awareness of what’s happening on the road and where they are even when they are busy with another activity.

The research team are now looking at how the vehicles can be improved to overcome some of these problems and better support older drivers when the automated cars hit our roads.

“I believe it is critical that we understand how new technology can support the mobility of older people and, more importantly, that new transport systems are designed to be age friendly and accessible,” said Newcastle University Prof. Phil Blythe, who led the study and is chief scientific advisor for the U.K. Department for Transport. “The research here on older people and the use of automated vehicles is only one of many questions we need to address regarding older people and mobility.”

“Two pillars of the Government’s Industrial strategy are the Future of Mobility Grand Challenge and the Ageing Society Grand Challenge,” he added. “Newcastle University is at the forefront of ensuring that these challenges are fused together to ensure we shape future mobility systems for the older traveller, who will be expecting to travel well into their eighties and nineties.”

“It is critical that we understand how new technology can support the mobility of older people and, more importantly, that new transport systems are designed to be age friendly and accessible,” — Newcastle University Prof. Phil Blythe

Case studies of older drivers

Pat Wilkinson, who lives in Rowland’s Gill, County Durham, has been supporting the DriveLAB research for almost nine years.

Now 74, the former Magistrate said it’s interesting to see how technology is changing and gradually taking the control – and responsibility – away from the driver.

“I’m not really a fan of the cars you don’t have to drive,” she said. “As we get older, our reactions slow, but I think for the young ones, chatting on their phones or looking at the iPad, you just couldn’t react quickly if you needed to either. I think it’s an accident waiting to happen, whatever age you are.”

“And I enjoy driving – I think I’d miss that,” Wilkinson said. “I’ve driven since I first passed my test in my 20s, and I hope I can keep on doing so for a long time.

“I don’t think fully driverless cars will become the norm, but I do think the technology will take over more,” she said. “I think studies like this that help to make it as safe as possible are really important.”

Ian Fairclough, 77 from Gateshead, added: “When you’re older and the body starts to give up on you, a car means you can still have adventures and keep yourself active.”

“I passed my test at 22 and was in the army for 25 years, driving all sorts of vehicles in all terrains and climates,” he recalled. “Now I avoid bad weather, early mornings when the roads are busy and late at night when it’s dark, so it was really interesting to take part in this study and see how the technology is developing and what cars might be like a few years from now.”

Fairclough took part in two of the studies in the VR simulator and said it was difficult to switch your attention quickly from one task to another.

“It feels very strange to be a passenger one minute and the driver the next,” he said. “But I do like my Toyota Yaris. It’s simple, clear and practical.  I think perhaps you can have too many buttons.”

Wilkinson and Fairclough became involved in the project through VOICE, a group of volunteers working together with researchers and businesses to identify the needs of older people and develop solutions for a healthier, longer life.


4 Overheating solutions for commercial robotics


Stanford University researchers have developed a lithium-ion battery that shuts down before overheating. Source: Stanford University

Overheating can become a severe problem for robots. Excessive temperatures can damage internal systems or, in the most extreme cases, cause fires. Commercial robots that regularly get too hot can also cost precious time, as operators are forced to shut down and restart the machines during a given shift.

Fortunately, robotics designers have several options for keeping industrial robots cool and enabling workflows to progress smoothly. Here are four examples of technologies that could keep robots at the right temperature.

1. Lithium-ion batteries that automatically shut off and restart

Many robots, especially mobile platforms for factories or warehouses, have lithium-ion battery packs. Such batteries are popular and widely available, but they’re also prone to overheating and potentially exploding.

Researchers at Stanford University engineered a battery with a special coating that stops it from conducting electricity if it gets too hot. As the heat level climbs, the coating expands, a structural change that renders the battery nonconductive. Once the battery cools, it starts providing power as usual.

The research team did not specifically test the battery coating in robots powered by lithium-ion batteries. However, it noted that the work has practical merit for a variety of use cases because the temperature at which the battery shuts down can be tuned.

For example, if a robot has extremely sensitive internal parts, users would likely want it to shut down at a lower temperature than when using it in a more tolerant machine.
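The behavior described above amounts to a thermal cutoff with a tunable threshold. As a toy illustration (the class, thresholds, and hysteresis band below are our own illustrative choices, not details of the Stanford work), the shut-off/restart logic can be sketched as:

```python
class ThermalCutoffBattery:
    """Toy model of a battery that stops conducting above a shutdown
    temperature and resumes once it cools. Threshold values are
    illustrative only."""

    def __init__(self, shutdown_c=70.0, restart_c=55.0):
        # Restarting below the shutdown temperature gives hysteresis,
        # so the pack doesn't rapidly toggle near the threshold.
        self.shutdown_c = shutdown_c
        self.restart_c = restart_c
        self.conducting = True

    def update(self, temp_c):
        if self.conducting and temp_c >= self.shutdown_c:
            self.conducting = False
        elif not self.conducting and temp_c <= self.restart_c:
            self.conducting = True
        return self.conducting

battery = ThermalCutoffBattery()
readings = [40, 60, 72, 65, 54, 40]
print([battery.update(t) for t in readings])
# -> [True, True, False, False, True, True]
```

Lowering `shutdown_c` models the "more sensitive internal parts" case in the text; a more tolerant machine would use a higher value.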

2. Sensors that measure a robot’s ‘health’ to avoid overheating

Commercial robots often allow corporations to achieve higher, more consistent performance levels than would be possible with human effort alone. Industrial-grade robots don’t need rest breaks, but unlike humans who might speak up if they feel unwell and can’t complete a shift, robots can’t necessarily notify operators that something’s wrong.

However, University of Saarland researchers have devised a method that subjects industrial machines to the equivalent of a continuous medical checkup. Similar to how consumer health trackers measure things like a person’s heart rate and activity levels and let them share those metrics with a physician, the team aims to do the same with industrial machinery.

Continual robot monitoring

A research team at Saarland University has developed an early warning system for industrial assembly, handling, and packaging processes. Research assistants Nikolai Helwig (left) and Tizian Schneider test the smart condition monitoring system on an electromechanical cylinder. Credit: Oliver Dietze, Saarland University

It should be possible to see numerous warning signs before a robot gets too hot. The scientists explained that they use special sensors that fit inside the machines and can interact with one another as well as a robot’s existing process sensors. The sensors collect baseline data. They can also recognize patterns that could indicate a failing part — such as that the machine gets hot after only a few minutes of operating.

That means the sensors could warn plant operators of immediate issues, like when a robot requires an emergency shutdown because of overheating. It could also help managers understand if certain processes make the robots more likely to overheat than others. Thanks to the constant data these sensors provide, human workers overseeing the robots should have the knowledge they need to intervene before a catastrophe occurs.
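One simple way to turn baseline data into the kind of warning described above is to compare each warm-up against the machine's own history. This sketch is a generic anomaly check of our own devising, not Saarland's actual method; the 3-sigma rule and the example numbers are assumptions:

```python
from statistics import mean, stdev

def warmup_alert(baseline_rises, observed_rise, n_sigmas=3.0):
    """Flag a warm-up temperature rise (deg C over the first few minutes
    of operation) that sits well outside the machine's own baseline.
    The 3-sigma threshold is a generic, assumed choice."""
    mu = mean(baseline_rises)
    sigma = stdev(baseline_rises)
    return observed_rise > mu + n_sigmas * sigma

# Baseline: this machine normally warms about 5 deg C in its first five minutes.
baseline = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]
print(warmup_alert(baseline, 5.3))   # -> False: within normal spread
print(warmup_alert(baseline, 9.0))   # -> True: hot after only a few minutes
```

A production system would track many signals per machine and per process, but the pattern — learn a baseline, then flag deviations — is the same.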

Manufacturers already use predictive analytics to determine when to perform maintenance. This approach could provide even more benefits because it goes beyond maintenance alerts and warns if robots stray from their usual operating conditions because of overheating or other issues that need further investigation.

3. Thermally conductive rubber

When engineers design robots or work in the power electronics sector, heat dissipation technologies are almost always among the things to consider before the product becomes functional. For example, even in a device that’s 95% efficient, the remaining 5% gets converted into heat that needs to escape.

Power electronics overheating roadmap. Source: Advanced Cooling Technologies
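The 5% figure above translates directly into a thermal budget for the cooling system. A tiny worked example (the 1 kW input power is a hypothetical value for illustration):

```python
def waste_heat_w(input_power_w, efficiency=0.95):
    """Heat (watts) the cooling system must remove; the 95% default
    matches the efficiency figure in the text."""
    return input_power_w * (1.0 - efficiency)

print(waste_heat_w(1000))        # a 1 kW device sheds about 50 W as heat
print(waste_heat_w(1000, 0.90))  # at 90% efficiency, about 100 W
```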

Pumped liquid, extruded heatsinks, and vapor chambers are some of the available methods for keeping power electronics cool. Returning to commercial robotics specifically, Carnegie Mellon University scientists have developed a material that aids in heat management for soft robots. They said their creation — nicknamed “thubber” — combines elasticity with high heat conductivity.


A nano-CT scan of “thubber” showing the liquid-metal microdroplets inside the rubber material. Source: Carnegie Mellon University

The material stretches to more than six times its initial length, and that’s impressive in itself. However, the CMU researchers also noted that the material’s blend of high heat conductivity and flexibility is crucial for facilitating dissipation. They pointed out that past technologies required attaching high-powered devices to inflexible mounts, but they now envision making those mounts from thubber.

Then, the respective devices, whether bendable robots or folding electronics, could be more versatile and stay cool as they function.

4. Liquid cooling and fan systems

Many of the cooling technologies used in industrial robots work internally, so users don’t see them in action; they simply know everything is functioning as it should because the machine stays at a safe temperature. For some robots, heat reduction is exceptionally important because of the tasks they take on. Firefighting robots are prime examples.

One of them, called Colossus, recently helped put out the Notre Dame fire in Paris. It has an onboard smoke ventilation system that likely has a heat-management component, too. Purchasers can also pay more to get a smoke-extracting fan. It’s an example of a mobile robot that uses lithium-ion batteries, making it a potential candidate for the first technology on the list.

There’s another firefighting robot called the Thermite, and it uses both water and fans to stay cool. For example, the robot can pump out 500 gallons of water per minute to control a blaze, but a portion of that liquid goes through the machine’s internal “veins” first to keep it from overheating.

In addition, part of Thermite converts into a sprinkler system, and onboard fans help recycle the associated mist and cool the machine’s components.

An array of overheating options

Robots are increasingly tackling jobs that are too dangerous for humans. As these examples show, they’re up to the task as long as the engineers working to develop those robots remain aware of internal cooling needs during the design phase.

This list shows that engineers aren’t afraid to pursue creative solutions as they look for ways to avoid overheating. Although many of the technologies described here are not yet available for people to purchase, it’s worthwhile for developers to stay abreast of the ongoing work. The attempts seem promising, and even cooling efforts that aren’t ready for mainstream use could lead to overall progress.

The post 4 Overheating solutions for commercial robotics appeared first on The Robot Report.

Vegebot robot applies machine learning to harvest lettuce

Vegebot, a vegetable-picking robot, uses machine learning to identify and harvest a commonplace, but challenging, agricultural crop.

A team at the University of Cambridge initially trained Vegebot to recognize and harvest iceberg lettuce in the laboratory. It has now been successfully tested in a variety of field conditions in cooperation with G’s Growers, a local fruit and vegetable co-operative.

Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are particularly challenging to harvest mechanically. The researchers published their results in The Journal of Field Robotics.

Crops such as potatoes and wheat have been harvested mechanically at scale for decades, but many other crops have to date resisted automation. Iceberg lettuce is one such crop. Although it is the most common type of lettuce grown in the U.K., iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.

“Every field is different, every lettuce is different,” said co-author Simon Birrell from Cambridge’s Department of Engineering. “But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops.”

“At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it’s very physically demanding,” said co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr Fumiya Iida.

The Vegebot first identifies the “target” crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested. Finally, it cuts the lettuce from the rest of the plant without crushing it so that it is “supermarket ready.”

“For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot,” said co-author Josie Hughes.

Vegebot designed for lettuce-picking challenge

The Vegebot has two main components: a computer vision system and a cutting system. The overhead camera on the Vegebot takes an image of the lettuce field and first identifies all the lettuces in the image. Then for each lettuce, the robot classifies whether it should be harvested or not. A lettuce might be rejected because it’s not yet mature, or it might have a disease that could spread to other lettuces in the harvest.
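The second stage of that pipeline, deciding whether a detected head should be harvested, can be sketched as a simple rule over per-lettuce scores. The feature names and thresholds below are invented for illustration; they are not Vegebot’s actual classifier.

```python
from dataclasses import dataclass

@dataclass
class LettuceCandidate:
    # Hypothetical per-head features a detector might report
    maturity: float       # 0.0 (seedling) .. 1.0 (fully mature)
    disease_score: float  # 0.0 (healthy) .. 1.0 (clearly diseased)

def harvest_decision(c, min_maturity=0.8, max_disease=0.2):
    """Second-stage classification: harvest only mature, healthy heads.
    Thresholds are illustrative, not from the Cambridge paper."""
    if c.disease_score > max_disease:
        return "reject: possible disease"
    if c.maturity < min_maturity:
        return "reject: not yet mature"
    return "harvest"
```

A rejected head stays in the field, mirroring the article’s point that diseased or immature lettuces are left unpicked.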


Vegebot uses machine vision to identify heads of iceberg lettuce. Credit: University of Cambridge

The researchers developed and trained a machine learning algorithm on example images of lettuces. Once the Vegebot could recognize healthy lettuce in the lab, the team then trained it in the field, in a variety of weather conditions, on thousands of real lettuce heads.

A second camera on the Vegebot is positioned near the cutting blade, and helps ensure a smooth cut. The researchers were also able to adjust the pressure in the robot’s gripping arm so that it held the lettuce firmly enough not to drop it, but not so firm as to crush it. The force of the grip can be adjusted for other crops.

“We wanted to develop approaches that weren’t necessarily specific to iceberg lettuce, so that they can be used for other types of above-ground crops,” said Iida, who leads the team behind the research.

In the future, robotic harvesters could help address problems with labor shortages in agriculture. They could also help reduce food waste. At the moment, each field is typically harvested once, and any unripe vegetables or fruits are discarded.

However, a robotic harvester could be trained to pick only ripe vegetables, and since it could harvest around the clock, it could perform multiple passes on the same field, returning at a later date to harvest the vegetables that were unripe during previous passes.

“We’re also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields,” said Hughes. “We’ve still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech.”

Iida’s group at Cambridge is also part of the world’s first Centre for Doctoral Training (CDT) in agri-food robotics. In collaboration with researchers at the University of Lincoln and the University of East Anglia, the Cambridge researchers will train the next generation of specialists in robotics and autonomous systems for application in the agri-tech sector. The Engineering and Physical Sciences Research Council (EPSRC) has awarded £6.6 million ($8.26 million U.S.) for the new CDT, which will support at least 50 Ph.D. students.


Cowen, MassRobotics collaborating on robotics & AI research


Cowen Inc. and MassRobotics today announced a collaboration to bring together their extensive market knowledge to advance research into the emerging robotics and artificial intelligence industry. Based in the Boston area, MassRobotics is a global hub for robotics, and the collective work of a group of engineers, rocket scientists, and entrepreneurs focused on the needs of the robotics community.

MassRobotics is the strategic partner of the Robotics Summit & Expo, which is produced by The Robot Report.

“The robotics and artificial intelligence industry is a rapidly expanding market, and one that will define the advancement of manufacturing and services on a global basis. We are thrilled to be partnering with such an innovative collective in MassRobotics, which was established through a shared vision of advancing the robotics industry,” said Jeffrey M. Solomon, Chief Executive Officer of Cowen. “Cowen has dedicated substantial time into the research of robotics and AI and we look forward to sharing our knowledge and capital markets expertise to support the emerging growth companies associated with MassRobotics.”

Related: MassRobotics, SICK partner to assist robotics startups

Fady Saad, Co-founder and Director of Partnerships of MassRobotics, added, “Cowen has a proven track record of delivering in-depth research across sectors, which allows them to understand the dynamic flow of the markets and provide capital to support emerging companies. Collectively we bring together the best of market research and industry knowledge in an effort to advance robotics and provide companies with opportunities for growth.”

About Cowen Inc.

Cowen Inc. is a diversified financial services firm that operates through two business segments: a broker dealer and an investment management division. The Company’s broker dealer division offers investment banking services, equity and credit research, sales and trading, prime brokerage, global clearing and commission management services. Cowen’s investment management segment offers actively managed alternative investment products. Cowen Inc. focuses on delivering value-added capabilities to our clients in order to help them outperform. Founded in 1918, the firm is headquartered in New York and has offices worldwide. Learn more at Cowen.com

About MassRobotics

MassRobotics is the collective work of a group of Boston-area engineers, rocket scientists, and entrepreneurs. With a shared vision to create an innovation hub and startup cluster focused on the needs of the robotics community, MassRobotics was born. MassRobotics’ mission is to help create and scale the next generation of successful robotics and connected device companies by providing entrepreneurs and innovative robotics/automation startups with the workspace and resources they need to develop, prototype, test, and commercialize their products and solutions.


Top 10 robotics stories during 1st half of 2019


We’re more than halfway through 2019, and there’s been a lot to talk about. Here are The Robot Report‘s picks for the top 10 robotics stories during the first half of 2019. Please share your thoughts below via the survey or the comments section.


Anki Cozmo robot. | Credit: Anki

1. Consumer robotics company Anki shuts down

The struggles of consumer robotics companies are well documented – see Jibo, Keecker, Laundroid, Mayfield Robotics – but it still came as a major blow to the industry when Anki shut down on April 29.

Anki raised more than $200 million since it was founded in 2010 and claimed revenue of nearly $100 million in 2017. And according to Anki Co-Founder and CEO Boris Sofman, who was hired by Waymo to lead its autonomous trucking efforts, the company “shipped over 3.5 million devices and robots around the world.”

Anki’s intellectual property is controlled by Silicon Valley Bank, which has had a security interest in Anki’s copyrights, patents and trademarks since March 30, 2018. Sources told The Robot Report that Anki already had a prototype of its next consumer robot. Anki also had a strategic partnership in place that “fell through at the last minute,” according to a former Anki employee.

2. Boston Dynamics enters logistics market

Another major surprise occurred April 2 when Boston Dynamics acquired Kinema Systems, a Menlo Park, Calif.-based startup that uses vision sensors and deep-learning software to help robots manipulate boxes. Essentially, this was Boston Dynamics’ entrance into the logistics market.

This is another sign of Boston Dynamics becoming more application-conscious since it was acquired by SoftBank in mid-2017. The development of Handle and SpotMini, and the Kinema acquisition, point directly to that.

“I think Google planted the seed,” said Marc Raibert, CEO and Founder of Boston Dynamics. “And all of the other robotics companies near us were much more focused on applications and product than we were. So we’ve been turning that corner. It’s been a consistent thing. It’s not like we got to SoftBank and they hit us with a hammer and suddenly said, ‘make products.’ They’ve been extremely enthusiastic about our R&D work, too. It feels good to do both.”


Robust AI Co-Founders (left to right) Rodney Brooks, Mohamed Amer, Anthony Jules, Henrik Christensen and Gary Marcus at the Robust AI office in Palo Alto, Calif. | Credit: Peter Barret, Playground Global

3. Robust AI wants to give robots common sense

Giving robots the ability to think with common sense is a lofty goal, but an all-star team at Robust AI is trying to do just that. The Palo Alto, Calif.-based startup was announced by co-founder Henrik Christensen during his keynote at the Robotics Summit & Expo, produced by The Robot Report. The company has office space at Playground Global, its main investor, for the next 12 months.

Robust AI is trying to build an industrial-grade cognitive platform for robots. The company’s argument is that deep learning alone is not enough to move the needle. To build its cognitive platform, Robust AI will take a hybrid approach by combining multiple techniques, including deep learning and symbolic AI, which was the dominant paradigm of AI research from the mid-1950s until the late 1980s.

4. Amazon launches new logistics robots

Kiva Systems, now known as Amazon Robotics after it was acquired by Amazon for $775 million in 2012, essentially created the mobile logistics robotics market we know today. The so-called Amazon effect prompted other startups to develop and offer automated guided vehicles (AGVs) and autonomous mobile robots (AMRs) to retailers and third-party logistics (3PL) companies.

It’s major news when Amazon makes a move in this space, and Amazon has made several in 2019. On April 11, Amazon acquired Boulder, Colo.-based Canvas Technology for an unspecified amount. Canvas uses “spatial AI” to enable mobile robots to navigate safely around people in dynamic environments. It claimed that its combination of sensors and simultaneous localization and mapping (SLAM) software can enable AMRs to operate without relying on a prior map. The robots can continuously update a shared map, according to the company.

Amazon also developed new warehouse robots designed to accelerate automation in its fulfillment centers. Amazon said the new robots represent a major redesign of the Kiva Systems robots. Amazon warehouses already have 800 units of one of the new robots, Pegasus, up and running.


What's the biggest robotics story for the first half of 2019?



5. ROS for Windows 10 official

Last fall, Microsoft introduced an experimental release of the Robot Operating System (ROS) for Windows 10. At its 2019 Build conference in Seattle, Microsoft announced that ROS is now generally available on Windows 10 IoT Enterprise.

ROS is an open-source platform that provides robotics developers with a variety of libraries and tools to build robots. ROS for Windows 10 is an opportunity for Microsoft to expose its Azure cloud platform, and associated products, to ROS developers around the world.

6. iRobot introduces Terra t7 robot lawn mower

An iRobot robotic lawn mower was one of the worst-kept secrets in robotics. In January 2019, the iRobot Terra t7 robot lawn mower was finally unveiled. The Terra t7 robot lawn mower will be available for sale in Germany and as a beta program in the US in 2019.

Specs and pricing aren’t known at this point, but iRobot says ease of use is the main differentiator. Instead of burying and running boundary wires, users need to place wireless beacons around their yards and manually drive the Terra t7 robot lawn mower around to teach it the layout. The beacons need to remain in place throughout the mowing season. Terra uses the beacons to calculate its position in the yard. The robot will operate autonomously after the initial training run.
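iRobot hasn’t published Terra’s localization method, but computing a position from fixed beacons at known locations is classically done by trilateration. A minimal 2D sketch with three beacons and measured ranges; subtracting the first range equation from the other two yields a 2x2 linear system:

```python
def trilaterate(beacons, distances):
    """Estimate (x, y) from three fixed 2D beacons and measured ranges.
    Subtracting the first circle equation from the other two gives a
    2x2 linear system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero only if beacons are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Beacons at three yard corners; ranges measured from the point (3, 4):
x, y = trilaterate([(0, 0), (10, 0), (0, 10)], (5.0, 65 ** 0.5, 45 ** 0.5))
```

A real system would fuse noisy range measurements from more beacons (e.g. with least squares), but the geometry is the same.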

7. Big tech companies working on development tools

Add Facebook and Microsoft to the list of major technology companies working on robotics development tools. Facebook in late June open-sourced its PyRobot framework for robotics research and benchmarking. PyRobot, which Facebook developed with Carnegie Mellon University, is designed to allow AI researchers and students to get robots working in just a few hours without specialized knowledge of device drivers, controls, or planning.

On top of its ROS work, Microsoft is building an end-to-end toolchain that makes it easier for developers to create autonomous systems. The platform uses Microsoft AI, Azure tools and simulation technologies, such as Microsoft’s AirSim or industry simulators, that allow machines to learn in safe, realistic environments. The platform also uses what Microsoft is calling “machine teaching,” which relies on a developer’s or subject matter expert’s knowledge to break a large problem into smaller chunks.

In November 2018, Amazon Web Services released its RoboMaker cloud robotics platform to give developers a centralized environment to build, test, and deploy robots with the cloud. Google also has a cloud robotics platform that was announced last year.

8. Aria Insights shuts down

Drone maker Aria Insights abruptly shut down on March 21. Formerly known as CyPhy Works, the company was primarily known for its Persistent Aerial Reconnaissance and Communications (PARC) platform, a tethered drone that provided secure communication and continuous flight to customers.

CyPhy Works rebranded as Aria Insights in January 2019 to focus more on using artificial intelligence and machine learning to help analyze data collected by drones. But it was too little too late.

CyPhy Works was founded in 2008 by Helen Greiner, who also co-founded iRobot in 1990. Greiner left CyPhy Works in 2017 and in June 2018 was named an advisor to the US Army for robotics, autonomous systems and AI.

Robotics Investments for First 6 Months of 2019

Month         Investment Amount
January       $644M
February      $4.3B
March         $1.3B
April         $6.5B
May           $1.5B
June          $1.4B
Yearly Total  $15.64B

9. Robotics investments

Investments into robotics companies have totaled more than $15.64 billion in the first half of 2019. Some of the leading markets investment-wise include healthcare robotics, logistics and manufacturing. But autonomous vehicles take the cake thus far. In June, for example, autonomous vehicles accounted for $717 million of the $1.4 billion that was invested into robotics companies.

Check out the table above for a month-by-month breakdown of robotics investments and follow our Investments Section for the latest news and analysis.

10. Johnson & Johnson acquired Auris Health

Johnson & Johnson (J&J) subsidiary Ethicon acquired Auris Health and its FDA-cleared Monarch platform for $3.4 billion. Auris is surgical robotics pioneer Dr. Fred Moll’s newest robotic surgical play. The acquisition is one of the 10 largest VC-backed, private M&A transactions of all time and will be both the largest robotics and the largest medtech private M&A deal in history. Kiva Systems previously held the title of largest robotics acquisition when it was purchased by Amazon for $775 million.

Auris’ robotic Monarch platform has FDA clearance for diagnostic and therapeutic bronchoscopic procedures. The system features a controller interface for navigating the integrated flexible robotic endoscope into the periphery of the lung and combines traditional endoscopic views with computer-assisted navigation based on 3D patient models. Auris said J&J’s global distribution will broaden access to the Monarch Platform.


20 largest robotics investments during 1st half of 2019



An autonomous, all-electric Chevrolet Bolt from Cruise, which raised $1.15 billion in May 2019. | Credit: Cruise

Robotics companies raised more than $15.6 billion during the first half of 2019. According to the robotics investments tracked and verified by The Robot Report, more than $2.6 billion was raised on average per month. The year started slowly with $644 million raised in January, but there was at least $1.3 billion raised each month thereafter.

For The Robot Report‘s investment analysis, autonomous vehicles, including technologies that support autonomous driving, and drones are considered robots. On the other hand, 3D printers, CNC systems, and various types of “hard” automation are not.

Robotics Investments for First 6 Months of 2019

Month         Investment Amount
January       $644M
February      $4.3B
March         $1.3B
April         $6.5B
May           $1.5B
June          $1.4B
Yearly Total  $15.64B
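The monthly figures in the table sum to the stated yearly total, which is easy to double-check:

```python
# Monthly robotics investment totals from the table, in $M
monthly_usd_m = {
    "January": 644, "February": 4300, "March": 1300,
    "April": 6500, "May": 1500, "June": 1400,
}
total_b = sum(monthly_usd_m.values()) / 1000  # yearly total in $B
```

The sum comes to $15.644 billion, matching the reported $15.64 billion.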

As you can see in the table below, autonomous vehicle investments made up a significant percentage of overall funding. Ten of the top 20 robotics investments tracked by The Robot Report belonged to companies producing autonomous vehicles or autonomous vehicle enabling technologies. Autonomous vehicle companies raised 56% ($4.6 billion) of the total $8.2 billion raised in the 20 investments. The top three autonomous vehicle investments belonged to Cruise ($1.15 billion), Uber ($1 billion) and Nuro ($940 million), which raised a combined $3.1 billion.

Healthcare robotics companies have also fared well in 2019. Intuitive Surgical raised $2 billion via a stock repurchase in February, while Think Surgical and Ekso Bionics raised $134 million and $100 million, respectively. HistoSonics raised $54 million in April for its medical robotics platform that destroys cancerous tumors without affecting surrounding tissue.

The Robot Report will have a detailed breakdown of investments by sector in a follow-up article.

To stay updated about the latest robotics investments and acquisitions, check out The Robot Report‘s Investment Section.

20 Largest Robotics Investments During 1st Half of 2019

Company | Funding ($M) | Lead Investor | Date | Technology
Intuitive Surgical | 2,000 | Stock repurchase | 2/1/19 | Surgical Robots
Cruise | 1,150 | Honda Motor Corp. | 5/7/19 | Autonomous Vehicles
Uber ATG | 1,000 | SoftBank Vision Fund | 4/18/19 | Autonomous Vehicles
Nuro.ai | 940 | SoftBank Vision Fund | 2/11/19 | Autonomous Vehicles
Horizon Robotics | 600 | SK China | 2/27/19 | AI/IoT
Aurora Innovation | 600 | Amazon | 2/7/19 | Autonomous Vehicles
Weltmeister Motor | 450 | Baidu Inc. | 3/11/19 | Autonomous Vehicles
Cloudminds | 300 | SoftBank Vision Fund | 3/26/19 | Service Robots
Zipline | 190 | TPG | 5/17/19 | Drone Delivery
Innoviz Technologies | 170 | China Merchants Capital | 3/26/19 | LiDAR
Think Surgical | 134 | (none listed) | 3/11/19 | Surgical Robots
Beijing Auto AI Technology | 104 | Robert Bosch Venture Capital | 1/24/19 | AI
Black Sesame Technologies | 100 | Legend Capital | 4/15/19 | Machine Learning
Ekso Bionics Holdings | 100 | Zhejiang Youchuang Venture Capital Investment Co. | 1/30/19 | Exoskeletons
TUSimple | 95 | Sina Corp | 2/13/19 | Autonomous Vehicles
Ouster | 60 | Runway Growth Capital | 3/25/19 | LiDAR
NASN Automotive | 59.6 | Matrix Partners China | 1/30/19 | Autonomous Vehicles
HistoSonics | 54 | Varian Medical | 4/8/19 | Medical Robots
Ike | 52 | Bain Capital Ventures | 2/5/19 | Autonomous Vehicles
Enflame | 43.4 | Redpoint China Ventures | 6/6/19 | AI Chipmaker

Editor’s note: What defines robotics investments? The answer to this simple question is central to any attempt to quantify robotics investments with some degree of rigor. To make investment analyses consistent, repeatable, and valuable, it is critical to wring out as much subjectivity as possible during the evaluation process. This begins with a definition of terms and a description of assumptions.

Investors and investing
Investment should come from venture capital firms, corporate investment groups, angel investors, and other sources. Friends-and-family investments, government/non-governmental agency grants, and crowd-sourced funding are excluded.

Robotics and intelligent systems companies
Robotics companies must generate or expect to generate revenue from the production of robotics products (that sense, think, and act in the physical world), hardware or software subsystems and enabling technologies for robots, or services supporting robotics devices. For this analysis, autonomous vehicles (including technologies that support autonomous driving) and drones are considered robots, while 3D printers, CNC systems, and various types of “hard” automation are not.

Companies that are “robotic” in name only, or that use the term “robot” to describe products and services that do not enable or support devices acting in the physical world, are excluded. Examples include “software robots” and robotic process automation. Many firms have multiple locations in different countries. Company locations given in the analysis are based on the publicly listed headquarters in legal documents, press releases, etc.

Verification
Funding information is collected from a number of public and private sources. These include press releases from corporations and investment groups, corporate briefings, and association and industry publications. In addition, information comes from sessions at conferences and seminars, as well as during private interviews with industry representatives, investors, and others. Unverifiable investments are excluded.
