Watch a Cassie bipedal robot run 100 meters

Cassie, a bipedal robot developed at the Oregon State University (OSU) College of Engineering and produced by OSU-spinout company Agility Robotics, recently ran 100 meters with no falls in 24.73 seconds at OSU’s Whyte Track and Field Center. The robot established a Guinness World Record for the fastest 100 meters by a bipedal robot. 

The bipedal robot’s average speed was just over 4 m/s, slightly slower than its top speed, because it started from a standing position and returned to that position after the sprint; the researchers behind the robot say this was one of the more challenging aspects of developing Cassie.

“Starting and stopping in a standing position are more difficult than the running part, similar to how taking off and landing are harder than actually flying a plane,” OSU AI Professor and collaborator on the project Alan Fern said. “This 100-meter result was achieved by a deep collaboration between mechanical hardware design and advanced artificial intelligence for the control of that hardware.”

Agility Robotics co-founder and CEO Damion Shelton will be keynoting RoboBusiness, which runs Oct. 19-20 in Santa Clara and is produced by WTWH Media, the parent company of The Robot Report. On Oct. 20 from 9-9:45 AM, Shelton will deliver a keynote called “Building Human-Centric Robots for Real-World Tasks.” Agility Robotics will also demo Digit during the session and on the expo floor, and tease the next version of Digit, which is due out this fall.

Cassie’s knees bend like those of an ostrich, the fastest-running bird on the planet at about 43 mph. The robot has no cameras or external sensors, meaning it is blind to its environment and is not autonomous.

Since Cassie’s introduction in 2017, OSU students have been exploring machine learning options in Oregon State’s Dynamic Robotics and AI Lab, where Cassie has been learning to run, walk and even go up and down stairs. To develop its robot control, the lab melded physics with AI approaches that are typically used with data and simulation.

The team compressed a year’s worth of simulated training into about a week using a computing technique called parallelization, in which multiple processes and calculations happen at the same time, allowing Cassie to go through a range of training experiences simultaneously.
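
As a rough illustration of the idea, the sketch below uses Python’s multiprocessing to run many simulated episodes at once instead of one after another; the `simulate_episode` function is a hypothetical stand-in, not the lab’s actual training code.

```python
# Rough sketch of parallelized rollout collection, assuming a
# hypothetical simulate_episode() standing in for one simulated
# training episode. This is NOT the lab's actual training code.
from multiprocessing import Pool
import random

def simulate_episode(seed: int) -> float:
    """Stand-in for one simulated episode; returns a dummy reward."""
    rng = random.Random(seed)
    return rng.random()

if __name__ == "__main__":
    # Many episodes run at the same time instead of one after another,
    # compressing a long serial training run into far less wall-clock time.
    with Pool(processes=8) as pool:
        rewards = pool.map(simulate_episode, range(1000))
    print(f"collected {len(rewards)} episodes in parallel")
```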

In 2021, Cassie traveled 5 kilometers in just over 53 minutes across OSU’s campus, untethered and on a single battery charge. During the run, Cassie used machine learning to control a running gait on outdoor terrain.

The bipedal robot was developed under the direction of Jonathan Hurst, an Oregon State robotics professor and co-founder and chief technology officer of Agility Robotics, with a 16-month, $1 million grant from the Defense Advanced Research Projects Agency (DARPA) and additional funding from the National Science Foundation.

“This may be the first bipedal robot to learn to run, but it won’t be the last,” Hurst said. “I believe control approaches like this are going to be a huge part of the future of robotics. The exciting part of this race is the potential. Using learned policies for robot control is a very new field, and this 100-meter dash is showing better performance than other control methods. I think progress is going to accelerate from here.”

A system for automating robot design inspired by the evolution of vertebrates

Researchers at Kyoto University and Nagoya University in Japan have recently devised a new, automatic approach for designing robots that could simultaneously improve their shape, structure, movements, and controller components. This approach, presented in a paper published in Artificial Life and Robotics, draws inspiration from the evolution of vertebrates, the broad category of animals that possess a backbone or spinal column, which includes mammals, reptiles, birds, amphibians, and fishes.

Exoskeleton walks out into the real world

For years, the Stanford Biomechatronics Laboratory has captured imaginations with their exoskeleton emulators—lab-based robotic devices that help wearers walk and run faster, with less effort. Now, these researchers will turn heads out in the "wild" with their first untethered exoskeleton, featured in a paper published Oct. 12 in Nature.

New walking robot design could revolutionize how we build things in space

Researchers have designed a state-of-the-art walking robot that could revolutionize large construction projects in space. They tested the feasibility of the robot for the in-space assembly of a 25m Large Aperture Space Telescope. They present their findings in Frontiers in Robotics and AI. A scaled-down prototype of the robot also showed promise for large construction applications on Earth.

Tiny, caterpillar-like soft robot folds, rolls, grabs and degrades

When you hear the term "robot," you might think of complicated machinery working in factories or roving on other planets. But "millirobots" might change that. They're robots about as wide as a finger that someday could deliver drugs or perform minimally invasive surgery. Now, researchers reporting in ACS Applied Polymer Materials have developed a soft, biodegradable, magnetic millirobot inspired by the walking and grabbing capabilities of insects.

Expressing and recognizing intentions in robots

The digital and physical worlds are becoming more and more populated by intelligent computer programs called agents. Agents have the potential to intelligently automate many daily tasks, such as maintaining an agenda, driving, and interacting with a phone or computer. However, there are many challenges to solve before getting there. One of them, as Michele Persiani shows in his thesis in computing science at Umeå University, is that agents need to recognize and express intentions.

Sensor breakdown: how robot vacuums navigate

An example block diagram for a robot vacuum. | Credit: InvenSense, a TDK company

Over the past few years, robot vacuums have advanced immensely. Initial models tended to bump their way randomly around the room, often missing key areas of the floor. They became trapped on thick rugs and, if vacuuming upstairs, could come tumbling down with a heavy thud. Their runtime was also relatively short, and you’d often come home hoping for a nice, clean room only to discover the robot had run out of juice halfway through.

Since those early days, these cons have turned into pros with the innovative use of sensors and motor controllers in combination with dedicated open-source software and drivers. Here is a look at some of the different sensors used in today’s robot vacuums for improved navigation and cleaning.

Ultrasonic time-of-flight sensors
Ultrasonic time-of-flight (ToF) sensors work in any lighting conditions and can provide millimeter-accurate range measurements independent of the target’s color and optical transparency. The sensor’s wide field-of-view (FoV) enables simultaneous range measurements of multiple objects. In a robot vacuum, they are used to detect whether an object, such as a dog or a child’s toy, is in the way and whether the robot needs to deviate from its route to avoid a collision.
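
The underlying range calculation is simple: the sensor times the echo’s round trip, and distance is half the acoustic path length. A minimal sketch:

```python
# Basic time-of-flight range calculation: the sensor times the echo's
# round trip, and distance is half the out-and-back path length.
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C

def range_from_echo(round_trip_s: float) -> float:
    """Return target distance in meters from echo round-trip time."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# An echo returning after 2.9 ms puts the obstacle about 0.5 m away.
print(f"{range_from_echo(0.0029):.2f} m")
```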

Short-range ultrasonic ToF sensors
Short-range ultrasonic ToF sensors can be used to determine different floor types. The application uses the average amplitude of a reflected ultrasonic signal to determine whether the target surface is hard or soft. If the robot vacuum detects that it has moved from a carpet onto a hardwood floor, it can slow the motors down because they do not need to work as hard as they do on carpet.

The cliff detection feature can enable the robot vacuum to determine when it’s at the top of a set of stairs to prevent a fall.
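
A toy sketch of both uses follows; the amplitude and range thresholds are invented for illustration, and real firmware would calibrate them per sensor and filter the readings over time.

```python
# Illustrative only: thresholds are made up, and production firmware
# would calibrate per sensor and smooth readings over time.
def classify_floor(avg_echo_amplitude: float,
                   hard_floor_threshold: float = 0.6) -> str:
    """Hard floors reflect more ultrasound than soft, absorbent carpet."""
    return "hard" if avg_echo_amplitude >= hard_floor_threshold else "carpet"

def is_cliff(downward_range_m: float,
             floor_gap_limit_m: float = 0.05) -> bool:
    """A downward range far beyond normal floor distance implies a drop-off."""
    return downward_range_m > floor_gap_limit_m

print(classify_floor(0.8))  # 'hard'  -> motors can slow down
print(is_cliff(0.30))       # True    -> stop before the stairs
```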

VSLAM and LiDAR
Most companies developing high-end robot vacuums use visual simultaneous localization and mapping (VSLAM) or LiDAR technology to build a virtual map of the room. These technologies let the robot vacuum move around more efficiently, covering an entire level of a home with multiple rooms. However, if you lift the robot and put it down elsewhere, it will not know its new location. To re-localize, the robot must head off in a random direction until it detects an object; once it starts tracing the walls, it can work out where it is relative to the map.
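
To make that concrete in miniature, here is a toy sketch (not any product’s algorithm) that scores candidate positions on a saved occupancy grid against wall distances measured in the four cardinal directions; real VSLAM and LiDAR relocalization is far more sophisticated.

```python
# Toy relocalization: score candidate cells on a saved occupancy grid
# by comparing predicted wall distances (up/down/left/right) against
# what the robot actually measures while tracing walls.
import numpy as np

def wall_distances(grid: np.ndarray, r: int, c: int) -> np.ndarray:
    """Distance (in cells) to the nearest wall up, down, left, right."""
    col, row = grid[:, c], grid[r, :]
    up = r - np.flatnonzero(col[:r])[-1]
    down = np.flatnonzero(col[r + 1:])[0] + 1
    left = c - np.flatnonzero(row[:c])[-1]
    right = np.flatnonzero(row[c + 1:])[0] + 1
    return np.array([up, down, left, right])

def relocalize(grid: np.ndarray, measured: np.ndarray) -> tuple:
    """Return the free cell whose predicted distances best match."""
    best, best_err = None, np.inf
    for r in range(1, grid.shape[0] - 1):
        for c in range(1, grid.shape[1] - 1):
            if grid[r, c]:
                continue  # skip wall cells
            err = np.abs(wall_distances(grid, r, c) - measured).sum()
            if err < best_err:
                best, best_err = (r, c), err
    return best

# 1 = wall, 0 = free space; border walls enclose a small empty room.
grid = np.ones((8, 8), dtype=int)
grid[1:-1, 1:-1] = 0
print(relocalize(grid, np.array([3, 4, 4, 3])))  # -> (3, 4)
```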

Camera-based VSLAM may struggle in low-light areas, for example if the robot vacuum goes under a table or couch, where it cannot see enough visual features to match its surroundings against the map.

An example of the mapping capabilities of iRobot’s j7 robot vacuum. | Credit: iRobot

Inertial Measurement Units (IMU)
IMUs measure the robot vacuum’s roll, pitch, and yaw, capturing its movement in the real world from both a linear and a rotational perspective. Whether the robot vacuum is driving in circles or moving in a straight line, it knows where it is supposed to go and how it is actually moving. There may be a slight error between where it should be and where it is, and the IMU’s measurements allow that position to be tracked very accurately.

Based on rotational and linear movement, plus the mapping of the room, the robot vacuum can determine that it is not going over the same areas twice and can pick up where it left off if the battery dies. And if someone picks up the robot vacuum and places it somewhere else or turns it around, it can detect what is happening and know where it is in real space. The IMU is essential to making robot vacuums efficient.

For robot vacuums that do not use VSLAM or LiDAR mapping technology, position and navigation can be determined by dead reckoning, combining measurements of the wheels’ rotations with inertial measurements from the IMU and object detection from the ToF sensors.
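
A minimal dead-reckoning sketch for a differential-drive robot might look like the following, with wheel travel supplying distance and the gyro supplying heading; the names are illustrative, not any vendor’s API.

```python
# Hedged sketch of dead reckoning for a differential-drive robot:
# wheel encoders give distance traveled, the IMU's gyro gives heading,
# and integrating both yields a position estimate.
import math

class DeadReckoner:
    def __init__(self) -> None:
        self.x = self.y = self.heading = 0.0  # meters, meters, radians

    def update(self, left_wheel_m: float, right_wheel_m: float,
               gyro_yaw_rate: float, dt: float) -> None:
        # Distance traveled is the average of the two wheels' travel.
        distance = (left_wheel_m + right_wheel_m) / 2.0
        # Trust the gyro for heading; wheel-only heading drifts on slip.
        self.heading += gyro_yaw_rate * dt
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

reckoner = DeadReckoner()
for _ in range(100):  # straight line: equal wheel travel, no rotation
    reckoner.update(0.01, 0.01, 0.0, 0.02)
print(f"({reckoner.x:.2f} m, {reckoner.y:.2f} m)")  # -> (1.00 m, 0.00 m)
```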

Smart speaker microphones
As developers of robot vacuums continue to implement artificial intelligence (AI) with the ability to use voice assistants, microphones become an essential sensor technology. Take beamforming, for example: a signal-processing technique, long used in radio frequency (RF) systems, that focuses a microphone array’s sensitivity toward the desired sound source, combined with AI for fine-tuning. At the moment, the noise of the motors and the turning brushes on a robot vacuum is a bit loud. However, as microphone technology progresses and motors and brushes become quieter, beamforming should let microphones pick out the user’s voice in the not-too-distant future.

Algorithms can also be trained to disregard certain noises and listen specifically for the voice of the user. Presumably, the user wants to call for the vacuum cleaner to clean something up, or tell it to go home, without going through an app or a separate voice assistant product, and you want that to happen in real time inside the host processor of the robot vacuum. Alternatively, if the microphone detects that something is being spoken, the robot vacuum could stop all of its motors to listen for the command.
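
As a hedged illustration of the simplest form of beamforming, delay-and-sum, the sketch below aligns and averages two microphone channels so that independent noise partially cancels while the target voice does not. Real products layer adaptive filtering and AI-based noise suppression on top of this.

```python
# Minimal delay-and-sum beamformer for a two-microphone array:
# delaying one channel so the target direction's sound lines up,
# then summing, reinforces that direction and attenuates others.
import numpy as np

def delay_and_sum(mic_a: np.ndarray, mic_b: np.ndarray,
                  delay_samples: int) -> np.ndarray:
    """Steer toward a source whose wavefront reaches mic_b `delay_samples` later."""
    aligned_b = np.roll(mic_b, -delay_samples)
    return (mic_a + aligned_b) / 2.0

fs = 16_000                          # sample rate, Hz
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 300 * t)  # stand-in for speech
mic_a = voice + np.random.randn(fs) * 0.5
mic_b = np.roll(voice, 4) + np.random.randn(fs) * 0.5  # 4-sample lag

out = delay_and_sum(mic_a, mic_b, delay_samples=4)
# Independent noise averages down; the aligned voice does not.
print(f"noise power before: {np.var(mic_a - voice):.2f}, "
      f"after: {np.var(out - voice):.2f}")
```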

Embedded motor controllers
The embedded motor controllers turn the gears that move the robot vacuum’s wheels in the correct direction, with enough accuracy to tell when a wheel has actually turned 90 degrees as opposed to 88 degrees. Without this level of accuracy, the robot vacuum will drift far off track over time. The embedded motor controller can work with or without additional sensors, which makes the robot vacuum design scalable.
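
One common way to achieve that accuracy is a PID loop closed around an encoder reading; the sketch below is illustrative, with made-up gains rather than any product’s tuning.

```python
# Illustrative PID position loop for one wheel: the encoder reports the
# actual wheel angle, and the controller drives motor effort to close
# the gap between, say, 88° measured and the commanded 90°.
class WheelPID:
    def __init__(self, kp: float, ki: float, kd: float) -> None:
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def effort(self, target_deg: float, measured_deg: float,
               dt: float) -> float:
        error = target_deg - measured_deg
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = WheelPID(kp=0.8, ki=0.2, kd=0.05)  # gains are invented
print(pid.effort(target_deg=90.0, measured_deg=88.0, dt=0.01))
```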

Pressure sensors
The level of dust inside the dust box is estimated by monitoring the airflow through the dustbin with a pressure sensor. Compared to the air pressure when the dustbin is empty, the pressure inside begins to drop as the airflow stagnates, whether from accumulating dust or a clogging filter. For more accurate detection, however, it is better to measure a differential pressure, using a second, similar pressure sensor to measure the outside air pressure.
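
A minimal sketch of that differential-pressure check, with an invented “bin full” threshold, might look like this:

```python
# Sketch of the differential-pressure idea: a second sensor reading
# ambient pressure lets firmware compare inside vs. outside the dustbin,
# so weather- or altitude-driven pressure swings cancel out.
FULL_BIN_DROP_PA = 250.0  # hypothetical pressure drop at "bin full"

def bin_needs_emptying(dustbin_pa: float, ambient_pa: float) -> bool:
    """Restricted airflow lowers dustbin pressure relative to ambient."""
    differential = ambient_pa - dustbin_pa
    return differential >= FULL_BIN_DROP_PA

print(bin_needs_emptying(dustbin_pa=101_050.0, ambient_pa=101_325.0))  # True
```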

A lot of the high-end bases have the capability to suck out the contents of the dust box automatically. The robot vacuum can then return to base, empty its contents, return to its last known position and continue cleaning.

Auto-recharging
To determine the battery’s state of charge (SoC), you need accurate current and voltage measurements. Coulomb counters in the battery pack provide this information, while NTC thermistors supply the temperature data needed to compensate the readings.

When the battery reaches a preset SoC threshold, it signals the robot vacuum to stop cleaning and return to the base for a recharge. When fully charged, the robot vacuum goes back to its last known position and continues cleaning. In theory, regardless of the size of the room, with multiple chargers and multiple opportunities to empty the dustbin, the robot vacuum can cover the entire floor space.
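
A simple coulomb-counting sketch of that behavior, with illustrative capacity and threshold values, follows:

```python
# Coulomb-counting sketch: integrate current draw over time to track
# charge used, then trigger a return-to-base below a preset SoC level.
CAPACITY_MAH = 3000.0    # illustrative pack capacity
RETURN_THRESHOLD = 0.20  # head home at 20% SoC

def update_soc(soc: float, current_ma: float, dt_hours: float) -> float:
    """Subtract the charge drawn this interval as a fraction of capacity."""
    return soc - (current_ma * dt_hours) / CAPACITY_MAH

soc = 1.0  # start fully charged
while soc > RETURN_THRESHOLD:
    soc = update_soc(soc, current_ma=1500.0, dt_hours=1 / 60)  # 1.5 A draw
print(f"return to base at {soc:.0%} SoC")
```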

Thermistors
Thermistors, a type of temperature sensor, can be used to monitor the running temperature of the MCU or MPU, as well as of the motors and brush gears. If these are running far too hot, the robot vacuum is instructed to take a break and perhaps run a few system diagnostics to find out what is causing the problem. Items caught in the brushes, like an elastic band or excess hair, can make the motors overcompensate and overheat.
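
For reference, a common way to convert an NTC thermistor’s resistance into a temperature is the beta-parameter model, a simplification of the Steinhart-Hart equation; the constants below match a typical 10 kΩ part but should come from the actual component’s datasheet.

```python
# Beta-parameter model for an NTC thermistor: resistance falls as
# temperature rises. R0/T0/BETA values are typical, not from a
# specific datasheet.
import math

R0, T0_K, BETA = 10_000.0, 298.15, 3950.0  # 10 kΩ at 25 °C

def ntc_temperature_c(resistance_ohm: float) -> float:
    inv_t = 1.0 / T0_K + math.log(resistance_ohm / R0) / BETA
    return 1.0 / inv_t - 273.15

print(f"{ntc_temperature_c(5_000.0):.1f} °C")  # lower R -> hotter motor
```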

Robot vacuum developers should understand what the motors are supposed to sound like within a certain frequency range. It is possible to use a microphone to detect whether the motors are running abnormally, thereby catching the early stages of motor degradation. Again, through diagnostics, abnormal noise from the brushes could indicate that they have picked up debris.
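
A toy sketch of that acoustic check, using an FFT to compare the dominant motor tone against an invented expected frequency, might look like this:

```python
# Sketch of acoustic motor monitoring: compare the dominant frequency
# in a microphone capture against the motor's expected tone. The
# expected frequency and tolerance are illustrative.
import numpy as np

def dominant_frequency_hz(samples: np.ndarray, fs: int) -> float:
    spectrum = np.abs(np.fft.rfft(samples))
    spectrum[0] = 0.0  # ignore the DC component
    return np.fft.rfftfreq(len(samples), 1 / fs)[np.argmax(spectrum)]

def motor_sounds_abnormal(samples: np.ndarray, fs: int,
                          expected_hz: float = 120.0,
                          tolerance_hz: float = 15.0) -> bool:
    return abs(dominant_frequency_hz(samples, fs) - expected_hz) > tolerance_hz

fs = 8_000
t = np.arange(fs) / fs
healthy = np.sin(2 * np.pi * 120 * t)  # motor's normal tone
labored = np.sin(2 * np.pi * 80 * t)   # slowed, struggling motor
print(motor_sounds_abnormal(healthy, fs))  # False
print(motor_sounds_abnormal(labored, fs))  # True
```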

Conclusion
The retail price of a robot vacuum goes hand in hand with functionality and accuracy; some of the high-end models can be as much as $1,100. You can get a robot vacuum for closer to $200, but you will be sacrificing some of the bells and whistles. It all depends on the value the robot vacuum developer wants to create and the cost structure that works best for the user.

As component costs come down, it seems likely that more mid-tier robot vacuums will enter the market. Technologies like ToF sensors, pressure sensors, IMUs and motor controllers, along with improvements in battery efficiency, will drive this growth.

About the Author
For seven years, Peter Hartwell has been the chief technology officer at InvenSense, a TDK company. He holds more than 40 patents and oversees 600 engineers who have developed a broad range of technologies and sensors for drones, automotive, industrial and, more broadly, IoT applications. Hartwell has more than 25 years of experience commercializing silicon MEMS products and working on advanced sensors and actuators, and specializes in MEMS testing techniques.

Prior to joining InvenSense, he spent four years as an architect of sensing hardware at Apple where he built and led a team responsible for the integration of accelerometer, gyroscope, magnetometer, pressure, proximity, and ambient light sensors across the entire product line. Hartwell holds a B.S. in Materials Science from the University of Michigan and a Ph.D. in Electrical Engineering from Cornell University.
