SLAMcore spatial intelligence software now fully supports ROS 2

SLAMcore cartoon of robots looking at a map before entering a warehouse

SLAMcore enables robots to understand their environment and maintain localization within a map.

SLAMcore’s spatial intelligence software and SDK are now fully compatible with ROS 2. The Robot Operating System (ROS) is an open-source collection of software frameworks for robotics development. SLAMcore also supports ROS 1, allowing developers to integrate vision-based SLAM software into a variety of robots.

SLAMcore’s vision-based SLAM enables full 3D mapping and path planning within ROS 2 and supports the development of semantic mapping, which adds an understanding of objects within a map. The company said its algorithms take advantage of several enhancements in ROS 2, specifically the upgraded Nav2 stack for fully autonomous navigation and improved support for embedded processors.
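To give a sense of what that integration looks like on the ROS 2 side, below is a minimal rclpy sketch of a node that subscribes to a 6-degree-of-freedom pose published by a vision-based SLAM system. The topic name "/slamcore/pose" is a placeholder assumption for illustration, not SLAMcore's documented interface.

```python
# Minimal ROS 2 (rclpy) node that consumes a 6-DoF pose from a vision-based
# SLAM system. The topic name "/slamcore/pose" is a placeholder, not the
# vendor's documented interface.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped


class PoseListener(Node):
    def __init__(self):
        super().__init__('pose_listener')
        # Position + orientation estimates arrive as PoseStamped messages.
        self.create_subscription(PoseStamped, '/slamcore/pose', self.on_pose, 10)

    def on_pose(self, msg: PoseStamped) -> None:
        p = msg.pose.position
        self.get_logger().info(f'robot at x={p.x:.2f}, y={p.y:.2f}, z={p.z:.2f}')


def main() -> None:
    rclpy.init()
    rclpy.spin(PoseListener())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

A Nav2-based navigation stack would typically consume the same pose estimate, usually through the TF tree, to plan and follow paths.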

Founded in 2016, London-based SLAMcore summed up the benefits of supporting ROS 2 as follows:

  • Enhanced SLAM efficiency for better memory and processor utilization: accurate, real-time position estimates (in six degrees of freedom) run locally on minimal compute and memory, freeing resources for other product capabilities.
  • Full 3D mapping and path planning: dense, accurate 3D voxel-based maps of the robot’s surroundings for navigation.
  • Potential for semantic object maps: providing access to future SLAMcore capabilities including semantic object identification and labelling within maps.

“Our customers are looking to deploy robots in real-world and at-scale situations and are turning to vision-based SLAM systems for efficient mapping, location and positioning,” said SLAMcore CEO Owen Nicholson. “Integrating SLAMcore’s leading spatial intelligence with ROS 2 designs is a straightforward and highly cost-effective approach for them to quickly address complex SLAM challenges and move projects forward faster.”

Related: Overcoming the robotics Tower of Babel

The SLAMcore SDK, with support for ROS, ROS 2, and C++ interfaces, is available now. It can be downloaded from SLAMcore.com and deployed with standard hardware. SLAMcore’s engineers support a wide range of hardware and bespoke application set-ups, and next-generation capabilities are being explored at SLAMcore Labs.


Developing open-source systems for first responder legged robots

Digit, a bipedal robot from Agility Robotics, being tested at the University of Michigan. | Photo Credit: Joseph Xu/University of Michigan Engineering

Tomorrow’s wildfire fighters and other first responders may tag-team with robotic assistants that can hike through wilderness areas and disaster zones, thanks to a University of Michigan research project funded by a new $1 million grant from the National Science Foundation.

A key goal of the three-year project is to enable robots to navigate in real time, without the need for a pre-existing map of the terrain they’re to traverse. The project aims to take bipedal (two-legged) walking robots to a new level, equipping them to adapt on the fly to treacherous ground, dodge obstacles or decide whether a given area is safe for walking. The technology could enable robots to go into areas that are too dangerous for humans, including collapsed buildings and other disaster areas. It could also lead to prosthetics that are more intuitive for their users.

“I envision a robot that can walk autonomously through the forest here on North Campus and find an object we’ve hidden. That’s what’s needed for robots to be useful in search and rescue, and no robot right now can do it,” said Jessy Grizzle, principal investigator on the project and the Elmer G. Gilbert Distinguished University Professor of Engineering at U-M.

Grizzle, an expert in legged robots, is partnering on the project with Maani Ghaffari Jadidi, an assistant professor of naval architecture and marine engineering and expert in robotic perception. Grizzle says the pair’s complementary areas of expertise will enable them to work on broader swathes of technology than has been possible in the past.

To make it happen, the team will embrace an approach called “full-stack robotics,” integrating a series of new and existing pieces of technology into a single, open-source perception and movement system that can be adapted to robots beyond those used in the project itself. The technology will be tested on Digit and Mini Cheetah robots.

“What full-stack robotics means is that we’re attacking every layer of the problem at once and integrating them together,” Grizzle said. “Up to now, a lot of roboticists have been solving very specific individual problems. With this project, we aim to integrate what has already been done into a cohesive system, then identify its weak points and develop new technology where necessary to fill in the gaps.”


A Mini Cheetah robot at the University of Michigan. | Photo Credit: Robert Coelius, University of Michigan Engineering

One area of particular focus will be mapping – the project aims to find ways for robots to develop rich, multidimensional maps based on real-time sensory input so that they can determine the best way to cover a given patch of ground.

“When we humans go hiking, it’s easy for us to recognize areas that are too difficult or dangerous and stay away,” Ghaffari said. “We want a robot to be able to do something similar by using its perception tools to build a real-time map that looks several steps ahead and includes a measure of walkability. So it will know to stay away from dangerous areas, and it will be able to plan a route that uses its energy efficiently.”

Grizzle predicts that legged robots will be able to do this using math – for example, by calculating a standard deviation of ground height variation or how slippery a surface is. He plans to build more sophisticated perceptual tools that will help robots gather data by analyzing what their limbs are doing—a slip on an icy surface or a kick on a mogul, for example, would generate a new data point. The system will also help robots navigate loose ground and moving objects, such as rolling branches.
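As a rough illustration of the kind of walkability measure Grizzle describes (not the U-M team's code), the sketch below computes the standard deviation of ground height within a small window of an elevation grid and flags cells that exceed a threshold. The grid, window size, and threshold are all hypothetical.

```python
# Illustrative walkability measure: local standard deviation of terrain height.
# The elevation grid, window size, and threshold are placeholder assumptions.
import numpy as np


def roughness_map(heights, window=5):
    """Standard deviation of terrain height in a (window x window) patch around each cell."""
    h, w = heights.shape
    out = np.zeros_like(heights, dtype=float)
    r = window // 2
    for i in range(h):
        for j in range(w):
            patch = heights[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = patch.std()
    return out


heights = np.random.default_rng(0).random((50, 50)) * 0.1   # placeholder elevation grid (metres)
walkable = roughness_map(heights) < 0.03                    # threshold chosen for illustration
```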

Rich and easily understandable maps, Ghaffari explained, will be equally important to the humans who may one day be operating those robots remotely in search-and-rescue operations or other applications.

“A shared understanding of the environment between humans and robots is essential, because the more a human team can see, the better they can interpret what the robot team is trying to accomplish,” Ghaffari said. “And that can help humans to make better decisions about what other resources need to be brought in or how the mission should proceed.”

Agility Robotics, developer of the Digit robot, recently released a video showcasing how its humanoid robots are being tested in warehousing applications.


Semantic SLAM navigation targets last-mile delivery robots


Last-mile delivery robots could use an MIT algorithm to find the front door, using environmental clues. | Credit: MIT

In the not too distant future, last-mile delivery robots may be able to drop your takeout order, package, or meal-kit subscription at your doorstep – if they can find the door.

Standard approaches for robotic navigation involve mapping an area ahead of time, then using algorithms to guide a robot toward a specific goal or GPS coordinate on the map. While this approach might make sense for exploring specific environments, such as the layout of a particular building or planned obstacle course, it can become unwieldy in the context of last-mile delivery robots.

Imagine, for instance, having to map in advance every single neighborhood within a robot’s delivery zone, including the configuration of each house within that neighborhood along with the specific coordinates of each house’s front door. Such a task can be difficult to scale to an entire city, particularly as the exteriors of houses often change with the seasons. Mapping every single house could also run into issues of security and privacy.

Now MIT engineers have developed a navigation method that doesn’t require mapping an area in advance. Instead, their approach enables a robot to use clues in its environment to plan out a route to its destination, which can be described in general semantic terms, such as “front door” or “garage,” rather than as coordinates on a map. For example, if a robot is instructed to deliver a package to someone’s front door, it might start on the road and see a driveway, which it has been trained to recognize as likely to lead toward a sidewalk, which in turn is likely to lead to the front door.

Related: Delivery tests combine autonomous vehicles, bipedal robots

The new technique can greatly reduce the time last-mile delivery robots spend exploring a property before identifying its target, and it doesn’t rely on maps of specific residences.

“We wouldn’t want to have to make a map of every building that we’d need to visit,” says Michael Everett, a graduate student in MIT’s Department of Mechanical Engineering. “With this technique, we hope to drop a robot at the end of any driveway and have it find a door.”

Everett presented the group’s results at the International Conference on Intelligent Robots and Systems. The paper, which is co-authored by Jonathan How, professor of aeronautics and astronautics at MIT, and Justin Miller of the Ford Motor Company, is a finalist for “Best Paper for Cognitive Robots.”

“A sense of what things are”

In recent years, researchers have worked on introducing natural, semantic language to robotic systems, training robots to recognize objects by their semantic labels, so they can visually process a door as a door, for example, and not simply as a solid, rectangular obstacle.

“Now we have an ability to give robots a sense of what things are, in real-time,” Everett says.

Everett, How, and Miller are using similar semantic techniques as a springboard for their new navigation approach, which leverages pre-existing algorithms that extract features from visual data to generate a new map of the same scene, represented as semantic clues, or context.

In their case, the researchers used an algorithm to build up a map of the environment as the robot moved around, using the semantic labels of each object and a depth image. This algorithm is called semantic SLAM (Simultaneous Localization and Mapping).
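The sketch below is a heavily simplified illustration of that idea, not the researchers' implementation: per-pixel semantic labels and a depth image are projected onto a top-down grid so that each cell records the class of the object observed there. The camera intrinsics and grid parameters are placeholder assumptions.

```python
# Simplified sketch of fusing a depth image and per-pixel semantic labels
# into a top-down semantic grid map. Flat ground is assumed, so only the
# horizontal intrinsics (fx, cx) are used; all parameters are placeholders.
import numpy as np


def semantic_grid(depth, labels, fx, cx, cell=0.1, size=200):
    """Project per-pixel class labels onto a top-down (size x size) grid map."""
    grid = np.full((size, size), -1, dtype=int)       # -1 means "unobserved"
    h, w = depth.shape
    us = np.tile(np.arange(w), (h, 1))                # pixel column index
    z = depth                                         # distance along the optical axis, metres
    x = (us - cx) * z / fx                            # lateral offset in metres
    gi = np.floor(x / cell).astype(int) + size // 2   # grid column
    gj = np.floor(z / cell).astype(int)               # grid row (distance ahead)
    ok = (z > 0) & (gi >= 0) & (gi < size) & (gj >= 0) & (gj < size)
    grid[gj[ok], gi[ok]] = labels[ok]                 # last observation wins in this sketch
    return grid
```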

While other semantic algorithms have enabled robots to recognize and map objects in their environment for what they are, they haven’t allowed a robot to decide, in the moment and while navigating a new environment, on the most efficient path to take to a semantic destination such as a “front door.”

“Before, exploring was just, plop a robot down and say ‘go,’ and it will move around and eventually get there, but it will be slow,” How says.

The cost to go

The researchers looked to speed up a robot’s path-planning through a semantic, context-colored world. They developed a new “cost-to-go estimator,” an algorithm that converts a semantic map created by pre-existing SLAM algorithms into a second map, representing the likelihood of any given location being close to the goal.

“This was inspired by image-to-image translation, where you take a picture of a cat and make it look like a dog,” Everett says. “The same type of idea happens here where you take one image that looks like a map of the world, and turn it into this other image that looks like the map of the world but now is colored based on how close different points of the map are to the end goal.”

This cost-to-go map is rendered in grayscale: darker regions represent locations far from the goal, and lighter regions represent areas close to it. For instance, the sidewalk, coded in yellow in a semantic map, might be translated by the cost-to-go algorithm as a darker region in the new map, compared with a driveway, which gets progressively lighter as it approaches the front door, the lightest region in the new map.
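A non-learned stand-in makes the grayscale convention concrete. In the sketch below, a plain Euclidean distance field from the goal plays the role of the cost-to-go estimator, with values near 1.0 (light) at the goal and falling toward 0.0 (dark) far away. The MIT estimator learns this mapping from semantic maps rather than computing it geometrically, so this is an illustration of the output format only.

```python
# Stand-in for the learned cost-to-go estimator: a geometric distance field
# with the same light-near-goal / dark-far-from-goal convention. The goal
# location is an arbitrary assumption for illustration.
import numpy as np


def cost_to_go_image(shape, goal):
    """Grayscale field in [0, 1]: 1.0 (lightest) at the goal, darker farther away."""
    ys, xs = np.indices(shape)
    dist = np.hypot(ys - goal[0], xs - goal[1])
    return 1.0 - dist / dist.max()


ctg = cost_to_go_image((100, 100), goal=(10, 80))   # "front door" assumed at cell (10, 80)
```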

The researchers trained this new algorithm on satellite images from Bing Maps containing 77 houses from one urban and three suburban neighborhoods. The system converted a semantic map into a cost-to-go map, and mapped out the most efficient path, following lighter regions in the map, to the end goal. For each satellite image, Everett assigned semantic labels and colors to context features in a typical front yard, such as grey for a front door, blue for a driveway, and green for a hedge.

During this training process, the team also applied masks to each image to mimic the partial view that a robot’s camera would likely have as it traverses a yard.
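One way to mimic that partial view, sketched below under the assumption of a simple band-shaped mask (the paper's exact mask geometry is not specified here), is to zero out everything outside a randomly placed visible region of the labeled map.

```python
# Hedged sketch of the masking step: hide part of a labeled top-down map so
# the training data resembles what a ground robot's camera could actually see.
import numpy as np


def apply_partial_view(sem_map, visible_frac=0.4, rng=None):
    """Keep only a randomly placed horizontal band of the map; zero out the rest."""
    rng = rng or np.random.default_rng()
    h = sem_map.shape[0]
    band = max(1, int(h * visible_frac))
    top = int(rng.integers(0, h - band + 1))
    masked = np.zeros_like(sem_map)
    masked[top:top + band] = sem_map[top:top + band]
    return masked
```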

“Part of the trick to our approach was [giving the system] lots of partial images,” How explains. “So it really had to figure out how all this stuff was interrelated. That’s part of what makes this work robustly.”

The researchers then tested their approach in a simulation of an image of an entirely new house, outside of the training dataset, first using the preexisting SLAM algorithm to generate a semantic map, then applying their new cost-to-go estimator to generate a second map, and path to a goal, in this case, the front door.

The group’s new cost-to-go technique found the front door 189 percent faster than classical navigation algorithms, which do not take context or semantics into account, and instead spend excessive steps exploring areas that are unlikely to be near their goal.
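A toy version of “follow the lighter regions” helps explain the comparison: starting from the robot’s cell, repeatedly step to whichever neighboring cell has the highest cost-to-go value until no neighbor improves on it. This greedy hill-climb is an illustration only, not the authors’ planner, and the synthetic map below is an assumption.

```python
# Toy "follow the lighter regions" planner over a cost-to-go grid.
# Greedy hill-climbing toward higher (lighter) values; illustration only.
import numpy as np


def greedy_path(ctg, start, max_steps=500):
    """Step to the brightest neighboring cell until no neighbor is brighter."""
    path = [start]
    for _ in range(max_steps):
        i, j = path[-1]
        best, best_val = (i, j), ctg[i, j]
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < ctg.shape[0] and 0 <= nj < ctg.shape[1] and ctg[ni, nj] > best_val:
                    best, best_val = (ni, nj), ctg[ni, nj]
        if best == (i, j):          # no lighter neighbor: goal (or a local maximum) reached
            break
        path.append(best)
    return path


# Synthetic cost-to-go map whose lightest cell stands in for the "front door".
ctg = np.fromfunction(lambda i, j: -np.hypot(i - 10, j - 80), (100, 100))
route = greedy_path(ctg, start=(90, 10))        # ends at or near cell (10, 80)
```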

Everett says the results illustrate how robots can use context to efficiently locate a goal, even in unfamiliar, unmapped environments.

“Even if a robot is delivering a package to an environment it’s never been to, there might be clues that will be the same as other places it’s seen,” Everett says. “So the world may be laid out a little differently, but there’s probably some things in common.”

This research is supported, in part, by the Ford Motor Company.

Editor’s Note: This article was republished with permission from MIT News.
