Linux Foundation launches Overture Maps Foundation


The Overture Maps Foundation, created by the Linux Foundation, aims to help developers who build map services or use geospatial data. | Source: Overture Maps Foundation

The Linux Foundation announced it formed the Overture Maps Foundation, a collaborative effort to create interoperable open map data as a shared asset. The Overture Maps Foundation aims to strengthen mapping services worldwide and enable current and next-generation mapping products. These mapping services could be crucial to robotic applications like autonomous driving. 

Currently, companies developing and rolling out autonomous vehicles have to spend massive amounts of time and money meticulously mapping the cities they’re deploying in. They then have to continuously remap those cities to account for changes such as road work or new traffic laws.

The foundation was founded by Amazon Web Services (AWS), Meta, Microsoft and TomTom. Overture hopes to add more members in the future to include a wide range of signals and data inputs. Members of the foundation will combine their resources to create map data that is complete, accurate and refreshed as the physical world changes. The resulting data will be open and extensible under an open data license.

“Mapping the physical environment and every community in the world, even as they grow and change, is a massively complex challenge that no one organization can manage. Industry needs to come together to do this for the benefit of all,” Jim Zemlin, executive director for the Linux Foundation, said. “We are excited to facilitate this open collaboration among leading technology companies to develop high quality, open map data that will enable untold innovations for the benefit of people, companies, and communities.”

The Overture Maps Foundation aims to build maps using data from multiple sources, including Overture members, civic organizations and open data sources, and to simplify interoperability by creating a system that links entities from different datasets to the same real-world entities. All data used by Overture will undergo validation to detect map errors, breakage and vandalism.
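To make the entity-linking idea concrete, here is a minimal sketch of cross-dataset conflation: pairing records that sit close together and carry similar names. The data, field names and thresholds below are all hypothetical; Overture has not published its pipeline at this level of detail.

```python
import math
from difflib import SequenceMatcher

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def link_entities(a, b, max_dist_m=50.0, min_name_sim=0.8):
    """Pair records from datasets a and b that likely describe the
    same real-world place: close together and similarly named."""
    links = []
    for ra in a:
        for rb in b:
            dist = haversine_m(ra["lat"], ra["lon"], rb["lat"], rb["lon"])
            sim = SequenceMatcher(None, ra["name"].lower(), rb["name"].lower()).ratio()
            if dist <= max_dist_m and sim >= min_name_sim:
                links.append((ra["id"], rb["id"]))
    return links

# Two toy records from different providers, ~14 m apart with a spelling variant.
dataset_a = [{"id": "a1", "name": "Central Cafe", "lat": 45.5231, "lon": -122.6765}]
dataset_b = [{"id": "b9", "name": "Central Café", "lat": 45.5232, "lon": -122.6766}]
print(link_entities(dataset_a, dataset_b))  # → [("a1", "b9")]
```

Real conflation systems add spatial indexing and much richer matching features, but the core operation is this pairwise "same place?" decision.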

Overture also aims to help drive the adoption of a common, structured and documented data schema to create an easy-to-use ecosystem of map data. Currently, developers looking to create detailed maps have to source and curate their data from disparate sources, which can be difficult and expensive. Not to mention, many datasets use different conventions and vocabulary to reference the same real-world entities. 

“Microsoft is committed to closing the data divide and helping organizations of all sizes to realize the benefits of data as well as the new technologies it powers, including geospatial data,” Russell Dicker, Corporate Vice President, Product, Maps and Local at Microsoft, said. “Current and next-generation map products require open map data built using AI that’s reliable, easy-to-use and interoperable. We’re proud to contribute to this important work to help empower the global developer community as they build the next generation of location-based applications.” 

Overture hopes to release its first datasets in the first half of 2023. The initial release will include basic layers such as buildings, roads and administrative information, but Overture plans to steadily add more layers, like places, routing and 3D building data.

The post Linux Foundation launches Overture Maps Foundation appeared first on The Robot Report.

2023 robotics predictions from industry experts


We already recapped the most memorable and most popular stories of 2022, as well as the major acquisitions. You can find all of The Robot Report‘s 2022 Year in Review coverage here.

With 2023 just underway, we asked some of the robotics industry’s leading minds to look to the future. Here’s what they’ll be keeping an eye on in 2023. This article will be updated if additional experts weigh in.


Ken Goldberg, professor of industrial engineering and operations research and William S. Floyd Jr. Distinguished Chair in Engineering, UC Berkeley; co-founder and chief scientist, Ambi Robotics

Two of my predictions for 2022 were accurate (the rise of tactile sensing and Sim2Real), but the division of labor between robots and humans is still evolving. On the other hand, I didn’t anticipate the quantum leap in performance of Transformer architectures behind Large Language Models (LLMs) such as ChatGPT and generative image models such as Stable Diffusion.

Here are three predictions for 2023:

Transformer architectures will have an increasing influence on robotics
LLMs learn by ingesting vast quantities of human-written text to set billions of weights in a transformer sequential network architecture. LLMs are not grounded in physical experience, but textual captions and images can be integrated to produce surprisingly interesting hybrid images. A recent project by Google researchers shows how LLMs can provide semantic links between human requests (“please help me clean up this spill”) and robot affordances (a sponge within reach). It’s not clear yet exactly how, but I think we’ll see Transformer architectures applied to robotics in 2023 using similarly large sample sizes, such as the examples of driving being collected by Google, Cruise, Toyota, Tesla, and others.
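The project referenced above pairs an LLM’s judgment of a skill’s relevance to a request with the robot’s own estimate of that skill’s feasibility in the current scene. A toy sketch of that scoring idea, with entirely made-up numbers:

```python
# Rank candidate skills by (how relevant the LLM thinks the skill is
# to the request) x (how feasible the robot estimates the skill is here).
# All scores below are invented for illustration.

def pick_skill(llm_relevance, affordance):
    """Return the skill maximizing relevance * feasibility."""
    scored = {s: llm_relevance[s] * affordance.get(s, 0.0) for s in llm_relevance}
    return max(scored, key=scored.get)

# Request: "please help me clean up this spill"
llm_relevance = {"find a sponge": 0.9, "pick up the apple": 0.1, "go to the kitchen": 0.6}
affordance = {"find a sponge": 0.8, "pick up the apple": 0.9, "go to the kitchen": 0.3}
print(pick_skill(llm_relevance, affordance))  # → "find a sponge" (0.9 * 0.8 = 0.72)
```

The key point is that neither signal alone suffices: the LLM supplies semantics, the robot supplies grounding, and the product combines them.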


ROS 2 will gain traction as a standard for industrial robots
The Open Source Robotics Corporation (OSRC) is dramatically revising ROS to make the new version much more reliable and compatible with industry standards. In December, Intrinsic, a division of Alphabet, acquired OSRC to combine forces, boost the speed, reliability and security of this standard, and integrate the latest advances in software architectures and cloud computing. The process will take longer than one year, but I think we’ll see ROS 2 taken much more seriously by major robotics and automation companies in 2023.

Indoor farming using agricultural robotics will mature
Advances in LED lighting and hydroponics developed for “recreational” crops are being adopted by indoor farming centers located in large warehouses close to urban centers. Robots can monitor plant conditions and fine-tune lighting and temperature, allowing fresh crops to be harvested every week. Indoor crops avoid pesticides and require far less water than traditional farming because there is little evaporation and almost no washing, and local farms reduce transportation costs. I look forward to eating more spotless fresh lettuce and produce in 2023.


Aaron Prather, director of robotics and autonomous systems program, ASTM International

2023 is going to be the year when there will be more opportunities for robotics research through numerous government programs. Two of the biggest will be the Manufacturing Extension Partnership (MEP) and Manufacturing USA, both of which are seeing boosts in no small part due to the CHIPS Act. Both programs will see massive increases in their federal funding: MEP will see an over 70% increase, while Manufacturing USA will see a whopping nearly 500% increase.

Much of the increase in funding for Manufacturing USA will go toward opening more institutes to join existing Manufacturing USA organizations like the ARM Institute in Pittsburgh, CESMII in Los Angeles, and MxD in Chicago. These new institutes will focus on the semiconductor industry, from materials to production to shipping. Each vertical will require automation and robotics research.

This does not mean the 16 existing Manufacturing USA institutes will go lacking. Not only are most of them seeing increases in their core funding, but they will also get funding from new sources to expand into more areas. All of this is going to lead to many more projects between industry, academia, and government.

Other organizations, like the National Science Foundation (NSF) and National Institutes of Health (NIH), are seeing increases in funding that could go into further robotics research.

One potential downside will be the selection criteria the U.S. government puts on this R&D work. Growing federal concern about Chinese technologies could limit who can participate in these funded projects. In October, the U.S. Department of Defense made its ban on Chinese drone maker DJI official; DJI is now one of several dozen Chinese companies the U.S. government deems too closely tied to China’s military.

However, the opportunities this funding creates for the robotics industry will be huge. The recent request by some of these institutes for more SMEs to participate, especially integrators and installers of automation equipment, shows how much impact this additional funding will have from the lab to the factory floor.



An Argo AI vehicle performing a driverless test ride in Austin. Argo AI shut down in October. | Source: Argo AI

William Sitch, chief business officer, MSA

Do you remember three years ago when the pandemic started, the workplace shut down, and the future of humanity was uncertain? It turns out that was a good time to raise money: U.S. VC investment in 2020 went up 15% over 2019. But 2021 was literally twice as nice for raising dollars – truly the golden age for starting companies!

In 2022, all that irrational exuberance came crashing down. Inflation roared, the Fed raised rates, and the markets turned blood red; the NASDAQ is down 35% as I write this. VC funding is now back to pre-exuberance levels. Exits mostly stopped.

Amid the bad macro backdrop, robots and automation got crushed. Firms shuttered, good engineers were laid off, and lots of autonomy work product was wasted. Maybe these failed concepts were problematic, or early, or whatever, but individuals and the industry suffer when these things happen. So with a boom-bust cycle reverting us back to the mean, here are my predictions for robotics and autonomy businesses in 2023:

More pain
The macro picture just doesn’t look good. The Fed will continue to raise rates, and I think our current recession will continue through midyear. Layoffs to extend startup runway will continue. More shutdowns will happen. Some big names are teetering on the brink and will fail in the first half of 2023. Fingers crossed for TuSimple.

More startups
AI is all the rage, AgTech is on a tear (farmland appreciated 14% from 2021), logistics automation is ROI-positive, and some failures will be the catalyst for new companies. Good ideas and bravery don’t just go away during times of pain, and there’s still money out there for raising. VC funding will be horrible in the first quarter but will accelerate through the end of the year.

Aggressive robotaxi expansion
I don’t buy the thesis that Argo’s shutdown was the end of AV. Cruise and Waymo have demonstrated product/market fit. Failures will happen, but vehicles will continue to get incrementally safer. GM says it will spend $2B on Cruise’s expansion into new markets with the purpose-built Origin. Alphabet’s Waymo One is also expanding. Zoox will launch. 2023 will be the first year robotaxis get mainstream awareness.

I just can’t with Tesla
$800B value destruction by a distracted CEO who needs a social media timeout. California, Euro- and U.S. federal and state regulators coming for Tesla FSD. Deaths attributed to driver error by last-millisecond autonomy disengagements. Traditional OEMs with positive brand equity showing up with L2+ and DMS. Headwinds for sure. Still, $18B net cash and $9B FCF in 2022 is remarkable. I’m out of my league here; I don’t know what’s going to happen and can’t make a prediction. It would be nice if they deployed radar, fused sensor data, and stopped running over mannequins.

Functionality becomes leading indicator of success
Restrictive ODDs, development during deployment and minimally-viable products that are too minimal – these things will cripple adoption, limit growth, and restrict funding. Robots that only work 95% of the time will cause companies to fail. Success will come to those who develop robust prototypes that work before mass deployment.

TL;DR I see 2023 as a second-half recovery story. I’d love to hear your feedback. Tell me how I’m wrong!



Deepu Talla, VP of embedded and edge computing, NVIDIA

Demand for intelligent robots will continue to grow as more industries embrace automation to address supply chain challenges and labor shortages. Developing and deploying these new AI-based robots drives the need for advanced simulation technology that places them in realistic scenarios, and we see two key trends emerging:

Millions of virtual proving grounds: photorealistic rendering and accurate physics modeling combined with the ability to simulate in parallel millions of instances of a robot on GPUs in the cloud will enable more robots to be trained and validated in virtual worlds. And generative AI techniques will make it easier to create highly realistic 3D simulation scenarios and further accelerate the adoption of simulation and synthetic data for developing more capable robots.

Expanding the horizon: the majority of robots today operate in constrained environments where there is minimal human activity. Advances in AI and edge computing will give robots multi-modal perception for better semantic understanding of their environments. Roboticists will be able to teach robots to perform increasingly complex tasks while making them faster, more flexible and safer to operate in collaboration with humans in dynamic environments. This will drive increased adoption in brownfield facilities and public spaces such as hospitals, hotels, retail stores and more.



Several folks from Tangram Vision, a startup that helps robotics companies solve perception challenges, sent us their thoughts.

Brandon Minor, CEO & co-founder
ROS 2 will eclipse ROS as the platform of choice for roboticists. This is partially due to the looming deprecation of ROS, but also because ROS 2 has seen significant development by the robotics community, making it a much more tenable solution.

There will be a lot more consolidation in the world of autonomous vehicles. Despite a flight to more constrained ODDs, a number of AV startups will still find themselves needing investment. A negative narrative around AVs, coupled with ambivalent investors, will unfortunately force them to sell, merge or close their doors.

Julie Matheney, director of marketing
Thermal cameras will transition from an exotic sensor choice to a typical sensor choice. As a result, we’ll see them as part of the sensor array on many more robotic and autonomous vehicle platforms in 2023.

Adam Rodnitzky, COO & co-founder
We’ll see attempts to use generative tools to create robotics code like ROS nodes. These initial attempts won’t be that successful, but they’ll be the first step towards generative code finding its way into the world of robotics.

Jeremy Steward, senior perception architect
Rust uptake will increase at robotics and AV companies. The Linux kernel just released a version with Rust in it, and more and more userspace libraries for working with ROS 2 from Rust are now available.

Robotics companies will be bearish on the hiring front, instead expecting existing engineering teams to output more code with fewer resources.


Joel Carter, chief marketing officer, Softeq; managing partner, Softeq Venture Fund

As contract developers for some of the world’s top tech and robotics companies, Softeq views 2023 as a breakout year for robotics in the 3D internet. Metaverse applications are real and now widely available for virtual robot design, creation, programming, and testing thanks to new tools like Omniverse from NVIDIA. The platform is compatible with the popular open-source Robot Operating System (ROS) and includes a terrific physics engine explicitly tuned for industrial automation applications. We’ll also continue seeing the widespread availability of edge and cloud AI/ML algorithms to make machines even faster, smarter, and more intuitive.


Keith Pfeifer, president, Aerobotix

Increased job satisfaction and retention for humans working with robots
Especially for jobs that humans find dull, dangerous and dirty, robots will continue to lessen the burden of performing these tasks. As a result, humans will be freed to perform jobs that are more interesting, including supervising the robots.

There’s been a fear that the growing use of robots will cause more unemployment, but to the contrary, the World Economic Forum believes there will be a net positive of 12 million jobs created for humans by the year 2025.

A reduction in workplace-related injuries and diseases
The EPA recently released a new human health assessment for hexavalent chromium, which can cause cancer and is just one of the many chemical compounds that workers are often exposed to during industrial processes. Robots exposed to contaminant substances obviously can’t contract the same kinds of diseases that humans can, nor can they suffer the many injuries that humans can and do in the workplace.

Workplace-related injuries and diseases will decrease for robot-friendly companies, and both employees and employers will be better off for it.

Companies using robots will save tremendous amounts of money
While the upfront costs to install automated systems can be significant, organizations that embrace robot technology will achieve major cost savings as a result of improved labor and time efficiencies. They’ll also have a safer work environment, which means lower insurance costs and less exposure to civil or criminal liability.

The post 2023 robotics predictions from industry experts appeared first on The Robot Report.

Watch a Cassie bipedal robot run 100 meters

Cassie, a bipedal robot developed at the Oregon State University (OSU) College of Engineering and produced by OSU-spinout company Agility Robotics, recently ran 100 meters with no falls in 24.73 seconds at OSU’s Whyte Track and Field Center. The robot established a Guinness World Record for the fastest 100 meters by a bipedal robot. 

The bipedal robot’s average speed was just over 4 m/s, slightly slower than its top speed because it started from a standing position and returned to that position after the sprint, a challenging aspect of developing Cassie, according to the researchers behind the robot. 

“Starting and stopping in a standing position are more difficult than the running part, similar to how taking off and landing are harder than actually flying a plane,” OSU AI Professor and collaborator on the project Alan Fern said. “This 100-meter result was achieved by a deep collaboration between mechanical hardware design and advanced artificial intelligence for the control of that hardware.”

Agility Robotics co-founder and CEO Damion Shelton will be keynoting RoboBusiness, which runs Oct. 19-20 in Santa Clara and is produced by WTWH Media, the parent company of The Robot Report. On Oct. 20 from 9-9:45 AM, Shelton will deliver a keynote called “Building Human-Centric Robots for Real-World Tasks.” Agility Robotics will also demo Digit during the session, as well as on the expo floor, and tease the next version of Digit that is due out this fall.

Cassie has knees that bend like an ostrich’s (the fastest-running bird on the planet, capable of about 43 mph) and has no cameras or external sensors, meaning the robot is blind to its environment and is not autonomous.

Since Cassie’s introduction in 2017, OSU students have been exploring machine learning options in Oregon State’s Dynamic Robotics and AI Lab, where Cassie has been learning how to run, walk and even go up and down stairs. To develop its robot control, the Dynamic Robotics and AI Lab melded physics with AI approaches that are typically used with data and simulation.

The team compressed a year’s worth of Cassie’s simulated training into just a week by using a computing technique called parallelization, in which multiple processes and calculations happen at the same time. This allowed Cassie to go through a range of training experiences simultaneously.
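As a rough illustration of the idea (not the lab’s actual setup), independent simulated episodes can be farmed out to worker processes so wall-clock time shrinks roughly with the number of workers. The “simulation” below is a trivial stand-in; real systems run thousands of physics simulations on GPUs.

```python
import random
from multiprocessing import Pool

def rollout(seed):
    """Stand-in for one simulated training episode; returns its reward.
    Deterministic per seed, so episodes are independent and repeatable."""
    rng = random.Random(seed)
    return sum(rng.uniform(0.0, 1.0) for _ in range(100))

def parallel_rollouts(n_episodes, workers=4):
    """Evaluate n_episodes rollouts across a pool of worker processes
    and return the mean reward."""
    with Pool(workers) as pool:
        rewards = pool.map(rollout, range(n_episodes))
    return sum(rewards) / len(rewards)

if __name__ == "__main__":
    print(f"mean reward over 64 parallel episodes: {parallel_rollouts(64):.1f}")
```

Because the episodes share no state, adding workers divides the elapsed time without changing the result, which is what makes the year-to-week compression possible.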

In 2021, Cassie traveled 5 kilometers in just over 53 minutes across OSU’s campus, untethered and on a single battery charge. During the run, Cassie used machine learning to control a running gait on outdoor terrain.

The bipedal robot was developed under the direction of Jonathan Hurst, an Oregon State robotics professor and the co-founder and chief technology officer of Agility Robotics, with a 16-month, $1 million grant from the Defense Advanced Research Projects Agency (DARPA) and additional funding from the National Science Foundation.

“This may be the first bipedal robot to learn to run, but it won’t be the last,” Hurst said. “I believe control approaches like this are going to be a huge part of the future of robotics. The exciting part of this race is the potential. Using learned policies for robot control is a very new field, and this 100-meter dash is showing better performance than other control methods. I think progress is going to accelerate from here.”

The post Watch a Cassie bipedal robot run 100 meters appeared first on The Robot Report.

Top 10 robotics stories of September 2022

Big acquisitions, bipedal robots and an FTC investigation captured your attention in September. 

Here are the 10 most popular robotics stories on The Robot Report in September. Subscribe to The Robot Report Newsletter to stay updated on the robotics stories you need to know about.



10. Sensor breakdown: how robot vacuums navigate

Over the past few years, robot vacuums have advanced immensely. Initial models tended to randomly bump their way around the room, often missing key areas on the floor during their runtime. Since those early days, these cons have turned into pros with the innovative use of sensors and motor controllers in combination with dedicated open-source software and drivers. Here is a look at some of the different sensors used in today’s robot vacuums for improved navigation and cleaning. Read More


9. How AI chipset bans could impact Chinese robotics companies

NVIDIA and AMD said that the United States government has ordered them to halt exports of certain AI chipsets to China, which is the world’s second-largest economy. Both companies now require licenses for the sale of AI chipsets to China. Read More



8. Doosan Robotics signs cobot distributor in Northeast

Doosan Robotics formed a strategic partnership with Industrial Automation Supply (IAS) in Portland, Maine. IAS will serve as a partner and reseller of Doosan’s M, H and A-SERIES collaborative robotic arms across the Northeast. Doosan’s four M-SERIES cobot models are all equipped with six torque sensors – one in each joint. The models have a working radius of 900 to 1,700 millimeters and a payload capacity of 6 to 15 kilograms. Read More


7. Will Tesla’s Optimus robot be transformative?

Let’s be frank, Optimus feels a bit dystopian, as if we’re all going to be imminently replaced by a sleek, slender, cold electronic robot. It feels like Optimus inhabits a world of beautiful black and white design, while the rest of us get to drive around in stainless-steel Cybertrucks overseeing our hole-drilling operations on Mars. Read More


6. Watch a Cassie bipedal robot run 100 meters

Cassie, a bipedal robot developed at the Oregon State University (OSU) College of Engineering and produced by OSU-spinout company Agility Robotics, recently ran 100 meters with no falls in 24.73 seconds at OSU’s Whyte Track and Field Center. The robot established a Guinness World Record for the fastest 100 meters by a bipedal robot. Read More


5. Amazon acquiring warehouse robotics maker Cloostermans

Amazon is continuing its acquisitions streak. Amazon has agreed to acquire Cloostermans, a Belgium-based company that specializes in mechatronics. Cloostermans has been selling products to Amazon since at least 2019, including technology Amazon uses in its operation to move and stack heavy pallets and totes and robots to package products for customer orders. Read More



4. The Gripper Company launches MAXXgrip

The Gripper Company officially launched MAXXgrip, its first gripper solution designed specifically for warehouse and logistics applications. The new MAXXgrip gripper uses a vacuum and four soft fingers that move to solve the problems robot grippers have with handling pieces in warehouse picking and sorting jobs where there are a lot of different kinds of items to handle. An articulating vacuum gripper is used for initial item acquisition, then the fingers are deployed to stabilize the gripped item during the transfer by the robot. Read More



3. Amazon testing pinch-grasping robots for e-commerce fulfillment

Robots picking items in Amazon’s warehouses need to be able to handle millions of different items of various shapes, sizes and weights. Right now, the company primarily uses suction grippers, which use air and a tight seal to lift items, but Amazon’s robotics team is developing a more flexible gripper to reliably pick up items suction grippers struggle to pick. Read More


2. FTC investigating Amazon’s acquisition of iRobot

The Federal Trade Commission (FTC) has officially started an antitrust investigation into Amazon’s plans to acquire robot vacuum maker iRobot for $1.7 billion. Politico reports the FTC is investigating a number of potential issues. The FTC’s investigation will reportedly focus on whether the data provided by iRobot’s Roomba robot vacuum gives Amazon an unfair advantage in the retail industry. Read More


1. Linux embracing Rust will boost robotics community

Linux’s Benevolent Dictator For Life Linus Torvalds recently mentioned that the Rust programming language would be used in the upcoming Linux 6.1 kernel. Currently, the Linux kernel is at preview version 6.0-rc6 (codenamed “Hurr durr I’ma ninja sloth”), so we have a bit of time before we all have Rust powering the kernel, but the mere announcement is newsworthy. It’s the author’s opinion that this embrace of Rust at the very core of Linux will be a huge boost to the robotics community. Read More

The post Top 10 robotics stories of September 2022 appeared first on The Robot Report.

A system for automating robot design inspired by the evolution of vertebrates

Researchers at Kyoto University and Nagoya University in Japan have recently devised a new, automatic approach for designing robots that could simultaneously improve their shape, structure, movements, and controller components. This approach, presented in a paper published in Artificial Life and Robotics, draws inspiration from the evolution of vertebrates, the broad category of animals that possess a backbone or spinal column, which includes mammals, reptiles, birds, amphibians, and fishes.

Exoskeleton walks out into the real world

For years, the Stanford Biomechatronics Laboratory has captured imaginations with their exoskeleton emulators—lab-based robotic devices that help wearers walk and run faster, with less effort. Now, these researchers will turn heads out in the "wild" with their first untethered exoskeleton, featured in a paper published Oct. 12 in Nature.

Stanford researchers create robotic boot that helps people walk

Engineers at Stanford University have created a boot-like robotic exoskeleton that can increase walking speed and reduce walking effort in the real world outside of the lab. The team’s research was published in Nature.

The exoskeleton gives users personalized walking assistance, allowing people to walk 9% faster and use 17% less energy per distance traveled. The energy savings and speed boost that the exoskeleton provides is equivalent to taking off a 30-pound backpack, according to the team. 

The goal is to help people with mobility impairments, especially older people, more easily move throughout the world, and the Stanford team believes its technology will be ready for commercialization in the next few years.

Using a motor that works with calf muscles, the robotic boot gives wearers an extra push with every step. The push is personalized using a machine learning-based model that was trained through years of work with emulators, or large, immobile and expensive lab setups that can rapidly test how to best assist people. 

Students and volunteers were hooked up to the exoskeleton emulators while researchers collected motion and energy expenditure data. This data helped the research team to understand how the way a person walks with the exoskeleton relates to how much energy they’re using. The team gained more details about the relative benefits of different kinds of assistance offered by the emulator, and used the information to inform a machine-learning model that the real-world exoskeleton now uses to adapt to each wearer. 

To adapt to an individual’s unique way of walking, the exoskeleton will provide a slightly different pattern of assistance each time the user walks. The exoskeleton then measures the resulting motion so that the machine learning model can determine how to better assist the user the next time they walk. In total, it takes the exoskeleton about an hour to customize its support to a new user. 
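The adapt-measure-update loop described above can be sketched as a simple human-in-the-loop search over one assistance parameter. The effort model, the optimum and the step size are all stand-ins; the real system optimizes several gait parameters against estimated metabolic energy.

```python
import random

def measured_effort(assistance, rng):
    """Stand-in for the wearer's measured energy use: lowest near a
    user-specific optimum (0.6 here), plus a little sensor noise."""
    return (assistance - 0.6) ** 2 + rng.gauss(0.0, 0.005)

def adapt(walks=30, step=0.1, seed=0):
    """Each walk, try a slightly different assistance pattern and keep
    it only if the wearer's measured effort dropped."""
    rng = random.Random(seed)
    best, best_cost = 0.0, measured_effort(0.0, rng)
    for _ in range(walks):
        trial = min(1.0, max(0.0, best + rng.uniform(-step, step)))
        cost = measured_effort(trial, rng)
        if cost < best_cost:
            best, best_cost = trial, cost
    return best

print(f"converged assistance level: {adapt():.2f}")
```

The noise term is why convergence takes many walks: near the optimum, the true effort differences shrink below measurement noise, which matches the article’s point that personalization takes about an hour of walking rather than a few steps.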

Moving forward, the Stanford researchers hope to test what the exoskeleton can do for its target demographic, older adults and people who are experiencing mobility decline from disability. The team also wants to plan design variations that target improving balance and reducing joint pain, and work with commercial partners to turn the device into a product. 

The post Stanford researchers create robotic boot that helps people walk appeared first on The Robot Report.