Keys to using ROS 2 & other frameworks for medical robots

What is the best architectural approach to use when developing medical robots? Is it ROS, ROS 2, or another open-source or commercial framework? The upcoming Robotics Summit & Expo (May 10-11 in Boston) will explore engineering questions concerning level of concern, risk, design controls, and evidence across several different applications of these frameworks.

In a session on May 10 from 2-2:45 PM, Tom Amlicke, Software Systems Engineer at MedAcuity, will discuss the “Keys to Using ROS 2 and Other Frameworks for Medical Robots.” Amlicke will look at three hypothetical robotic systems and explore these approaches:

  • 1. Taking an application based on the da Vinci Research Kit through regulatory clearance
  • 2. Using ROS as a test tool to verify the software requirements of a visual guidance system
  • 3. Using a commercial off-the-shelf robot arm in a medical application

If you attend his session, you will also learn how to weigh the trade-offs among these architectural approaches and how to validate their intended uses to ensure a successful submission package for FDA, EMA, or other regulatory approval.

Amlicke has 20-plus years of embedded and application-level development experience. He designs and deploys enterprise, embedded, and mobile solutions on Windows, Mac, iOS, and Linux/UNIX platforms using a variety of languages including C++. Amlicke takes a lead role on complex robotics projects, overseeing end-to-end development of ROS-based mobile robots and surgical robots.

You can find the full agenda for the Robotics Summit here. The Robotics Summit & Expo is the premier event for commercial robotics developers. There will be nearly 70 industry-leading speakers sharing their development expertise on stage during the conference, with 150-plus exhibitors on the show floor showcasing their latest enabling technologies, products, and services that help develop commercial robots. There also will be a career fair, networking opportunities and more.

5 top robotics trends to watch in 2023

The IFR's five global robotics trends with small icons representing them.

Robot installations hit an all-time high in 2021, with the International Federation of Robotics’ (IFR) data showing over 500,000 new industrial robots were installed that year. In North America, robot sales hit an all-time high for the second year in a row in 2022, according to the Association for Advancing Automation (A3), bringing in $2.38 billion.

With such a rapidly growing industry, it can be difficult to keep track of all the ways it’s changing as more robots are put to work. The IFR identified five trends it thinks will be big for the robotics industry in 2023.

1. Energy efficiency

Energy efficiency is key to improving companies’ competitiveness amid rising energy costs. Adopting robotics helps lower energy consumption in manufacturing in many ways. Compared to traditional assembly lines, considerable energy savings can be achieved through reduced heating. At the same time, robots work at high speed, increasing production rates so that manufacturing becomes more time- and energy-efficient.

Today’s robots are designed to consume less energy, which leads to lower operating costs. To meet sustainability targets for their production, companies use industrial robots equipped with energy-saving technology: robot controls are able to convert kinetic energy into electricity, for example, and feed it back into the power grid. This technology significantly reduces the energy required to run a robot. Another feature is a smart power-saving mode that controls the robot’s energy supply on demand throughout the workday. Since industrial facilities already need to monitor their energy consumption, such connected power sensors are likely to become an industry standard for robotic solutions.

2. Reshoring 

Resilience has become an important driver for reshoring in various industries: car manufacturers, for example, invest heavily in short supply lines to bring processes closer to their customers. These manufacturers use robot automation to manufacture powerful batteries cost-effectively and in large quantities to support their electric vehicle projects. These investments eliminate the need to ship heavy batteries, which matters as more and more logistics companies refuse to transport batteries for safety reasons.

Relocating microchip production back to the US and Europe is another reshoring trend. Since most industrial products nowadays require a semiconductor chip to function, a supply close to the customer is crucial. Robots play a vital role in chip manufacturing, as they meet the extreme precision requirements of semiconductor production. Specially designed robots automate silicon wafer fabrication, handle cleaning tasks, or test integrated circuits. Recent examples of reshoring are Intel’s new chip factories in Ohio and the recently announced chip plant in the Saarland region of Germany run by chipmaker Wolfspeed and automotive supplier ZF.

3. Robots becoming easier to use

Robot programming has become easier and more accessible to non-experts. Providers of software-driven automation platforms now let users manage industrial robots with no prior programming experience. Original equipment manufacturers work hand-in-hand with low-code or even no-code technology partners that allow users of all skill levels to program a robot.

Easy-to-use software paired with an intuitive user experience replaces extensive robotics programming and opens up new robotics automation opportunities: software startups are entering this market with specialized solutions for the needs of small and medium-sized companies. For example, a traditional heavyweight industrial robot can be equipped with sensors and new software that allows collaborative operation. This makes it easy for workers to adjust heavy machinery to different tasks. Companies thus get the best of both worlds: robust and precise industrial robot hardware and state-of-the-art cobot software.

Easy-to-use programming interfaces that allow customers to set up robots themselves are also driving the emerging segment of low-cost robotics. Many new customers reacted to the pandemic in 2020 by trying out robotic solutions, and robot suppliers acknowledged this demand: easy setup and installation, for instance with pre-configured software to handle grippers, sensors, or controllers, supports lower-cost robot deployment. Such robots are often sold through web shops, and program routines for various applications are downloadable from an app store.

4. Artificial Intelligence (AI) and digital automation

Propelled by advances in digital technologies, robot suppliers and system integrators offer new applications and improve the speed and quality of existing ones. Connected robots are transforming manufacturing. Robots will increasingly operate as part of a connected digital ecosystem: cloud computing, big data analytics, and 5G mobile networks provide the technological base for optimized performance. The 5G standard will enable fully digitalized production, making cables on the shop floor obsolete.

Artificial intelligence (AI) holds great potential for robotics, enabling a range of benefits in manufacturing. The main aim of using AI in robotics is to better manage variability and unpredictability in the external environment, either in real time or offline. As a result, machine learning plays an increasing role in software offerings, benefiting running systems through optimized processes, predictive maintenance, or vision-based gripping, for example.

This technology helps manufacturers, logistics providers, and retailers deal with frequently changing products, orders, and stock. The greater the variability and unpredictability of the environment, the more likely it is that AI algorithms will provide a cost-effective and fast solution – for example, for manufacturers or wholesalers dealing with millions of different products that change on a regular basis. AI is also useful in environments in which mobile robots need to distinguish between the objects or people they encounter and respond to each differently.

5. Second life for industrial robots

Since an industrial robot has a service life of up to 30 years, new tech equipment presents a great opportunity to give old robots a “second life.” Industrial robot manufacturers like ABB, Fanuc, KUKA, and Yaskawa run specialized repair centers close to their customers to refurbish or upgrade used units in a resource-efficient way. This prepare-to-repair strategy also saves costs and resources for robot manufacturers and their customers. Offering customers long-term repair is an important contribution to the circular economy.

How ChatGPT can control robots

Microsoft researchers controlled this robotic arm using ChatGPT. | Credit: Microsoft

By now, you’ve likely heard of ChatGPT, OpenAI’s language model that can generate somewhat coherent responses to a variety of prompts and questions. It’s primarily being used to generate text, translate information, make calculations and explain topics you’re looking to learn about.

Researchers at Microsoft, which has invested billions into OpenAI and recently integrated ChatGPT into its Bing search engine, extended the capabilities of ChatGPT to control a robotic arm and aerial drone. Earlier this week, Microsoft released a technical paper that describes a series of design principles that can be used to guide language models toward solving robotics tasks.

“It turns out that ChatGPT can do a lot by itself, but it still needs some help,” Microsoft wrote about its ability to program robots.

Prompting LLMs for robotics control poses several challenges, Microsoft said, such as providing a complete and accurate description of the problem, identifying the right set of allowable function calls and APIs, and biasing the answer structure with special arguments. To make effective use of ChatGPT for robotics applications, the researchers constructed a pipeline composed of the following steps, sketched in code after the list:

  • 1. First, they defined a high-level robot function library. This library can be specific to the form factor or scenario of interest and should map to actual implementations on the robot platform while being named descriptively enough for ChatGPT to follow.
  • 2. Next, they built a prompt for ChatGPT that described the objective while also identifying the set of allowed high-level functions from the library. The prompt could also contain information about constraints or about how ChatGPT should structure its responses.
  • 3. The user stayed in the loop to evaluate the code output by ChatGPT, either through direct analysis or through simulation, and provided feedback to ChatGPT on the quality and safety of the output code.
  • 4. After iterating on the ChatGPT-generated implementations, the final code can be deployed onto the robot.
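
As a rough illustration of those four steps, here is a minimal Python sketch. The robot function names (move_to, grasp, release, get_block_position), the prompt wording, and the model name are hypothetical, not Microsoft’s actual library; only the OpenAI SDK call pattern is real.

```python
# Illustrative sketch of the four-step prompting pipeline above.
# The function library and prompt are assumptions, not Microsoft's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: a high-level robot function library, named descriptively
# enough for the model to reason about. Real implementations would
# map these to calls on the robot platform.
FUNCTION_LIBRARY = """
move_to(x, y, z)          # move the end effector to a position
grasp()                   # close the gripper
release()                 # open the gripper
get_block_position(name)  # return (x, y, z) of a named block
"""

# Step 2: a prompt stating the objective, the allowed functions,
# and constraints on how the answer should be structured.
prompt = (
    "You control a robot arm. You may ONLY call these functions:\n"
    f"{FUNCTION_LIBRARY}\n"
    "Write Python code that stacks block_a on top of block_b. "
    "Respond with code only."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
generated_code = response.choices[0].message.content

# Steps 3 and 4: a human reviews the generated code, directly or in
# simulation, and only then deploys it onto the robot.
print(generated_code)
```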

Examples of ChatGPT controlling robots

In one example, Microsoft researchers used ChatGPT in a manipulation scenario with a robot arm. They used conversational feedback to teach the model how to compose the originally provided APIs into more complex, high-level functions that ChatGPT coded by itself. Using a curriculum-based strategy, the model was able to chain these learned skills together logically to perform operations such as stacking blocks.

The model was also able to build the Microsoft logo out of wooden blocks. It recalled the Microsoft logo from its internal knowledge base, “drew” the logo as SVG code, and then used the skills learned above to figure out which existing robot actions could compose its physical form.

Researchers also tried to control an aerial drone using ChatGPT. First, they fed ChatGPT a rather long prompt laying out the computer commands it could write to control the drone. After that, the researchers could make requests to instruct ChatGPT to control the robot in various ways. This included asking ChatGPT to use the drone’s camera to identify a drink, such as coconut water and a can of Coca-Cola. It was also able to write code structures for drone navigation based solely on the prompt’s base APIs, according to the researchers.

“ChatGPT asked clarification questions when the user’s instructions were ambiguous and wrote complex code structures for the drone such as a zig-zag pattern to visually inspect shelves,” the team said.

Microsoft said it also applied this approach to a simulated domain, using the Microsoft AirSim simulator. “We explored the idea of a potentially non-technical user directing the model to control a drone and execute an industrial inspection scenario. We observe from the following excerpt that ChatGPT is able to effectively parse intent and geometrical cues from user input and control the drone accurately.”

Key limitation

The researchers did admit this approach has a major limitation: ChatGPT can only write code for the robot based on the initial prompt the human gives it. A human engineer has to thoroughly explain to ChatGPT how the robot’s application programming interface works; otherwise, it will struggle to generate applicable code.

“We emphasize that these tools should not be given full control of the robotics pipeline, especially for safety-critical applications. Given the propensity of LLMs to eventually generate incorrect responses, it is fairly important to ensure solution quality and safety of the code with human supervision before executing it on the robot. We expect several research works to follow with the proper methodologies to properly design, build and create testing, validation and verification pipelines for LLM operating in the robotics space.

“Most of the examples we presented in this work demonstrated open perception-action loops where ChatGPT generated code to solve a task, with no feedback provided to the model afterwards. Given the importance of closed-loop controls in perception-action loops, we expect much of the future research in this space to explore how to properly use ChatGPT’s abilities to receive task feedback in the form of textual or special-purpose modalities.”

Microsoft said its goal with this research is to see if ChatGPT can think beyond text and reason about the physical world to help with robotics tasks.

“We want to help people interact with robots more easily, without needing to learn complex programming languages or details about robotic systems. The key challenge here is teaching ChatGPT how to solve problems considering the laws of physics, the context of the operating environment, and how the robot’s physical actions can change the state of the world.”

Luxonis releases DepthAI ROS driver

Luxonis offers high-resolution cameras with depth vision and on-chip machine learning. | Source: Luxonis

Luxonis announced the release of its newest DepthAI ROS driver for its stereo depth OAK cameras. The driver aims to make the development of ROS-based software easier. 

When using the DepthAI ROS driver, almost everything is parameterized with ROS 2 parameters and dynamic reconfigure, which provides the flexibility to help users customize OAK cameras to their unique use cases.

The DepthAI ROS driver is being developed for ROS 2 Humble and ROS 1 Noetic, allowing users to take advantage of ROS composition and nodelet mechanisms. The driver supports both 2D and spatial detection as well as semantic segmentation networks.

The driver offers several modes that users can run their cameras in, depending on their use case. For example, users can have the camera publish spatial neural network detections and an RGBD point cloud. Alternatively, users can stream data straight from the sensors for host-side processing, calibration, and modular camera setup.


With the driver, users can set parameters at runtime, such as exposure and focus for individual cameras and IR LED power for better depth accuracy and night vision. This also allows users to experiment with onboard depth filter parameters.
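
As a rough sketch of how such a runtime change could be scripted, the snippet below calls the standard ROS 2 set_parameters service from rclpy. The node name /oak and the IR brightness parameter name are assumptions modeled on typical DepthAI ROS setups, so verify them with ros2 param list before use.

```python
# Hypothetical sketch: set an OAK IR parameter at runtime via the
# standard ROS 2 set_parameters service. The node name and parameter
# name are assumptions; check them with `ros2 param list`.
import rclpy
from rclpy.node import Node
from rcl_interfaces.srv import SetParameters
from rcl_interfaces.msg import Parameter, ParameterValue, ParameterType

rclpy.init()
node = Node("oak_param_setter")

client = node.create_client(SetParameters, "/oak/set_parameters")
client.wait_for_service()

param = Parameter(
    name="camera.i_laser_dot_brightness",  # assumed IR dot projector power
    value=ParameterValue(
        type=ParameterType.PARAMETER_INTEGER,
        integer_value=800,
    ),
)
future = client.call_async(SetParameters.Request(parameters=[param]))
rclpy.spin_until_future_complete(node, future)
node.get_logger().info(f"result: {future.result().results}")

node.destroy_node()
rclpy.shutdown()
```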

The driver enables encoding to save bandwidth with compressed images and provides an easy way to integrate a multi-camera setup. It also provides Docker support for easy integration; users can build an image themselves or use one from Luxonis’ Docker Hub repository.

Users can also reconfigure their cameras quickly and easily using ‘stop’ and ‘start’ services. The driver also allows users to run low-quality streams and switch to higher quality when needed, or to switch between different neural networks to get their robot the data it needs.
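
A minimal rclpy sketch of that stop/start cycle follows; the service names and the std_srvs Trigger type are assumptions based on common ROS 2 conventions, so confirm the real names with ros2 service list.

```python
# Hypothetical sketch: stop and restart the camera through the driver's
# described 'stop'/'start' services. Service names are assumptions;
# confirm them with `ros2 service list`.
import rclpy
from rclpy.node import Node
from std_srvs.srv import Trigger

rclpy.init()
node = Node("oak_restart_client")

for srv_name in ("/oak/stop_camera", "/oak/start_camera"):
    client = node.create_client(Trigger, srv_name)
    client.wait_for_service()
    future = client.call_async(Trigger.Request())
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(f"{srv_name}: success={future.result().success}")

node.destroy_node()
rclpy.shutdown()
```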

Earlier this month, Luxonis announced a partnership with ams OSRAM. As part of the partnership, Luxonis will use ams OSRAM’s Belago 1.1 dot projector in its 3D vision solutions for automatic guided vehicles (AGVs), robots, drones and more.

A new approach to improve robot navigation in crowded environments

While robots have become increasingly advanced over the past few years, most of them are still unable to reliably navigate very crowded spaces, such as public areas or roads in urban environments. To be deployed at large scale in the smart cities of the future, however, robots will need to be able to navigate these environments both reliably and safely, without colliding with humans or nearby objects.