Soft robots harness viscous fluids for complex motions

One of the virtues of untethered soft robots is their ability to mechanically adapt to their surroundings and tasks, making them ideal for a range of roles, from tightening bolts in a factory to conducting deep-sea exploration. Now they are poised to become even more agile and controlled.

Magnetic microrobot can measure both cell stiffness and traction

Scientists have developed a tiny mechanical probe that can measure the inherent stiffness of cells and tissues as well as the internal forces the cells generate and exert on one another. Their new "magnetic microrobot" is the first such probe to be able to quantify both properties, the researchers report, and will aid in understanding cellular processes associated with development and disease.

A continuum robot inspired by elephant trunks

Conventional robots based on separate joints do not always perform well in complex real-world tasks, particularly those that involve the dexterous manipulation of objects. Some roboticists have thus been trying to devise continuum robots, robotic platforms characterized by infinite degrees of freedom and no fixed number of joints.

Teaching old robots new tricks

Robots, and in particular industrial robots, are programmed to perform certain functions. The Robot Operating System (ROS) is a very popular framework that facilitates asynchronous coordination between a robot and other drives and/or devices. ROS has been a go-to framework for developing advanced capabilities across the robotics sector.

Southwest Research Institute (SwRI) and the ROS-I community often develop applications in ROS 2, the successor to ROS 1. In many cases, particularly where legacy application code is used, bridging back to ROS 1 is still very common and remains one of the challenges in supporting the adoption of ROS for industry. This post does not aim to explain ROS, or the journey of migrating to ROS 2, in detail; for background, I invite you to read the following blogs by my colleagues and our partners at Open Robotics/Open Source Robotics Foundation.

Giving an old robot a new purpose

Robots have been manufactured since the 1950s and, logically, newer versions arrive over time with better properties and performance than their ancestors. And this is where the question comes in: how can you give new capabilities to those older but still functional robots?

This question has become more important as the circular economy gains momentum, along with the understanding that the carbon footprint of manufacturing a new robot can be offset by reusing a functional one. Each robot has its own capabilities and limitations, and those must be taken into account. Still, the question of “can I bring new life to this old robot?” always comes up, and this exact use case arose recently here at SwRI.

Confirming views of the camera-to-robot calibration. | Credit: ROS-Industrial

In the lab, an older Fanuc robot seemed to be a good candidate to set up a system that could demonstrate basic Scan-N-Plan capabilities in an easy-to-digest way with this robot that would be constantly available for testing and demonstrations. The particular system was a demo unit from a former integration company and included an inverted Fanuc robot manufactured in 2008.

The demo envisioned for this system would be a basic Scan-N-Plan implementation that would locate and execute the cleaning of a mobile phone screen. Along the way, we encountered several obstacles that are described below.

Driver updates

Let’s talk first about drivers. A driver is a software component that lets the operating system and a device communicate with each other. Each robot has its own drivers that allow it to communicate properly with whatever is going to instruct it on how to move. Handling drivers differs between a computer and a robot, however, because a computer’s drivers can be updated faster and more easily than a robot’s.

When device manufacturers identify errors, they create a driver update to correct them. On a computer, you are notified when a new update is available; you accept it and the computer starts updating. But in the world of industrial robots, including the Fanuc in our lab, you need to manually upload the driver and the supporting software options to the robot controller. Once the driver software and options are installed, a fair amount of testing is needed to understand how the changes affect the rest of the system. In certain situations you may receive a robot with the options needed to facilitate external system communication already installed; however, it is always advisable to check and confirm functionality.

With the passing of time, an older robot will not communicate as quickly as newer versions of the same model, so to obtain the best results you will want to update your communication drivers, if updates are available. The Fanuc robot comes with a controller that lets you operate it manually via a teach pendant that is in the user’s hand at all times. It can also be set to automatic mode, in which it will execute its instructions after a simple cycle start, but all safety systems must be functional and in the proper state for the system to operate.

Rapid reporting of the robot’s state is very important for the computer’s software (in this case, our ROS application) to know where the robot is and whether it is performing its instructions correctly. This position is commonly known as the robot pose. For robotic arms, the information can be broken down into joint states, and an old robot may report these joint states in auto mode more slowly than the ROS-based software on your computer expects. One way to solve this slow reporting is to update the drivers or to add the correct configuration to your robot’s controller, but that is not always possible or feasible.
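One way to gauge whether an older controller keeps up is to estimate the effective joint-state rate from message arrival times. Below is a minimal pure-Python sketch of that check; the timestamps and the 10 Hz requirement are hypothetical values for illustration, and on a live ROS 2 system you would typically just run `ros2 topic hz /joint_states` instead.

```python
import statistics

def effective_rate_hz(timestamps):
    """Estimate publish rate (Hz) from consecutive message arrival times in seconds."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return 1.0 / statistics.mean(gaps)

# Hypothetical joint_states arrival times from an older controller in auto mode
stamps = [0.00, 0.26, 0.51, 0.77, 1.02]
rate = effective_rate_hz(stamps)  # ≈ 3.9 Hz

REQUIRED_HZ = 10.0  # assumed rate the ROS-side motion software expects
if rate < REQUIRED_HZ:
    print(f"joint_states arriving at {rate:.1f} Hz; "
          f"below the {REQUIRED_HZ:.0f} Hz the application expects")
```

A check like this makes the symptom concrete before you decide between updating the controller’s driver options or, as we did, working around the delay elsewhere in the system.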

Updated location of the RGB-D camera in the Fanuc cell. | Credit: ROS-Industrial

Another way to make the robot move as expected is to calibrate the robot to an RGB-D camera. To accomplish this, place the robot in a strategic position so that most of it is visible to the camera. Then view the camera’s projection and compare it to the URDF, the file that represents the model of the robot in simulation. With both representations visible, in Rviz for example, you can adjust the origin of the camera_link until the projection aligns with the URDF.
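Adjusting the camera_link origin amounts to tuning a rigid transform between the camera frame and the robot base frame. The sketch below shows the underlying math in plain Python, with a yaw-only rotation for brevity; the translation and rotation values are hypothetical, and in practice you would encode the result in a static transform or the URDF and recheck alignment in Rviz.

```python
import math

def transform(x, y, z, yaw):
    """4x4 homogeneous transform: translation plus rotation about Z (yaw only)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c,   -s,  0.0, x],
            [s,    c,  0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

def apply(T, p):
    """Map a point (x, y, z) through the transform into the target frame."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3]
                 for i in range(3))

# Hypothetical camera_link origin guess: 1.5 m forward, 2 m up, yawed 180 deg
T_base_camera = transform(1.5, 0.0, 2.0, math.pi)

# A point the camera sees 1 m along its own X axis, expressed in the base frame
print(apply(T_base_camera, (1.0, 0.0, 0.0)))  # ≈ (0.5, 0.0, 2.0)
```

Nudging the origin parameters until observed points land where the URDF says they should is exactly the manual alignment loop described above, just without Rviz in the middle.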

For the Scan-N-Plan application, the RGB-D camera was originally mounted on part of the robot’s end effector. When we encountered the joint state delay, however, the camera was moved to a strategic position on the roof of the robot’s enclosure, where it could view both the base and the Fanuc robot for calibration to the simulation model, as can be seen in the photos below. In addition, we set the robot to manual mode, in which the user held the teach pendant and told the robot to start executing the instructions generated by the ROS-based Scan-N-Plan program.

Where we landed and what I learned

While not as easy as a project on “This Old House,” you can teach an old robot new tricks. It is very important to know your robot’s control platform. A problem may lie not in your code but in the robot itself, so it is always good to confirm that the robot, its controller, and its software work well, and then seek alternatives that enable the new functionality within the constraints of your available hardware.

Though not always the most efficient path to a solution, older robots can deliver value when you design the approach systematically and work within the constraints of your hardware, taking advantage of the tools available, in particular those in the ROS ecosystem.

About the Author

Bryan Marquez was an engineering intern in the robotics department at the Southwest Research Institute.

The post Teaching old robots new tricks appeared first on The Robot Report.

Luminar launches 3D mapping software

Luminar is launching 3D mapping software built off technology it acquired from Civil Maps. | Source: Luminar

Luminar, an automotive technology development company, is expanding its software offerings to include high-definition, 3D maps that update automatically and are built from production vehicles also powered by Luminar software and hardware.

Luminar is making use of the technology it picked up in the second quarter of 2022, when it acquired Civil Maps, a developer of LiDAR maps for automotive uses. The company is demonstrating its new 3D mapping technology platform at CES on Luminar-equipped vehicles that are roaming around Las Vegas.

Luminar has already signed its first mapping customer, which the company did not name. The customer will use Luminar’s data to further improve its AI engine and will also help improve Luminar’s perception software.

Luminar’s other offerings include its Sentinel software stack for consumer vehicles, which comprises the company’s LiDAR-based Proactive Safety hardware and software product as well as its LiDAR-based Highway Automation software.

Two vehicle models featuring Luminar’s software are making their North American debut at CES this year. The Volvo EX90, an all-electric SUV that includes the company’s software and hardware as standard on every vehicle, is being shown in the U.S. for the first time. Luminar’s Iris LiDAR is integrated into the vehicle’s roof line.

Additionally, SAIC’s Rising Auto R7, which started production in China last October, also uses Luminar’s technology. SAIC is one of China’s largest automakers.

“2022 marked an inflection point for Luminar, as the first of its kind to move from R&D to production vehicles,” Austin Russell, founder and CEO of Luminar, said. “Our big bet on production consumer vehicles and enhancing, not replacing, the driver is starting to pay off big time. I expect Luminar to make a sweeping impact in 2023 as the automotive industry continues to converge with our roadmap.”

The post Luminar launches 3D mapping software appeared first on The Robot Report.