Teaching old robots new tricks

Robots, and in particular industrial robots, are programmed to perform certain functions. The Robot Operating System (ROS) is a very popular framework that facilitates asynchronous coordination between a robot and other drives and devices. ROS has become a go-to framework for developing advanced capability across the robotics sector.

Southwest Research Institute (SwRI) and the ROS-I community often develop applications in ROS 2, the successor to ROS 1. In many cases, particularly where legacy application code is used, bridging back to ROS 1 is still very common, and that remains one of the challenges in supporting the adoption of ROS in industry. This post does not aim to explain ROS, or the journey of migrating to ROS 2, in detail; if you are interested in that background, I invite you to read the blogs by my colleagues and our partners at Open Robotics/Open Source Robotics Foundation.

Giving an old robot a new purpose

Robots have been manufactured since the 1950s and, logically, newer versions with better properties and performance than their ancestors appear over time. And this is where the question comes in: how can you give new capability to those older but still functional robots?

This question has become more important as the circular economy has gained momentum, along with the understanding that some of the carbon footprint of manufacturing a new robot can be offset by reusing a functional one. Each robot has its own capabilities and limitations, and those must be taken into account. Still, the question of “can I bring new life to this old robot?” always comes up, and this exact use case arose recently here at SwRI.

Confirming views of the camera to robot calibration. | Credit: ROS Industrial

In the lab, an older Fanuc robot seemed to be a good candidate for a system that could demonstrate basic Scan-N-Plan capabilities in an easy-to-digest way and be constantly available for testing and demonstrations. The particular system was a demo unit from a former integration company and included an inverted Fanuc robot manufactured in 2008.

The demo envisioned for this system would be a basic Scan-N-Plan implementation that would locate and execute the cleaning of a mobile phone screen. Along the way, we encountered several obstacles that are described below.

Driver updates

Let’s talk first about the drivers. A driver is a software component that lets the operating system and a device communicate with each other. Each robot has its own drivers to properly communicate with whatever is going to instruct it on how to move. Handling a robot’s driver is different from handling a computer’s driver, because a computer’s driver can be updated more quickly and easily than a robot’s.

When device manufacturers identify errors, they create a driver update to correct them. On a computer, you are notified when a new update is available; you accept it and the update installs. In the world of industrial robots, including the Fanuc in the lab here, you need to manually upload the driver and the supporting software options to the robot controller. Once the driver software and options are installed, a fair amount of testing is needed to understand how the changes affected the rest of the system. In some cases you may receive a robot with the options needed for external system communication already installed; however, it is always advisable to check and confirm functionality.

An older robot will also not communicate as fast as newer versions of the same model, so to get the best results you will want to update your communication drivers, if updates are available. The Fanuc robot comes with a controller that lets you operate it manually via a teach pendant held in the user’s hand at all times. It can also be set to automatic mode, where it executes its instructions from a simple cycle start, but all safety systems need to be functional and in the proper state for the system to operate.

Rapid reporting of the robot’s state is very important for the computer’s software (in this case our ROS application) to know where the robot is and whether it is executing its instructions correctly. This position is commonly known as the robot pose. For robotic arms, the information is broken out into joint states, and your laptop will probably have an issue with an old robot because, while in auto mode, it reports these joint states more slowly than the ROS-based software on the computer expects. One way to address the slow reporting is to update the drivers or to add the correct configuration options to your robot’s controller, but that is not always possible or feasible.
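Before updating drivers or controller options, it helps to quantify the reporting rate you are actually getting. The sketch below is a minimal ROS 2 (rclpy) node that measures how frequently JointState messages arrive; it assumes the driver publishes sensor_msgs/JointState on the conventional /joint_states topic, which may differ for a given robot driver.

    # Minimal ROS 2 node that measures how fast joint states are being reported.
    # Assumes the robot driver publishes sensor_msgs/JointState on /joint_states;
    # the topic name may differ depending on the driver in use.
    import time

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import JointState


    class JointStateRateMonitor(Node):
        def __init__(self):
            super().__init__('joint_state_rate_monitor')
            self._stamps = []
            self.create_subscription(JointState, '/joint_states', self._on_joint_state, 10)
            self.create_timer(1.0, self._report)  # report the observed rate once per second

        def _on_joint_state(self, msg):
            self._stamps.append(time.monotonic())

        def _report(self):
            now = time.monotonic()
            # Keep only the messages received within the last second.
            self._stamps = [t for t in self._stamps if now - t <= 1.0]
            self.get_logger().info(f'/joint_states rate: ~{len(self._stamps)} Hz')


    def main():
        rclpy.init()
        rclpy.spin(JointStateRateMonitor())


    if __name__ == '__main__':
        main()

The same quick check can be done from the command line with ros2 topic hz /joint_states.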

Updated location of the RGB-D camera in the Fanuc cell. | Credit: ROS-Industrial

Another way to make the robot move as expected is to calibrate the robot to an RGB-D camera. To accomplish this, you place the robot in a strategic position so that most of it is visible to the camera. Then you view the camera’s projection and compare it to the URDF, the file that represents the robot model in simulation. With both representations loaded, in RViz for example, you can adjust the origin of the camera_link until the projection is aligned with the URDF.
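In ROS 2, that camera-to-robot offset is typically published as a static transform that you hand-tune while watching the overlay in RViz. The launch file below is a minimal sketch of that idea; only the camera_link frame comes from the setup described above, while the base_link frame name and the numeric offsets are assumptions to be replaced with your own values.

    # A minimal ROS 2 launch file (a sketch, not this project's actual setup) that
    # publishes a hand-tuned static transform from the robot base frame to the
    # camera_link frame. The base_link name and the numeric offsets are assumptions;
    # adjust them until the camera's point cloud overlays the URDF in RViz.
    from launch import LaunchDescription
    from launch_ros.actions import Node


    def generate_launch_description():
        return LaunchDescription([
            Node(
                package='tf2_ros',
                executable='static_transform_publisher',
                name='camera_extrinsics',
                # Classic argument order: x y z yaw pitch roll parent_frame child_frame.
                # Newer tf2_ros releases prefer named flags such as --x and --frame-id.
                arguments=['1.2', '0.0', '2.1', '0.0', '1.57', '0.0',
                           'base_link', 'camera_link'],
            ),
        ])

Each time you adjust the numbers, relaunch and check the live point cloud against the robot model in RViz until the two agree.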

For the Scan-N-Plan application, the RGB-D camera was originally mounted on part of the robot’s end effector. When we encountered the joint state delay, the camera was moved to a strategic position on the roof of the robot’s enclosure, where it could view the base and the Fanuc robot for calibration to the simulation model, as can be seen in the photos below. In addition, we set the robot to manual mode, where the user held the controller and told the robot to start the set of instructions generated by the ROS-based Scan-N-Plan program.

Where we landed and what I learned

While not as easy as a project on “This Old House,” you can teach an old robot new tricks. It is very important to know your robot’s control platform. A problem may not be with your code but with the robot itself, so it is always good to make sure the robot, its controller and its software work well, and then seek alternatives to enable the new functionality within the constraints of your available hardware.

Though not always the most efficient path to a solution, older robots can deliver value when you design the approach systematically and work within the constraints of your hardware, taking advantage of the tools available, in particular those in the ROS ecosystem.

About the Author

Bryan Marquez was an engineering intern in the robotics department at Southwest Research Institute.


Luminar launches 3D mapping software


Luminar is launching 3D mapping software built off technology it acquired from Civil Maps. | Source: Luminar

Luminar, an automotive technology development company, is expanding its software offerings to include high-definition, 3D maps that update automatically and are built from production vehicles also powered by Luminar software and hardware.

Luminar is making use of the technology it picked up in the second quarter of 2022, when it acquired Civil Maps, a developer of LiDAR maps for automotive uses. The company is demonstrating its new 3D mapping technology platform at CES on Luminar-equipped vehicles that are roaming around Las Vegas.

Luminar has already signed its first mapping customer, which the company did not name. The customer will use Luminar’s data to further improve its AI engine and will also help to improve Luminar’s perception software.

Luminar’s other offerings include its Sentinel software stack for consumer vehicles, which is made up of the company’s LiDAR-based Proactive Safety hardware and software product, as well as its LiDAR-based Highway Automation software.


Two vehicle models featuring Luminar’s software are making their North American debut at CES this year. The Volvo EX90, an all-electric SUV that includes the company’s software and hardware as standard on every vehicle, is being shown in the US for the first time. Luminar’s Iris LiDAR is integrated into the roof line of the vehicle.

Additionally, SAIC’s Rising Auto R7, which started production in China last October, also uses Luminar’s technology. SAIC is one of China’s largest automakers.

“2022 marked an inflection point for Luminar, as the first of its kind to move from R&D to production vehicles,” Austin Russell, founder and CEO of Luminar, said. “Our big bet on production consumer vehicles and enhancing, not replacing, the driver is starting to pay off big time. I expect Luminar to make a sweeping impact in 2023 as the automotive industry continues to converge with our roadmap.”


Inuitive sensor modules bring VSLAM to AMRs


Inuitive introduces the M4.5S (center) and M4.3WN (right) sensor modules that add VSLAM for AMRs and AGVs.

Inuitive, an Israel-based developer of vision-on-chip processors, launched its M4.5S and M4.3WN sensor modules. Designed to integrate into robots and drones, both sensor modules are built around the NU4000 vision-on-chip (VoC) processor, which adds depth sensing and image processing with AI and Visual Simultaneous Localization and Mapping (VSLAM) capabilities.

The M4.5S provides robots with enhanced depth-from-stereo sensing along with obstacle detection and object recognition. It features a field of view of 88×58 degrees, a minimum sensing range of 9 cm (3.54 in.) and a wide operating temperature range of up to 50 degrees Celsius (122 degrees Fahrenheit). The M4.5S supports the Robot Operating System (ROS) and has an SDK that is compatible with Windows, Linux and Android.

The M4.3WN features tracking and VSLAM navigation based on fisheye cameras and an IMU together with depth sensing and on-chip processing. This enables free navigation, localization, path planning, and static and dynamic obstacle avoidance for AMRs and AGVs. The M4.3WN is designed in a metal case to serve in industrial environments.

“Our new all-in-one sensor modules expand our portfolio targeting the growing market of autonomous mobile robots. Together with our category-leading vision-on-chip processor, we now enable robotic devices to look at the world with human-like visual understanding,” said Shlomo Gadot, CEO and co-founder of Inuitive. “Inuitive is fully committed to continuously developing the best performing products for our customers and becoming their supplier of choice.”

The M4.5S and the M4.3WN sensor modules’ primary processing unit is Inuitive’s all-in-one NU4000 processor. Both modules are equipped with depth and RGB sensors that are controlled and timed by the NU4000. Data generated by the sensors is processed in real time at a high frame rate by the NU4000 and then used to generate depth information for the host device.


Researchers develop AV object detection system with 96% accuracy

A Waymo autonomous vehicle. | Source: Waymo

An international research team at Incheon National University in South Korea has created an Internet of Things (IoT)-enabled, real-time object detection system that can detect objects with 96% accuracy. 

The team of researchers created an end-to-end neural network that works with their IoT technology to detect objects with high accuracy in 2D and in 3D. The system is based on deep learning specialized for autonomous driving situations. 

“For autonomous vehicles, environment perception is critical to answer a core question, ‘What is around me?’ It is essential that an autonomous vehicle can effectively and accurately understand its surrounding conditions and environments in order to perform a responsive action,” Professor Gwanggil Jeon, leader of the project, said. “We devised a detection model based on YOLOv3, a well-known identification algorithm. The model was first used for 2D object detection and then modified for 3D objects,” he added.

The team fed RGB images and point cloud data as input to YOLOv3. The identification algorithm then outputs classification labels and bounding boxes with accompanying confidence scores. 
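The shape of that output can be illustrated with a short, hypothetical sketch; run_yolov3 below is a placeholder for the team’s trained model rather than a real API, and the 0.5 confidence threshold is an arbitrary example value.

    # Hypothetical sketch of the detector's inputs and outputs as described above.
    # run_yolov3 is a placeholder for the team's trained model, not a real API.
    from dataclasses import dataclass
    from typing import List

    import numpy as np


    @dataclass
    class Detection:
        label: str          # object class, e.g. "car" or "pedestrian"
        box: np.ndarray     # bounding box corners, e.g. [x_min, y_min, x_max, y_max]
        confidence: float   # score between 0 and 1


    def run_yolov3(rgb_image: np.ndarray, point_cloud: np.ndarray) -> List[Detection]:
        """Placeholder for the trained 2D/3D detector described in the paper."""
        raise NotImplementedError


    def keep_confident(detections: List[Detection], threshold: float = 0.5) -> List[Detection]:
        # Discard low-confidence detections before passing them downstream.
        return [d for d in detections if d.confidence >= threshold]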

The researchers then tested the performance of their system with the Lyft dataset and found that YOLOv3 was able to accurately detect 2D and 3D objects more than 96% of the time. The team sees many potential uses for their technology, including for autonomous vehicles, autonomous parking, autonomous delivery and for autonomous mobile robots. 

“At present, autonomous driving is being performed through LiDAR-based image processing, but it is predicted that a general camera will replace the role of LiDAR in the future. As such, the technology used in autonomous vehicles is changing every moment, and we are at the forefront,” Jeon said. “Based on the development of element technologies, autonomous vehicles with improved safety should be available in the next 5-10 years.”

The team’s research was recently published in IEEE Transactions on Intelligent Transportation Systems. Authors on the paper include Jeon; Imran Ahmed, from Anglia Ruskin University’s School of Computing and Information Sciences in Cambridge; and Abdellah Chehri, from the department of mathematics and computer science at the Royal Military College of Canada in Kingston, Canada. 


Intel Labs introduces open-source simulator for AI

SPEAR creates photorealistic simulation environments that provide challenging workspaces for training robot behavior. | Credit: Intel

Intel Labs collaborated with the Computer Vision Center in Spain, Kujiale in China, and the Technical University of Munich to develop the Simulator for Photorealistic Embodied AI Research (SPEAR). The result is a highly realistic, open-source simulation platform that accelerates the training and validation of embodied AI systems in indoor domains. The solution can be downloaded under an open-source MIT license.

Existing interactive simulators have limited content diversity, physical interactivity and visual fidelity. SPEAR’s more realistic simulation platform allows developers to train and validate embodied agents across a growing range of tasks and domains.

The goal of SPEAR is to drive research and commercialization of household robotics through the simulation of human-robot interaction scenarios.

It took a team of professional artists more than a year to construct a collection of high-quality, handcrafted, interactive environments. The SPEAR starter pack features more than 300 virtual indoor environments with more than 2,500 rooms and 17,000 objects that can be manipulated individually.

These interactive training environments use detailed geometry, photorealistic materials, realistic physics, and accurate lighting. New content packs targeting industrial and healthcare domains will be released soon.

The use of highly detailed simulation enables the development of more robust embodied AI systems. Roboticists can leverage simulated environments to train AI algorithms and optimize perception functions, manipulation, and spatial intelligence. The ultimate outcome is faster validation and a reduction in time-to-market.

In embodied AI, agents learn through interactions with physical environments. Capturing and collating these encounters in the real world can be time-consuming, labor-intensive and risky. Interactive simulations provide an environment in which to train and evaluate robots before deploying them in the real world.

Overview of SPEAR

SPEAR is designed based on three main requirements:

  1. Support a large, diverse, and high-quality collection of environments
  2. Provide sufficient physical realism to support realistic interactions and manipulation of a wide range of household objects
  3. Offer as much photorealism as possible, while still maintaining enough rendering speed to support training complex embodied agent behaviors

At its core, SPEAR is implemented on top of Unreal Engine, an industrial-strength game engine whose source code is available to developers. SPEAR environments are implemented as Unreal Engine assets, and SPEAR provides an OpenAI Gym interface for interacting with environments via Python.
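As a rough illustration of what that Gym interface implies, the toy environment below follows the standard reset/step contract with a trivial 2D agent; it is a stand-in for the interaction pattern only and is not SPEAR’s actual Python API.

    # Toy environment illustrating the OpenAI Gym contract (spaces, reset, step)
    # that Gym-style wrappers such as SPEAR's follow. This is a stand-in for the
    # interaction pattern only, not SPEAR's actual Python API.
    import gym
    import numpy as np
    from gym import spaces


    class ToySceneEnv(gym.Env):
        """A 2D point agent standing in for a photorealistic indoor scene."""

        def __init__(self):
            self.observation_space = spaces.Box(-10.0, 10.0, shape=(2,), dtype=np.float32)
            self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
            self._pos = np.zeros(2, dtype=np.float32)

        def reset(self):
            self._pos = np.zeros(2, dtype=np.float32)
            return self._pos.copy()

        def step(self, action):
            # Move the agent and keep it inside the workspace bounds.
            self._pos = np.clip(self._pos + np.asarray(action, dtype=np.float32), -10.0, 10.0)
            observation, reward, done, info = self._pos.copy(), 0.0, False, {}
            return observation, reward, done, info


    # The interaction loop looks the same regardless of how rich the scene is.
    env = ToySceneEnv()
    obs = env.reset()
    for _ in range(10):
        obs, reward, done, info = env.step(env.action_space.sample())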

SPEAR currently supports four distinct embodied agents:

  1. OpenBot Agent – well-suited for sim-to-real experiments, it provides identical image observations to a real-world OpenBot, implements an identical control interface, and has been modeled with accurate geometry and physical parameters
  2. Fetch Agent – modeled using accurate geometry and physical parameters, Fetch Agent is able to interact with the environment via a physically realistic gripper
  3. LoCoBot Agent – modeled using accurate geometry and physical parameters, LoCoBot Agent is able to interact with the environment via a physically realistic gripper
  4. Camera Agent – can be teleported anywhere within the environment to create images of the world from any angle

The agents return photorealistic, robot-centric observations from camera sensors, as well as odometry from wheel encoder states and joint encoder states. This is useful for validating kinematic models and predicting the robot’s operation.

For optimizing navigational algorithms, the agents can also return a sequence of waypoints representing the shortest path to a goal location, as well as GPS and compass observations that point directly to the goal. Agents can return pixel-perfect semantic segmentation and depth images, which is useful for correcting for inaccurate perception in downstream embodied tasks and gathering static datasets.

SPEAR currently supports two distinct tasks:

  • The Point-Goal Navigation Task randomly selects a goal position in the scene’s reachable space, computes a reward based on the agent’s distance to the goal, and triggers the end of an episode when the agent hits an obstacle or reaches the goal (see the sketch after this list).
  • The Freeform Task is an empty placeholder task that is useful for collecting static datasets.
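A minimal sketch of the point-goal reward and termination logic described above, assuming a Euclidean distance reward and a collision flag supplied by the simulator; the goal radius, reward sign and names are illustrative, not SPEAR’s actual implementation.

    # Illustrative reward/termination logic for a point-goal navigation task,
    # following the description above. The goal radius, reward sign and names
    # are assumptions, not SPEAR's actual implementation.
    import numpy as np

    GOAL_RADIUS = 0.25  # meters within which the goal counts as reached (assumed)


    def point_goal_step(agent_position, goal_position, hit_obstacle):
        """Return (reward, done) for one step of a point-goal episode."""
        distance = float(np.linalg.norm(np.asarray(goal_position) - np.asarray(agent_position)))
        reward = -distance                   # closer to the goal => higher (less negative) reward
        reached_goal = distance < GOAL_RADIUS
        done = hit_obstacle or reached_goal  # episode ends on collision or success
        return reward, done


    # Example: agent 1 m from the goal, no collision this step.
    print(point_goal_step([0.0, 0.0], [1.0, 0.0], hit_obstacle=False))  # (-1.0, False)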

SPEAR is available under an open-source MIT license, ready for customization on any hardware. For more details, visit the SPEAR GitHub page.


5 robotics predictions for 2023

The past few years have seen many organizations implement tech-driven changes at a rapid pace. As society becomes more digital, embracing technology and effectively managing new processes is key to the success of almost every business.

With rapid workplace transformation evident across industries, whether that’s moving to hybrid working or adopting new technologies, what can we expect from 2023? Here are five predictions for the coming year.

1. Turnkey solutions will make automation more accessible than ever

In recent years we have witnessed the development of many different types of sophisticated technologies. Advances in robotics, machine learning and other technologies have increased the pace of this change tenfold. While these promise to change or revolutionize the business world, all technology companies suffer from the same problem – they can’t be good at everything.

In the world of robotics, this is no different. Creating a robotic system requires hardware development, software development, application development, sensors, and interfaces to name a few. That’s why 2023 will be the year of turnkey solutions.

Original Equipment Manufacturers (OEMs) – companies creating new applications and products around existing technologies – will lie at the heart of this. They are able to drive innovation by combining technologies to deliver complete solutions for the most common applications, such as welding and palletizing. The result? Automation will become more sophisticated yet more straightforward to use than ever before.

Enabled Robotics, an OEM based in Denmark, is a great example of how this works. Since 2016 the company has been working to combine two types of cutting-edge technology by mounting collaborative robot arms (cobots) onto autonomous mobile robots (AMRs). This hybrid technology is now operating in industry, warehouse management and production and bringing robotics to service applications and hospital intralogistics.

Ultimately, these out-of-the-box solutions make it easier for companies to integrate crucial technologies and there is no limit to the imaginative ways companies will find to bring robots alongside humans in the world of work.

2. Manufacturers will turn towards modular production

Traditional industrial robots remain important in some parts of manufacturing, but we are seeing a trend towards deploying more flexible models of production. This is largely down to the fact that traditional industrial robots are typically large and fixed and entail complex deployment.

In contrast, cobots can perform a similar range of activities to traditional industrial robots but are smaller, lighter and much easier to deploy. They are designed to work alongside humans, so they pose less risk to safety and are better suited to environments that require flexibility and adaptability. On top of this, they are more cost-effective for businesses looking to deploy automation – a key consideration as we move into 2023.

The cobot industry is projected to grow to USD 2.2 billion by 2026 (The Collaborative Robot Market 2022 Report, Interact Analysis). As cobots continue to change the way work is done in applications such as packing, palletizing, welding and assembly, in 2023 we will see even larger companies turning to lightweight cobots to increase modularity in their production. Robot weight and versatility will be key specifications for those looking for new automation solutions and we will see more reconfigurable robotic work cells than ever before.

3. Higher payload and longer reach cobots will change the landscape for some applications

As more companies move towards cobot automation, many will still want to handle heavy payloads. The good news is that we have recently seen the introduction of several higher payload, longer reach cobots. In 2023 these will continue to transform parts of the manufacturing industry, improving the working lives of many employees.

This year, Universal Robots presented a new cobot, the UR20, which is built for higher payloads, faster speeds and superior motion control, all within a lightweight, small-footprint system. The 20 kg payload capacity will transform applications such as palletizing, while its 1750 mm reach is eagerly anticipated for use in welding. Manufacturers looking for that extra flexibility will find the robot light enough to be unbolted and relocated or attached to a heavy base with wheels. This will create new possibilities for applications and will drive innovation across the board. The UR20 will be delivered to customers in 2023.


Annual installations of industrial robots worldwide. | Source: IFR

4. Despite global uncertainties, long-term increases in industrial robot installations will continue

The recent IFR World Robotics Report showed industrial robot installations reached an all-time high in 2021, increasing by 31% over the previous year. Overall, worldwide annual robot installations more than doubled between 2015 and 2021. Although growth in 2022 appears to have been slower across the sector, this is largely down to global uncertainties triggered by the pandemic and the scarcity of electronic components.

We expect the upward trend of cobot automation to resume in 2023. Why? Because businesses across the world are facing labor and skills shortages and, despite the day-to-day challenges facing industry right now, we are in the midst of a transition toward Industry 5.0, where working alongside robots will create more human-centric, sustainable and resilient businesses.

5. Customers will be found at the heart of product development

Although we talk extensively about robot collaboration in the workplace, human collaboration is what drives innovation.

Customers understand their own needs better than anyone else and, as the automation market has matured, are better placed than ever before to offer valuable input on their requirements. This means robotics companies will involve customers much more in product development. It is why Universal Robots has reorganized its product creation teams and is focusing heavily on understanding the problems customers are facing before designing solutions.

Co-development projects, where robotics companies and customers work together to develop specific solutions, are also bound to increase in 2023 and beyond. Ultimately these allow customers to directly influence the product they are buying while delivering valuable feedback to the robotics companies – meaning they will be able to launch products that benefit the whole market.

Now more than ever, businesses need to innovate constantly and remain adaptable in order to survive and expand. As we head into 2023, they will rely ever more on technology and innovation to break new ground with turnkey solutions at the heart – all of which make the year ahead an exciting time for automation.

About the Author

Anders Beck is the Vice President of Strategy and Innovation at Universal Robots, a leading developer of collaborative robot arms. Prior to his time at Universal Robots, he held a number of positions at the Teknologisk Institut in Denmark, including head of industrial robotics and automation.


MIT researchers create implantable robotic ventilator


Ellen Roche with the soft, implantable ventilator designed by her and her team. | Source: MIT, M. Scott Brauer

Researchers at MIT have designed a soft, robotic implantable ventilator that can augment the diaphragm’s natural contractions. 

The implantable ventilator is made from two soft, balloon-like tubes that would be implanted to lie over the diaphragm. When inflated with an external pump, the tubes act as artificial muscles that push down on the diaphragm and help the lungs expand. The tubes can be inflated to match the diaphragm’s natural rhythm. 

The diaphragm lies just below the ribcage. It pushes down to create a vacuum for the lungs to expand into so they can draw air in, and then relaxes to let air out. 

The tubes in the ventilator are similar to McKibben actuators, a kind of pneumatic device. The team attached the tubes to the ribcage at either side of the diaphragm, so that the device lay across the muscle from front to back. Using a thin external airline, the team connected the tubes to a small pump and control system. 

This soft ventilator was designed by Ellen Roche, an associate professor of mechanical engineering and a member of the Institute for Medical Engineering and Science at MIT, and her colleagues. The research team created a proof-of-concept design for the ventilator. 

“This is a proof of concept of a new way to ventilate,” Roche told MIT News. “The biomechanics of this design are closer to normal breathing, versus ventilators that push air into the lungs, where you have a mask or tracheostomy. There’s a long road before this will be implanted in a human. But it’s exciting that we could show we could augment ventilation with something implantable.”

According to Roche, the key to maximizing the amount of work the implantable pump does is giving the diaphragm an extra push downward when it naturally contracts. This means the team didn’t have to mimic exactly how the diaphragm moves, just create a device capable of giving that push. 


The implantable ventilator is made from two tubes that lie across the diaphragm. | Source: MIT

Roche and her team tested the system on anesthetized pigs. After implanting the device, they monitored the pigs’ oxygen levels and used ultrasound imaging to observe diaphragm function. Generally, the team found that the ventilator increased the amount of air that the pigs’ lungs could draw in with each breath. The device worked best when the contractions of the diaphragm and the artificial muscles were working in sync, allowing the pigs’ lungs to bring in three times the amount of air they could without assistance. 

The team hopes that its device could help people struggling with chronic diaphragm dysfunctions, which can be caused by ALS, muscular dystrophy and other neuromuscular diseases, paralysis and damage to the phrenic nerve. 

The research team included Roche; former MIT graduate student Lucy Hu; Manisha Singh; Diego Quevedo Moreno; Jean Bonnemain of Lausanne University Hospital in Switzerland; and Mossab Saeed and Nikolay Vasilyev of Boston Children’s Hospital. 
