Should you use ROS as an interface layer? 

A robotic painting system developed for a leading aerospace manufacturer. | Photo Credit: Aerobotix

When it comes to operating and controlling robots, there are a variety of options that engineers can consider. These include robotic simulation software, artificial intelligence (AI), and a host of other off-the-shelf software packages that have been designed for specific applications.

When clients present our robotics company, Aerobotix, with challenging problems, we often reach for an open-source middleware option such as the Robot Operating System (ROS). ROS is built on a framework focused on automation, reliability and flexibility. A key benefit of an open-source framework is its large contributor community, which continuously develops and improves it.

Why my team chooses ROS

ROS provides a dynamic backbone for creating new systems around a whole host of sensor and hardware packages. This freedom is perfect for our company’s robotic systems, which combine devices such as motors, lasers, LiDARs and safety hardware. We’ve been able to find manufacturers that have developed their own hardware drivers and interfaces that pair easily with ROS.

Pairing these drivers with our custom solutions is a complex process because of the dynamic framework on which ROS is built. Some of these solutions were developed on short timelines, so we looked to the ROS community for support and contracted individuals skilled in ROS development. These contractors helped us build expertise in areas such as point cloud manipulation and automated navigation.


Traditional robot setup vs. ROS setup

The building blocks of robotics automation traditionally include a human-machine interface (HMI), a programmable logic controller (PLC) and the robot itself. In this basic setup, the PLC acts as the main interface layer, the middleman for the control system, and all communication goes through it. If the HMI or the robot makes a request, the PLC answers it. The main constraint of this setup is that you’re stuck with “simple bits and bytes,” so more advanced problems can’t be solved.

Using ROS alongside a traditional setup adds new capabilities on top of those bits and bytes, including advanced devices such as LiDAR that can be used to build your own vision system. For example, LiDARs produce “point clouds” that can be used for navigation, part detection and even object recognition.
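As a concrete illustration, here is a minimal sketch of a ROS 2 node, written with rclpy, that subscribes to a LiDAR point cloud topic. It is illustrative only: the topic name '/lidar/points' is hypothetical, and any driver publishing sensor_msgs/PointCloud2 messages could feed it.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2

class PointCloudListener(Node):
    def __init__(self):
        super().__init__('point_cloud_listener')
        # Subscribe to the (hypothetical) LiDAR driver's output topic, queue depth 10.
        self.subscription = self.create_subscription(
            PointCloud2, '/lidar/points', self.on_cloud, 10)

    def on_cloud(self, msg):
        # width * height gives the number of points in this scan.
        self.get_logger().info(
            'Received cloud with %d points' % (msg.width * msg.height))

def main():
    rclpy.init()
    node = PointCloudListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()

From a node like this, each incoming cloud can be handed off to navigation, part detection or object recognition pipelines.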

Case study: collaborative mobile robot for Air Force maintenance depots

Our company’s first application of ROS came while working as the robotics partner on what became an award-winning project, the adaptive radome diagnostic system (ARDS), which introduced a collaborative mobile robot to U.S. Air Force maintenance depots.

This system uses sensors that transmit microwave signals to perform non-destructive evaluation (NDE) of aircraft radomes, identifying defects such as delamination or water ingress in the composite structure. We developed a system integrating a FANUC CRX-10iA collaborative robot, a LiDAR vision system and a custom automated guided vehicle (AGV). The robot scans the warehouse with the LiDAR, navigates to the part, orients itself normal to the part’s surface, generates an inspection path and outputs a detailed part analysis.
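One step in a pipeline like that is orienting the tool normal to the part’s surface. The sketch below shows the general idea using Open3D to estimate surface normals from a captured scan; it is an illustration of the technique, not Aerobotix’s implementation, and the file name 'radome_scan.pcd' is hypothetical.

import numpy as np
import open3d as o3d

# Load a captured LiDAR scan of the part (hypothetical file name).
cloud = o3d.io.read_point_cloud('radome_scan.pcd')

# Estimate per-point normals from local neighborhoods.
cloud.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Flip normals so they point back toward the sensor at the origin.
cloud.orient_normals_towards_camera_location(np.zeros(3))

# Average the normals over the scanned region to get a single approach vector
# that the robot can align its tool axis against.
approach = np.asarray(cloud.normals).mean(axis=0)
approach /= np.linalg.norm(approach)
print('Approach vector (unit normal):', approach)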

As this was our first application of ROS, we went through a steep learning curve to understand the various ROS components: services, nodes, publishers and topics. Thorough online documentation and vast community support helped demystify them.
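For readers facing the same learning curve, the sketch below puts those pieces side by side in one small ROS 2 node: a publisher streaming messages onto a topic and a service answering request/response calls. The names inspection_node, inspection_status and start_inspection are made up for illustration and are not from the ARDS project.

import rclpy
from rclpy.node import Node
from std_msgs.msg import String
from std_srvs.srv import Trigger

class InspectionNode(Node):
    def __init__(self):
        super().__init__('inspection_node')
        # Publisher: streams status messages onto a named topic.
        self.status_pub = self.create_publisher(String, 'inspection_status', 10)
        # Service: answers request/response calls from other nodes.
        self.start_srv = self.create_service(Trigger, 'start_inspection', self.handle_start)
        # Timer: publish a heartbeat once per second.
        self.timer = self.create_timer(1.0, self.publish_status)

    def publish_status(self):
        msg = String()
        msg.data = 'idle'
        self.status_pub.publish(msg)

    def handle_start(self, request, response):
        # std_srvs/Trigger has an empty request and a success/message response.
        response.success = True
        response.message = 'Inspection started'
        return response

def main():
    rclpy.init()
    rclpy.spin(InspectionNode())
    rclpy.shutdown()

if __name__ == '__main__':
    main()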

Case study: robotic painting system for a leading aerospace manufacturer

This client was looking toward the future and wanted a more dynamic solution than traditional robotics methods could achieve. The request was for an automated part-detection system with a laundry list of features: non-contact sensing with no dedicated robotic motion, able to detect and locate multiple aircraft components within a hazardous C1D1-rated paint booth to ±0.50-inch accuracy, all from a single click.

ROS is at the core of the vision system we developed. The system begins with a recorded point cloud containing the robots and the aircraft components. By associating 3D models, provided by the customer, with the point cloud, we were able to locate the parts relative to the robot. That relationship lets us adapt robotic motion paths to the parts newly loaded into the paint booth, pushing the boundaries of what is possible.
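The sketch below shows the general technique, assuming Open3D for the registration step (the article does not name a specific library). A customer-supplied CAD model is sampled into a point cloud and aligned to the recorded booth scan with ICP; the resulting transform places the part in the robot’s frame. The file names and the identity initial guess are hypothetical.

import numpy as np
import open3d as o3d

# Recorded point cloud of the paint booth (in the robot frame) and the part model.
scene = o3d.io.read_point_cloud('booth_scan.pcd')
model_mesh = o3d.io.read_triangle_mesh('aircraft_part.stl')
model = model_mesh.sample_points_uniformly(number_of_points=50000)

# Coarse initial guess; in practice a global registration or a known staging
# pose would seed ICP instead of the identity.
init = np.eye(4)

# Refine the alignment with point-to-point ICP (correspondence distance in meters).
result = o3d.pipelines.registration.registration_icp(
    model, scene, 0.02, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# result.transformation maps model coordinates into the booth/robot frame,
# which is what the motion planner needs to retarget paths to the new part.
print('Estimated part pose:')
print(result.transformation)
print('Fitness:', result.fitness)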

ROS works for you

Every project has its own unique challenges, which means each must be assessed and solved with a customized solution. Delving into the ROS ecosystem has helped my team expand beyond traditional robotics and deepened our understanding of advanced sensor technology.

We would encourage any engineer to add ROS to their toolkit and start exploring its unique applications.

About the Author

Aaron Feick is a lead software engineer at Aerobotix, an innovative leader in robotic solutions for the aerospace and defense industries. Headquartered in Huntsville, Alabama, the company specializes in the creation of cutting-edge automated robotic solutions for high-value, high-precision components, aircraft and vehicles.
