Researchers develop AV object detection system with 96% accuracy

A Waymo autonomous vehicle. | Source: Waymo

An international research team at Incheon National University in South Korea has created an Internet-of-Things (IoT)-enabled, real-time object detection system that can identify objects with 96% accuracy.

The team of researchers created an end-to-end neural network that works with their IoT technology to detect objects with high accuracy in 2D and in 3D. The system is based on deep learning specialized for autonomous driving situations. 

“For autonomous vehicles, environment perception is critical to answer a core question, ‘What is around me?’ It is essential that an autonomous vehicle can effectively and accurately understand its surrounding conditions and environments in order to perform a responsive action,” Professor Gwanggil Jeon, leader of the project, said. “We devised a detection model based on YOLOv3, a well-known identification algorithm. The model was first used for 2D object detection and then modified for 3D objects,” he elaborated.

The team fed RGB images and point cloud data as input to YOLOv3. The identification algorithm then outputs classification labels, bounding boxes and accompanying confidence scores.
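
As a rough illustration of that input/output flow, the sketch below runs a pretrained 2D YOLOv3 detector from PyTorch Hub on a single image and prints labels, boxes and confidence scores. It is only a sketch: the image file name is a placeholder, and the researchers' actual model also fuses point cloud data for 3D detection.

    # Minimal 2D-detection sketch with a pretrained YOLOv3 model from PyTorch Hub.
    # The image name is a placeholder; this is NOT the researchers' exact pipeline,
    # which also fuses LiDAR point clouds for 3D detection.
    import torch

    model = torch.hub.load("ultralytics/yolov3", "yolov3", pretrained=True)
    results = model("street_scene.jpg")  # path, URL or numpy array

    # Each row: x1, y1, x2, y2, confidence, class index
    for *box, conf, cls in results.xyxy[0].tolist():
        print(f"{model.names[int(cls)]}: conf={conf:.2f}, box={[round(v, 1) for v in box]}")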

The researchers then tested the performance of their system with the Lyft dataset and found that YOLOv3 was able to accurately detect 2D and 3D objects more than 96% of the time. The team sees many potential uses for its technology, including autonomous vehicles, autonomous parking, autonomous delivery and autonomous mobile robots.

“At present, autonomous driving is being performed through LiDAR-based image processing, but it is predicted that a general camera will replace the role of LiDAR in the future. As such, the technology used in autonomous vehicles is changing every moment, and we are at the forefront,” Jeon said. “Based on the development of element technologies, autonomous vehicles with improved safety should be available in the next 5-10 years.”

The team’s research was recently published in IEEE Transactions on Intelligent Transportation Systems. Authors on the paper include Jeon; Imran Ahmed, from Anglia Ruskin University’s School of Computing and Information Sciences in Cambridge; and Abdellah Chehri, from the department of mathematics and computer science at the Royal Military College of Canada in Kingston, Canada.

Intel Labs introduces open-source simulator for AI

SPEAR creates photorealistic simulation environments that provide challenging workspaces for training robot behavior. | Credit: Intel

Intel Labs collaborated with the Computer Vision Center in Spain, Kujiale in China, and the Technical University of Munich to develop the Simulator for Photorealistic Embodied AI Research (SPEAR). The result is a highly realistic, open-source simulation platform that accelerates the training and validation of embodied AI systems in indoor domains. The solution can be downloaded under an open-source MIT license.

Existing interactive simulators have limited content diversity, physical interactivity, and visual fidelity. This realistic simulation platform allows developers to train and validate embodied agents across a growing range of tasks and domains.

The goal of SPEAR is to drive research and commercialization of household robotics through the simulation of human-robot interaction scenarios.

Working with a team of professional artists, the developers spent more than a year constructing a collection of high-quality, handcrafted, interactive environments. The SPEAR starter pack features more than 300 virtual indoor environments with more than 2,500 rooms and 17,000 objects that can be manipulated individually.

These interactive training environments use detailed geometry, photorealistic materials, realistic physics, and accurate lighting. New content packs targeting industrial and healthcare domains will be released soon.

The use of highly detailed simulation enables the development of more robust embodied AI systems. Roboticists can leverage simulated environments to train AI algorithms and optimize perception functions, manipulation, and spatial intelligence. The ultimate outcome is faster validation and a reduction in time-to-market.

In embodied AI, agents learn by interacting with physical variables in their environment. Capturing and collating these encounters can be time-consuming, labor-intensive, and risky. The interactive simulations provide an environment to train and evaluate robots before deploying them in the real world.

Overview of SPEAR

SPEAR is designed based on three main requirements:

  1. Support a large, diverse, and high-quality collection of environments
  2. Provide sufficient physical realism to support realistic interactions and manipulation of a wide range of household objects
  3. Offer as much photorealism as possible, while still maintaining enough rendering speed to support training complex embodied agent behaviors

At its core, SPEAR was implemented on top of Unreal Engine, an industrial-strength game engine. SPEAR environments are implemented as Unreal Engine assets, and SPEAR provides an OpenAI Gym interface for interacting with environments via Python.
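
Because SPEAR exposes a Gym-style Python interface, interacting with an environment follows the familiar reset/step loop. The sketch below is illustrative only; the package import, environment ID and exact call signatures are placeholders rather than SPEAR's documented API (see the project's GitHub page for the real usage).

    # Hypothetical Gym-style interaction loop. The environment ID and any
    # SPEAR-specific configuration are placeholders, not the documented API.
    import gym
    # import spear  # assumed package that registers SPEAR environments with Gym

    env = gym.make("SpearEnv-v0")                    # placeholder environment ID
    obs = env.reset()
    for _ in range(100):
        action = env.action_space.sample()           # random policy for illustration
        obs, reward, done, info = env.step(action)   # photorealistic observations, task reward
        if done:
            obs = env.reset()
    env.close()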

SPEAR currently supports four distinct embodied agents:

  1. OpenBot Agent – well-suited for sim-to-real experiments, it provides identical image observations to a real-world OpenBot, implements an identical control interface, and has been modeled with accurate geometry and physical parameters
  2. Fetch Agent – modeled using accurate geometry and physical parameters, Fetch Agent is able to interact with the environment via a physically realistic gripper
  3. LoCoBot Agent – modeled using accurate geometry and physical parameters, LoCoBot Agent is able to interact with the environment via a physically realistic gripper
  4. Camera Agent – can be teleported anywhere within the environment to capture images of the world from any angle

The agents return photorealistic, robot-centric observations from camera sensors, odometry from wheel encoder states, and joint encoder states. This is useful for validating kinematic models and predicting the robot’s operation.

For optimizing navigational algorithms, the agents can also return a sequence of waypoints representing the shortest path to a goal location, as well as GPS and compass observations that point directly to the goal. Agents can return pixel-perfect semantic segmentation and depth images, which is useful for correcting for inaccurate perception in downstream embodied tasks and gathering static datasets.

SPEAR currently supports two distinct tasks:

  • The Point-Goal Navigation Task randomly selects a goal position in the scene’s reachable space, computes a reward based on the agent’s distance to the goal (a minimal reward sketch follows this list), and triggers the end of an episode when the agent hits an obstacle or reaches the goal.
  • The Freeform Task is an empty placeholder task that is useful for collecting static datasets.
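
To make the distance-based reward in the Point-Goal Navigation Task concrete, here is a minimal sketch of one common shaping scheme. SPEAR's exact reward values and termination rules are not specified here, so the numbers below are illustrative assumptions.

    # Illustrative point-goal reward under an assumed "negative distance to goal"
    # shaping scheme; SPEAR's actual reward computation may differ.
    import numpy as np

    def point_goal_reward(agent_pos, goal_pos, hit_obstacle, goal_radius=0.5):
        dist = float(np.linalg.norm(np.asarray(goal_pos) - np.asarray(agent_pos)))
        if hit_obstacle:
            return -10.0, True       # collision penalty, episode ends
        if dist < goal_radius:
            return 10.0, True        # success bonus, episode ends
        return -dist, False          # dense shaping reward, episode continues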

SPEAR is available under an open-source MIT license, ready for customization on any hardware. For more details, visit the SPEAR GitHub page.

5 robotics predictions for 2023

The past few years have seen many organizations implement tech-driven changes at a rapid pace. As society becomes more digital, embracing technology and effectively managing new processes is key to the success of almost every business.

With rapid workplace transformation evident across industries, whether that’s moving to hybrid working or adopting new technologies, what can we expect from 2023? Here are five predictions for the coming year.

1. Turnkey solutions will make automation more accessible than ever

In recent years we have witnessed the development of many different types of sophisticated technologies. Advances in robotics, machine learning and other technologies have increased the pace of this change tenfold. While these promise to change or revolutionize the business world, all technology companies suffer from the same problem – they can’t be good at everything.

In the world of robotics, this is no different. Creating a robotic system requires hardware development, software development, application development, sensors, and interfaces to name a few. That’s why 2023 will be the year of turnkey solutions.

Original Equipment Manufacturers (OEMs) – companies creating new applications and products around existing technologies – will lie at the heart of this. They are able to drive innovation by combining technologies to deliver complete solutions for the most common applications, such as welding and palletizing. The result? Automation will become more sophisticated yet more straightforward to use than ever before.

Enabled Robotics, an OEM based in Denmark, is a great example of how this works. Since 2016 the company has been working to combine two types of cutting-edge technology by mounting collaborative robot arms (cobots) onto autonomous mobile robots (AMRs). This hybrid technology is now operating in industrial settings, warehouse management and production, and is bringing robotics to service applications and hospital intralogistics.

Ultimately, these out-of-the-box solutions make it easier for companies to integrate crucial technologies and there is no limit to the imaginative ways companies will find to bring robots alongside humans in the world of work.

2. Manufacturers will turn towards modular production

Traditional industrial robots remain important in some parts of manufacturing, but we are seeing a trend towards deploying more flexible models of production. This is largely down to the fact that traditional industrial robots are typically large and fixed and entail complex deployment.

In contrast, cobots can perform a similar range of activities to traditional industrial robots but are smaller, lighter and much easier to deploy. They are designed to work alongside humans, so they pose less risk to safety and are better suited to environments that require flexibility and adaptability. On top of this, they are more cost-effective for businesses looking to deploy automation – a key consideration as we move into 2023.

The cobot industry is projected to grow to USD 2.2 billion by 2026 (The Collaborative Robot Market 2022 Report, Interact Analysis). As cobots continue to change the way work is done in applications such as packing, palletizing, welding and assembly, in 2023 we will see even larger companies turning to lightweight cobots to increase modularity in their production. Robot weight and versatility will be key specifications for those looking for new automation solutions and we will see more reconfigurable robotic work cells than ever before.

3. Higher payload and longer reach cobots will change the landscape for some applications

As more companies move towards cobot automation, many will still want to handle heavy payloads. The good news is that we have recently seen the introduction of several higher payload, longer reach cobots. In 2023 these will continue to transform parts of the manufacturing industry, improving the working lives of many employees.

This year, Universal Robots presented a new cobot, the UR20, which is built for higher payloads, faster speeds and superior motion control, all within a lightweight, small-footprint system. The 20 kg payload capacity will transform applications such as palletizing, while its 1750 mm reach is eagerly anticipated for welding. Manufacturers looking for extra flexibility will find the robot light enough to be unbolted and relocated, or attached to a heavy base with wheels. This will create new possibilities for applications and will drive innovation across the board. The UR20 will be delivered to customers in 2023.

Annual installations of industrial robots worldwide. | Source: IFR

4. Despite global uncertainties, long-term increases in industrial robot installations will continue

The recent IFR World Robotics Report showed industrial robot installations reached an all-time high in 2021, increasing by 31% over the previous year. Overall, worldwide annual robot installations more than doubled between 2015 and 2021. Although growth in 2022 appears to be slower across the sector, this is largely down to global uncertainties triggered by the pandemic and the scarcity of electronic components.

We expect the upward trend of cobot automation to resume in 2023. Why? Because businesses across the world are facing labor and skills shortages and, despite the day-to-day challenges facing industry right now, we are in the midst of a transition toward Industry 5.0, where working alongside robots will create more human-centric, sustainable and resilient businesses.

5. Customers will be found at the heart of product development

Although we talk extensively about robot collaboration in the workplace, human collaboration is what drives innovation.

Customers understand their own needs better than anyone else and, as the automation market has matured, are better placed than ever before to offer valuable input on their requirements. This means robotics companies will involve customers much more in product development. It is why Universal Robots has reorganized its product creation teams and is focusing heavily on understanding the problems customers are facing before designing solutions.

Co-development projects where robotics companies and customers work together in developing specific solutions are also bound to increase in 2023 and beyond. Ultimately these allow customers to directly influence the product they are buying, while at the same time delivering valuable feedback for the robotic companies – meaning they will be able to launch a product to the benefit of the whole market.

Now more than ever, businesses need to innovate constantly and remain adaptable in order to survive and expand. As we head into 2023, they will rely ever more on technology and innovation to break new ground with turnkey solutions at the heart – all of which make the year ahead an exciting time for automation.

About the Author

Anders Beck is the Vice President of Strategy and Innovation at Universal Robots, a leading developer of collaborative robot arms. Prior to his time at Universal Robots, he held a number of positions at the Teknologisk Institut in Denmark, including head of industrial robotics and automation.

MIT researchers create implantable robotic ventilator

Ellen Roche with the soft, implantable ventilator designed by her and her team. | Source: MIT, M. Scott Brauer

Researchers at MIT have designed a soft, robotic implantable ventilator that can augment the diaphragm’s natural contractions. 

The implantable ventilator is made from two soft, balloon-like tubes that would be implanted to lie over the diaphragm. When inflated with an external pump, the tubes act as artificial muscles that push down the diaphragm and help the lungs expand. The tubes can be inflated to match the diaphragm’s natural rhythm. 

The diaphragm lies just below the ribcage. It pushes down to create a vacuum for the lungs to expand into so they can draw air in, and then relaxes to let air out. 

The tubes in the ventilator are similar to McKibben actuators, a kind of pneumatic device. The team attached the tubes to the ribcage at either side of the diaphragm, so that the device lay across the muscle from front to back. Using a thin external airline, the team connected the tubes to a small pump and control system.

This soft ventilator was designed by Ellen Roche, an associate professor of mechanical engineering and a member of the Institute for Medical Engineering and Science at MIT, and her colleagues. The research team created a proof-of-concept design for the ventilator.

“This is a proof of concept of a new way to ventilate,” Roche told MIT News. “The biomechanics of this design are closer to normal breathing, versus ventilators that push air into the lungs, where you have a mask or tracheostomy. There’s a long road before this will be implanted in a human. But it’s exciting that we could show we could augment ventilation with something implantable.”

According to Roche, the key to maximizing the work the implantable pump does is to give the diaphragm an extra push downward when it naturally contracts. This means the team didn’t have to mimic exactly how the diaphragm moves; it only had to create a device capable of giving that push.

The implantable ventilator is made from two tubes that lie across the diaphragm. | Source: MIT

Roche and her team tested the system on anesthetized pigs. After implanting the device, they monitored the pigs’ oxygen levels and used ultrasound imaging to observe diaphragm function. Generally, the team found that the ventilator increased the amount of air that the pigs’ lungs could draw in with each breath. The device worked best when the contractions of the diaphragm and the artificial muscles were working in sync, allowing the pigs’ lungs to bring in three times the amount of air they could without assistance. 

The team hopes that its device could help people struggling with chronic diaphragm dysfunctions, which can be caused by ALS, muscular dystrophy and other neuromuscular diseases, paralysis and damage to the phrenic nerve. 

The research team included Roche; former MIT graduate student Lucy Hu; Manisha Singh; Diego Quevedo Moreno; Jean Bonnemain of Lausanne University Hospital in Switzerland; and Mossab Saeed and Nikolay Vasilyev of Boston Children’s Hospital.

Should you use ROS as an interface layer? 

A robotic painting system developed for a leading aerospace manufacturer. | Photo Credit: Aerobotix

When it comes to operating and controlling robots, there are a variety of options that engineers can consider. These include robotic simulation software, artificial intelligence (AI), and a host of other off-the-shelf software packages that have been designed for specific applications.

When clients present our robotics company, Aerobotix, with challenging problems, we often decide to use an open-source middleware option such as Robot Operating System (ROS). ROS has been built on a framework focused on automation, reliability and flexibility. The benefit of using an open-source framework is that it includes a large contributing community, which is continuously developing and improving.

Why my team chooses ROS

ROS provides a dynamic backbone for creating new systems with a whole host of sensor packages. This flexibility is perfect for our company’s robotic systems, which rely on hardware such as motors, lasers, LiDARs and safety devices. We’ve been able to find manufacturers that have developed their own hardware drivers and interfaces to pair easily with ROS.

Pairing these drivers with our custom solutions is a complex process due to the dynamic framework on which ROS is built. Some of these solutions were developed on short timelines, so we looked to the ROS community for support and contracted individuals skilled in ROS development. These contractors helped us build expertise in areas such as point cloud manipulation and automated navigation.

Traditional robot set-up vs. ROS setup

The building blocks of robotics automation traditionally include: a human-machine interface (HMI), a programmable logic controller (PLC) and the robot itself. In this basic setup, the PLC acts as the main interface layer — or middleman — for the control system, and all communication goes through the PLC. If you have a request from the HMI or the robot, the PLC answers it. The main constraint with this setup is that you’re stuck with “simple bits and bytes” and more advanced problems can’t be solved.

Using ROS alongside a traditional setup introduces additional capabilities to these bits and bytes. These additions include advanced devices, such as LiDAR, which may be used to create your own vision system. For example, LiDARs create “point clouds” that can be used for navigation, part detection and even object recognition.
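
As a simple illustration of what this looks like in practice, the sketch below subscribes to a LiDAR point cloud topic from a ROS 1 (rospy) node. The topic name is a placeholder, and the callback is a trivial stand-in for real navigation or part-detection logic.

    #!/usr/bin/env python
    # Minimal ROS 1 node that listens to a LiDAR point cloud. "/lidar/points" is a
    # placeholder topic; actual driver topics vary by manufacturer.
    import rospy
    from sensor_msgs.msg import PointCloud2
    from sensor_msgs import point_cloud2

    def on_cloud(msg):
        # Trivial stand-in for navigation / part-detection processing.
        points = point_cloud2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True)
        rospy.loginfo("received %d points", sum(1 for _ in points))

    if __name__ == "__main__":
        rospy.init_node("lidar_listener")
        rospy.Subscriber("/lidar/points", PointCloud2, on_cloud)
        rospy.spin()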

Case study: collaborative mobile robot for Air Force maintenance depots

Our company’s first application of ROS was while working as the robotics partner on what became an award-winning project — an adaptive radome diagnostic system (ARDS). This introduced the use of a collaborative mobile robot in U.S. Air Force maintenance depots.

This system uses sensors that transmit microwave signals to non-destructively evaluate (NDE) aircraft radomes and identify defects such as delamination or water ingress in the composite structure. We developed a system integrating a FANUC CRX-10iA collaborative robot, a LiDAR vision system and a custom automated guided vehicle (AGV). This robot scans the warehouse with the LiDAR, navigates to the part, orients normal to the part, creates an inspection path, and outputs a detailed part analysis.

As this was our first application of ROS, we went through a steep learning curve to understand the various ROS components: services, nodes, publishers and topics. Online documentation and extensive community support helped demystify the process.

Case study: robotic painting system for leading aerospace manufacturer

This client was looking toward the future and wanted a more dynamic solution than traditional robotics methods could achieve. The request was for an automated part detection system with a long list of features: non-contact, non-robotic detection that finds multiple aircraft components within a hazardous C1D1-rated paint booth to ±0.50-inch accuracy, all from a single click.

ROS is at the core of the vision system we developed. This system begins with a recorded point cloud containing the robots and the aircraft components. By associating 3D models provided by the customer with the point cloud, we were able to locate the parts relative to the robot. This relationship lets us adapt robotic motion paths to the newly loaded parts in the paint booth, pushing the boundaries of what is possible.
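
A common way to associate a 3D model with a recorded point cloud is rigid registration, for example iterative closest point (ICP). The sketch below uses the open-source Open3D library with placeholder file names; it illustrates the general idea rather than Aerobotix's actual pipeline.

    # Align a customer-supplied part model to a recorded booth scan with ICP.
    # File names are placeholders; the 0.02 m correspondence threshold is an
    # assumed value. ICP also assumes a reasonable initial alignment in practice.
    import open3d as o3d

    model = o3d.io.read_point_cloud("aircraft_component.ply")
    scene = o3d.io.read_point_cloud("booth_scan.ply")

    result = o3d.pipelines.registration.registration_icp(
        model, scene, 0.02,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    print("Estimated part pose (4x4 transform) relative to the scan frame:")
    print(result.transformation)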

ROS works for you

Every project has its own unique challenges, which means each must be assessed and solved using a customized solution. Delving into the ROS ecosystem has aided my team in expanding beyond traditional robotics and furthered our understanding of advanced sensor technology.

We would encourage any engineer to add ROS to their toolkit and start exploring its unique applications.

About the Author

Aaron Feick is a lead software engineer at Aerobotix, an innovative leader in robotic solutions for the aerospace and defense industries. Headquartered in Huntsville, Alabama, the company specializes in the creation of cutting-edge automated robotic solutions for high-value, high-precision components, aircraft and vehicles.

Australia establishes National Robotics Strategy Advisory Committee

Australia-based robotics company Lyro Robotics makes an autonomous packing robot. | Source: Lyro Robotics

Ed Husic, Australia’s Minister for Industry and Science, appointed a National Robotics Strategy Advisory Committee. The committee will help to guide Australia’s strategy for emerging automation technologies. 

The committee will develop a national robotics strategy to help the country harness robotics and automation opportunities. It will examine robotics across every industry, from advanced manufacturing to agriculture.

“We have brought together some of the nation’s leading robotics and technology thinkers and practitioners to guide the way we develop and use robotics,” Husic said. “Australia has a lot of the key elements that can help in the development of national robotics capabilities: our people, research and manufacturing skills. And while we’re recognized as possessing strength in field robotics, we can do better, across a wider range of activities.”

The National Robotics Strategy Advisory Committee is chaired by Professor Bronwyn Fox, the Chief Scientist of CSIRO, Australia’s national science agency. 

Other members of the committee include:

  • Catherine Ball, an associate professor at the Australian National University 
  • Andrew Dettmer, the National President of the Australian Manufacturing Workers’ Union 
  • Hugh Durrant-Whyte, the NSW chief scientist and engineer 
  • Sue Keay, the founder and chair of the Robotics Australia Group
  • Simon Lucey, the director of the Australian Institute of Machine Learning 
  • Julia Powles, the director of the UWA Minderoo Tech & Policy Lab
  • Mike Zimmerman, a partner at Main Sequence Ventures

“Australian-made and maintained robotics and automation systems have the potential to boost local manufacturing, open up export opportunities and create safer and more productive work environments,” Husic said.

Husic also said that the National Robotics Strategy Advisory Committee will aim to build the nation’s robotics strength while also developing human skills, so that Australians still have access to secure, well-paying jobs. Husic asked for the strategy to be finalized by March 2023.
