Insect-inspired AI for autonomous robots

Small autonomous mobile robots, such as drones, rovers, and legged robots, promise to perform a wide range of tasks, from autonomously monitoring crops in greenhouses to last-kilometer delivery. These applications require robots to operate for extended periods while performing complex tasks, often in unknown, changing, and complicated environments.

Researchers develop algorithm to divvy up tasks for human-robot teams

As robots increasingly join people on the factory floor, in warehouses and elsewhere on the job, dividing up who will do which tasks grows in complexity and importance. People are better suited for some tasks, robots for others. And in some cases, it is advantageous to spend time teaching a robot to do a task now and reap the benefits later.

Aerial imaging technique improves ability to detect and track moving targets through thick foliage

In forests where the foliage is thick, it can be challenging to detect and track moving targets, such as people and animals, using the current technology for collecting aerial images and videos. Researchers have developed a drone-operated 1D camera array that uses airborne optical sectioning to detect and track moving people in a dense forest. This new technique can be a helpful addition to the technology used in search and rescue missions.

MIT researchers help robots navigate uncertain environments

MIT CSAIL

MIT researchers have developed a trajectory-planning system for autonomous robots in unpredictable environments. | Source: Jose-Luis Olivares, MIT based on figure courtesy of the researchers

Researchers at MIT have developed a technique that can guide an autonomous robot through unknown environmental conditions. The technique helps a robot avoid obstacles without knowing the size, shape or location of what it could encounter. 

The research team hopes that its findings could help autonomous robots explore remote exoplanets, where neither the robot nor the researchers who programmed it know what it will encounter. 

“Future robotic space missions need risk-aware autonomy to explore remote and extreme worlds for which only highly uncertain prior knowledge exists. In order to achieve this, trajectory-planning algorithms need to reason about uncertainties and deal with complex uncertain models and safety constraints,” co-lead author on the paper, Ashkan Jasour, said. 

MIT’s team couldn’t use typical trajectory planning methods that make assumptions about the vehicle, obstacles and environment. These methods are too simplistic for real-world settings. Instead, the team developed an algorithm that could determine the probability of observing different conditions or obstacles at different locations.

The algorithm determines the probability of these events based on a map or images the robot collects with its perception system. This approach formulates trajectory planning as a probabilistic optimization problem, a mathematical programming framework which lets the robot achieve planning objectives while avoiding obstacles. 

“Our challenge was how to reduce the size of the optimization and consider more practical constraints to make it work. Going from good theory to good application took a lot of effort,” Jasour said.

The researchers then used higher-order statistics of the probability distributions of the uncertainties to convert the probabilistic optimization problem into a more straightforward deterministic optimization problem, which can be solved efficiently with off-the-shelf solvers. 
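As a toy illustration of this kind of reformulation (not the paper's actual method, and with all numbers and names invented here), a Gaussian chance constraint on a robot's clearance from an obstacle can be rewritten as a deterministic inequality using the inverse normal CDF:

```python
# Sketch: turning a probabilistic (chance) constraint into a deterministic one.
# Illustrative only; the MIT work handles richer, non-Gaussian uncertainties.
from statistics import NormalDist

def deterministic_margin(mu, sigma, epsilon):
    """P(position <= bound) >= 1 - epsilon, with position ~ N(mu, sigma^2),
    is equivalent to:  mu + sigma * Phi^{-1}(1 - epsilon) <= bound."""
    z = NormalDist().inv_cdf(1.0 - epsilon)  # inverse standard-normal CDF
    return mu + sigma * z

# Predicted distance along an axis: mean 2.0 m, std 0.5 m.
# Require the violation probability to stay below 1%.
bound = 3.5
required = deterministic_margin(2.0, 0.5, 0.01)
print(required <= bound)  # True: the deterministic check certifies safety
```

Once every chance constraint is replaced by a deterministic bound like this, a standard numerical solver can handle the whole trajectory-optimization problem.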

MIT’s team tested the technique in simulated navigation scenarios. In one, an underwater model, the algorithm had to chart a course from an uncertain starting position, around obstacles, to a goal region. The system safely reached the goal 99% of the time. Depending on the complexity of the environment, the algorithm can plan a safe course in seconds or minutes. 

The next step for the team is to develop more efficient processes that significantly reduce runtime. Co-authors on the paper include Jasour, a former Computer Science and Artificial Intelligence Laboratory (CSAIL) research scientist who now works at NASA’s Jet Propulsion Laboratory, and Weiqiao Han, a graduate student in the Department of Electrical Engineering and Computer Science and a member of CSAIL. The senior author on the paper was Brian Williams, a professor of aeronautics and astronautics and a member of CSAIL. 

The post MIT researchers help robots navigate uncertain environments appeared first on The Robot Report.

Efforts to deliver the first drone-based, mobile quantum network

Hacked bank and Twitter accounts, malicious power outages and attempts to tamper with medical records threaten the security of the nation's health, money, energy, society and infrastructure. Harnessing the laws of nature—namely quantum physics—a cutting-edge teleportation technology is taking cybersecurity to new, "unhackable" heights using minuscule particles of light, or "beams."

Intuition Robotics partners with NY State Office for the Aging

ElliQ

ElliQ is a robotic companion designed specifically to fit the needs of older adults. | Source: Intuition Robotics

Intuition Robotics has partnered with the New York State Office for the Aging (NYSOFA) to put ElliQ, a robot designed to help older adults gain independence, in the homes of more than 800 older adults. 

ElliQ is a robotic companion designed specifically for older adults who suffer from loneliness or social isolation. Its technology combines psychology, behavioral sciences and advanced cognitive artificial intelligence capabilities to provide proactive care. 

“ElliQ was really initially designed to help with companionship and loneliness,” Grace Andruszkiewicz, the director of marketing at Intuition Robotics, said. “But along the way, a lot of other sort of helpful features have been built into the product. Lots of communication features to help people stay connected to their loved ones, and health and wellness, so they can achieve their goals.” 

According to Intuition, ElliQ can proactively suggest activities or start conversations. The more users interact with ElliQ, the more the robot is able to build context and inform follow-up conversations, resulting in a more natural relationship than with other robotic companions. 

“Throughout the day, she’ll sort of chime in, and this is really where one of the key differentiators is. Where a lot of other voice activated devices require the individual to prompt the device using a call word, ElliQ can start to understand when might be a good time to interact and engage with the individual,” Andruszkiewicz said. 

On average, users interact with ElliQ 20 times a day. ElliQ users meet with nurses on Intuition Robotics’ team who help them set goals so the robot knows how to assist each user. The robot can remind users to take their medications and to complete health measurements, like blood pressure readings. 

Intuition Robotics’ partnership with SilverSneakers, a fitness and wellness program geared towards seniors, means ElliQ is equipped with a library of fitness videos to help users stay active. 

The company’s partnership with NYSOFA comes just months after the robot made its commercial debut. Intuition Robotics doesn’t have a strict timeline for the program, but initially each county in the state will be able to opt into the program. To qualify for the program, users must be residents of the state of New York and speak English, as ElliQ doesn’t know any other languages yet. 

“Despite misconceptions and generalizations, older adults embrace new technology, especially when they see it is designed by older adults to meet their needs,” Greg Olsen, director of NYSOFA, said. “For those who experience some form of isolation and wish to age in place, ElliQ is a powerful complement to traditional forms of social interaction and support from professional or family caregivers.”

Intuition Robotics is continuing to develop ElliQ with feedback from users. The company is currently working on updating its caregiver-facing application to allow caregivers to more easily interact with family members who use ElliQ, according to Andruszkiewicz. 

The post Intuition Robotics partners with NY State Office for the Aging appeared first on The Robot Report.

DeepMind’s open-source version of MuJoCo available on GitHub

Shadow hand MuJoCo

The Shadow hand from OpenAI was built in part using the MuJoCo physics engine. | Credit: OpenAI

DeepMind, an AI research lab and subsidiary of Alphabet, in October 2021 acquired the MuJoCo physics engine for robotics research and development. The plan was to open-source the simulator and maintain it as a free, open-source, community-driven project. According to DeepMind, the open sourcing is now complete, and the entire codebase is on GitHub.

MuJoCo, which stands for Multi-Joint Dynamics with Contact, is a physics engine that aims to facilitate R&D in robotics, biomechanics, graphics and animation, and other areas where fast and accurate simulation is needed. MuJoCo can be used to implement model-based computations such as control synthesis, state estimation, system identification, mechanism design, data analysis through inverse dynamics, and parallel sampling for machine learning applications. It can also be used as a more traditional simulator, including for gaming and interactive virtual environments.
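As a small illustration of how models are described for MuJoCo, here is a toy MJCF file (the XML format MuJoCo loads); it is a minimal invented example, not taken from DeepMind's documentation:

```xml
<!-- A toy MJCF model: a ground plane and a sphere that falls under gravity. -->
<mujoco model="toy">
  <option timestep="0.002"/>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body name="ball" pos="0 0 1">
      <freejoint/>
      <geom type="sphere" size="0.1" mass="0.5"/>
    </body>
  </worldbody>
</mujoco>
```

In the official Python bindings, a model like this can be loaded with `mujoco.MjModel.from_xml_string(...)` and stepped with `mujoco.mj_step(...)`.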

DeepMind said the following are some of the features that make MuJoCo attractive for collaboration:

  • Full-featured simulator that can model complex mechanisms
  • Readable, performant, portable code
  • Easily extensible codebase
  • Detailed documentation: both user-facing and code comments

DeepMind added: “We hope that colleagues across academia and the OSS community benefit from this platform and contribute to the codebase, improving research for everyone.”

Here is more from DeepMind:

“As a C library with no dynamic memory allocation, MuJoCo is very fast. Unfortunately, raw physics speed has historically been hindered by Python wrappers, which made batched, multi-threaded operations non-performant due to the presence of the Global Interpreter Lock (GIL) and non-compiled code. In our roadmap below, we address this issue going forward.

“For now, we’d like to share some benchmarking results for two common models. The results were obtained on a standard AMD Ryzen 9 5950X machine, running Windows 10.”
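The GIL bottleneck DeepMind describes is easy to reproduce in a generic (non-MuJoCo) sketch: CPU-bound work in pure-Python threads does not run in parallel, which is why batched physics stepping through a Python wrapper was historically non-performant:

```python
# Generic illustration of the GIL limit (not MuJoCo-specific): four CPU-bound
# tasks run no faster across threads than sequentially, because only one
# thread executes Python bytecode at a time.
import time
from concurrent.futures import ThreadPoolExecutor

def spin(n):
    """Pure-Python CPU-bound work, standing in for a physics step."""
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 2_000_000

t0 = time.perf_counter()
[spin(N) for _ in range(4)]            # sequential baseline
seq = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(4) as ex:
    list(ex.map(spin, [N] * 4))        # threaded, still serialized by the GIL
thr = time.perf_counter() - t0

print(f"sequential {seq:.2f}s, threaded {thr:.2f}s")  # typically similar
```

Moving the hot loop into compiled C code that releases the GIL (or into separate processes) is the usual way around this, which matches the direction of DeepMind's roadmap below.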

As for the near-term roadmap, DeepMind said it will unlock MuJoCo’s speed potential with batched, multi-threaded simulation, support larger scenes with improvements to internal memory management, and introduce a new incremental compiler with better model composability. DeepMind also said it will build out support for better rendering via Unity integration and add native support for physics derivatives, both analytical and finite-differenced.

Before the acquisition, DeepMind used MuJoCo as a simulation platform for various projects, mostly via its dm_control Python stack. It highlighted a few robotics examples, which you can watch via the playlist below.

The post DeepMind’s open-source version of MuJoCo available on GitHub appeared first on The Robot Report.