Augmenting SLAM with deep learning

Some elements of the Spatial AI real-time computation graph. Credit: SLAMcore

Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of a robot’s location within it. SLAM is being gradually developed towards Spatial AI, the common sense spatial reasoning that will enable robots and other artificial devices to operate in general ways in their environments.

This will enable robots to not just localize and build geometric maps, but actually interact intelligently with scenes and objects.

Enabling semantic meaning

A key technology that is helping this progress is deep learning, which has enabled many recent breakthroughs in computer vision and other areas of AI. In the context of Spatial AI, deep learning has most obviously had a big impact on bringing semantic meaning to geometric maps of the world.

Convolutional neural networks (CNNs) trained to semantically segment images or volumes have been used in research systems to label geometric reconstructions in a dense, element-by-element manner. Networks like Mask R-CNN, which detect precise object instances in images, have been demonstrated in systems that reconstruct explicit maps of static or moving 3D objects.
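To make this concrete, here is a minimal sketch of pulling per-instance masks from a single frame with an off-the-shelf, pretrained Mask R-CNN from torchvision. The input file, score threshold, and the fusion step described in the comments are illustrative assumptions, not the specific research systems above.

```python
# A minimal sketch: instance segmentation with a pretrained Mask R-CNN,
# as one might use to attach object labels to a geometric map.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = to_tensor(Image.open("frame.png").convert("RGB"))  # hypothetical input frame
with torch.no_grad():
    prediction = model([image])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

# Keep confident detections; each mask could then be fused, element by
# element, into a dense 3D reconstruction to label the geometry.
keep = prediction["scores"] > 0.7          # illustrative threshold
masks = prediction["masks"][keep]          # (N, 1, H, W) soft masks in [0, 1]
labels = prediction["labels"][keep]        # class index per instance
```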

Deep learning vs. estimation

In these approaches, the divide between deep learning methods for semantics and hand-designed methods for geometrical estimation is clear. More remarkable, at least to those of us from an estimation background, has been the emergence of learning techniques that now offer promising solutions to geometrical estimation problems. Networks can be trained to predict robust frame-to-frame visual odometry, dense optical flow, or depth from a single image.
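As a flavor of what learned geometrical estimation looks like in code, below is a toy, supervised depth-from-a-single-image setup in PyTorch. The tiny encoder-decoder, random stand-in data, and plain L1 loss are all illustrative assumptions; real systems are far larger and are often trained with self-supervision from stereo or video.

```python
# A toy sketch of learned single-image depth: a small encoder-decoder
# regressing a dense, positive depth map under an L1 loss.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Softplus(),
        )

    def forward(self, rgb):                      # rgb: (B, 3, H, W)
        return self.decoder(self.encoder(rgb))   # depth: (B, 1, H, W), positive

net = TinyDepthNet()
optim = torch.optim.Adam(net.parameters(), lr=1e-4)

rgb = torch.rand(4, 3, 64, 64)         # stand-in training batch
gt_depth = torch.rand(4, 1, 64, 64)    # stand-in ground-truth depth

pred = net(rgb)
loss = (pred - gt_depth).abs().mean()  # L1 regression loss
optim.zero_grad()
loss.backward()
optim.step()
```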

When compared to hand-designed methods for the same tasks, these methods are strong on robustness, since they will always make predictions that are similar to real scenarios present in their training data. But designed methods still often have advantages in flexibility in a range of unforeseen scenarios, and in final accuracy due to the use of precise iterative optimization.
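One common way to combine these strengths is to let a learned component supply a coarse initial estimate and a designed, iterative optimizer refine it. The sketch below polishes a camera pose against synthetic 2D-3D correspondences with SciPy's least-squares solver, starting from a perturbed guess that stands in for a network prediction; the intrinsics, correspondences, and small-angle rotation model are illustrative assumptions.

```python
# A sketch of the hybrid pattern: coarse pose guess (as a learned frontend
# might supply), sharpened by iterative optimization of reprojection error.
import numpy as np
from scipy.optimize import least_squares

fx = fy = 500.0            # illustrative camera intrinsics
cx = cy = 320.0

pts3d = np.random.rand(50, 3) * np.array([2, 2, 1]) + np.array([-1, -1, 3])

def project(params, pts):
    rx, ry, rz, tx, ty, tz = params
    # small-angle rotation approximation keeps the sketch short
    R = np.array([[1, -rz, ry],
                  [rz, 1, -rx],
                  [-ry, rx, 1]])
    p = pts @ R.T + np.array([tx, ty, tz])
    return np.stack([fx * p[:, 0] / p[:, 2] + cx,
                     fy * p[:, 1] / p[:, 2] + cy], axis=1)

true_pose = np.array([0.02, -0.01, 0.03, 0.1, -0.05, 0.2])
observed = project(true_pose, pts3d)       # synthetic "measurements"

def residuals(params):
    return (project(params, pts3d) - observed).ravel()

coarse_guess = true_pose + 0.05            # stands in for a network's output
refined = least_squares(residuals, coarse_guess)  # iterative nonlinear least squares
print(refined.x)   # close to true_pose, sharpened well beyond the rough start
```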

The three levels of SLAM, according to SLAMcore. Credit: SLAMcore

The role of modular design

It is clear that Spatial AI will make increasingly strong use of deep learning methods, but an excellent question is whether we will eventually deploy systems where a single deep network trained end to end implements the whole of Spatial AI. While this is possible in principle, we believe that this is a very long-term path and that there is much more potential in the coming years to consider systems with modular combinations of designed and learned techniques.

There is an almost continuous sliding scale of possible ways to formulate such modular systems. The end-to-end learning approach is ‘pure’ in the sense that it makes minimum assumptions about the representation and computation that the system needs to complete its tasks. Deep learning is free to discover such representations as it sees fit. Every piece of design which goes into a module of the system or the ways in which modules are connected reduces that freedom. However, modular design can make the learning process tractable and flexible, and dramatically reduce the need for training data.
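In software terms, one hedged reading of such a modular system is a set of small, fixed interfaces behind which any individual module may be learned or hand-designed. Everything in the sketch below (the names, interfaces, and stubbed modules) is hypothetical, intended only to show how the design choice constrains learning while keeping modules swappable and inspectable.

```python
# A hypothetical modular pipeline: a learned module and a designed module
# meet behind an agreed interface.
from typing import Protocol
import numpy as np

class DepthModule(Protocol):
    def predict(self, rgb: np.ndarray) -> np.ndarray: ...

class LearnedDepth:
    """Wraps a trained network (omitted here) behind the agreed interface."""
    def predict(self, rgb):
        return np.ones(rgb.shape[:2])  # placeholder for a network forward pass

class DesignedFusion:
    """Hand-designed: fuses incoming depth maps into a running-average map."""
    def __init__(self):
        self.map, self.count = None, 0
    def integrate(self, depth):
        self.count += 1
        self.map = depth if self.map is None else self.map + (depth - self.map) / self.count

fusion = DesignedFusion()
depth_module: DepthModule = LearnedDepth()
for frame in [np.zeros((480, 640, 3)) for _ in range(3)]:  # stand-in frames
    fusion.integrate(depth_module.predict(frame))
# Because the interface is designed, either module can be swapped, inspected,
# or retrained without touching the rest of the system.
```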

Building in the right assumptions

There are certain characteristics of the real world in which Spatial AI systems must work that seem so elementary that it is unnecessary to spend training capacity on learning them. These could include:

  • The basic geometry of 3D transformation as a camera sees the world from different views (hard-coded in the sketch after this list)
  • The physics of how objects fall and interact
  • The simple fact that the natural world is made up of separable objects at all
  • The fact that environments are made up of many objects in configurations with a typical range of variability over time, which can be estimated and mapped
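For the first item on that list, here is a sketch of what building the assumption in means: the rigid-body transform and pinhole projection are written down directly, leaving nothing for a network to learn. The intrinsics, pose, and point are illustrative values.

```python
# Multi-view geometry "built in" rather than learned: an SE(3) transform
# followed by pinhole projection, hard-coded exactly.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],   # intrinsics: focal lengths, principal point
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def transform_and_project(point_w, R, t):
    """Map a world point into a camera at pose (R, t), then project it."""
    point_c = R @ point_w + t          # rigid-body transform into camera frame
    u, v, w = K @ point_c              # pinhole projection, homogeneous pixels
    return np.array([u / w, v / w])

theta = np.deg2rad(10.0)               # camera yawed 10 degrees
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.0, 0.0, 0.5])          # and shifted half a metre forward

print(transform_and_project(np.array([0.2, -0.1, 3.0]), R, t))
```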

By building these and other assumptions into modular estimation frameworks that still have significant deep learning capacity in the areas of both semantics and geometrical estimation, we believe that we can make rapid progress towards highly capable and adaptable Spatial AI systems. Modular systems have the further key advantage over purely learned methods that they can be inspected, debugged and controlled by their human users, which is key to the reliability and safety of products.

We still believe fundamentally in Spatial AI as a SLAM problem, and that a recognizable mapping capability will be the key to enabling robots and other intelligent devices to perform complicated, multi-stage tasks in their environments.

For those who want to read more about this area, please see my paper “FutureMapping: The Computational Structure of Spatial AI Systems.”

Andrew Davison, SLAMcore

About the Author

Professor Andrew Davison is a co-founder of SLAMcore, a London-based company that is on a mission to make spatial AI accessible to all. SLAMcore develops algorithms that help robots and drones understand where they are and what’s around them – in an affordable way.

Davison is Professor of Robot Vision at the Department of Computing, Imperial College London, and leads Imperial’s Robot Vision Research Group. He has spent 20 years conducting pioneering research in visual SLAM, with a particular emphasis on methods that work in real time with commodity cameras.

He has developed and collaborated on breakthrough SLAM systems including MonoSLAM and KinectFusion, and his research contributions have over 15,000 academic citations. He also has extensive experience of collaborating with industry on the application of SLAM methods to real products.

Kollmorgen to present advanced motion control for commercial robots at Robotics Summit & Expo

Kollmorgen will exhibit its newest motion-centric automation solutions for designers and manufacturers of commercial robots and intelligent systems at the Robotics Summit & Expo 2019. Visitors are invited to Booth 202 to see and participate in a variety of product exhibits and exciting live demos.

Demos and other exhibits have been designed to show how Kollmorgen’s next-generation technology helps robot designers and manufacturers increase efficiency, uptime, throughput, and machine life.

Demonstrations

The AKM2G Servo Motor delivers the best power and torque density on the market, offering OEMs a way to increase performance and speed while cutting power consumption and costs. Highly configurable, with six frame sizes, each with up to five stack lengths, and a variety of selectable options (such as feedback, mounting, and performance capabilities), the AKM2G can easily be dropped into existing designs.

Robotic Gearmotor Demo: Discover how Kollmorgen’s award-winning frameless motor solutions integrate seamlessly with strain wave gears, feedback devices, and servo drives to form a lightweight and compact robotic joint solution. Kollmorgen’s standard and custom frameless motor solutions enable smaller, lighter, and faster robots.

AGVs and Mobile Robots: Show attendees can learn about Kollmorgen’s flexible, scalable vehicle control solutions for material handling in smart factories and warehouses with AGVs and mobile robots.

Panel discussion

Kollmorgen's Tom Wood will speak at the Robotics Summit & Expo

Tom Wood, Kollmorgen

Tom Wood, frameless motor product specialist at Kollmorgen, will participate in a session at 3:00 p.m. on Wednesday, June 5, in the “Technology, Tools, and Platforms” track at the Robotics Summit & Expo. He will be part of a panel on “Motion Control and Robotics Opportunities,” which will discuss new and improved technologies. The panel will examine how these motion-control technologies are leading to new robotics capabilities, new applications, and entry into new markets.

Register now for the Robotics Summit & Expo, which will be at Boston’s Seaport World Trade Center on June 5-6.

About Kollmorgen

Since its founding in 1916, Kollmorgen’s innovative solutions have brought big ideas to life, kept the world safer, and improved people’s lives. Today, its world-class knowledge of motion systems and components, industry-leading quality, and deep expertise in linking and integrating standard and custom products continually deliver breakthrough motion solutions that are unmatched in performance, reliability, and ease of use. This gives machine builders around the world an irrefutable marketplace advantage and provides their customers with ultimate peace of mind.

For more information about Kollmorgen technologies, please visit www.kollmorgen.com or call 1-540-633-3545.

Stanford Doggo robot acrobatically traverses tough terrain

Putting their own twist on robots that amble through complicated landscapes, the Stanford Student Robotics club’s Extreme Mobility team at Stanford University has developed a four-legged robot that is not only capable of performing acrobatic tricks and traversing challenging terrain, but is also designed with reproducibility in mind. Anyone who wants their own version of the robot, dubbed Stanford Doggo, can consult comprehensive plans, code and a supply list that the students have made freely available online.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Nathan Kau, ’20, a mechanical engineering major and lead for Extreme Mobility. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

Whereas other similar robots can cost tens or hundreds of thousands of dollars and require customized parts, the Extreme Mobility students estimate the cost of Stanford Doggo at less than $3,000 — including manufacturing and shipping costs. Nearly all the components can be bought as-is online. The Stanford students said they hope the accessibility of these resources inspires a community of Stanford Doggo makers and researchers who develop innovative and meaningful spinoffs from their work.

Stanford Doggo can already walk, trot, dance, hop, jump, and perform the occasional backflip. The students are working on a larger version of their creation — which is currently about the size of a beagle — but they will take a short break to present Stanford Doggo at the International Conference on Robotics and Automation (ICRA) on May 21 in Montreal.

A hop, a jump and a backflip

In order to make Stanford Doggo replicable, the students built it from scratch. This meant spending a lot of time researching easily attainable supplies and testing each part as they made it, without relying on simulations.

“It’s been about two years since we first had the idea to make a quadruped. We’ve definitely made several prototypes before we actually started working on this iteration of the dog,” said Natalie Ferrante, Class of 2019, a mechanical engineering co-terminal student and Extreme Mobility Team member. “It was very exciting the first time we got him to walk.”

Stanford Doggo’s first steps were admittedly toddling, but now the robot can maintain a consistent gait and desired trajectory, even as it encounters different terrains. It does this with the help of motors that sense external forces on the robot and determine how much force and torque each leg should apply in response. The motors recompute these commands 8,000 times per second and are essential to the robot’s signature dance: a bouncy boogie that hides the fact that it has no springs.

Instead, the motors act like a system of virtual springs, smoothly but perkily rebounding the robot into proper form whenever they sense it’s out of position.
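A minimal sketch of that virtual-spring idea, assuming a hypothetical two-joint leg: each motor torque is a spring-damper response to the deviation from a desired posture, recomputed every control tick. The gains and state values are made up for illustration and are not Doggo's actual controller.

```python
# Virtual spring: torque proportional to posture error, plus damping,
# recomputed thousands of times per second in place of a physical spring.
import numpy as np

KP = 8.0    # virtual spring stiffness (N*m/rad), illustrative
KD = 0.2    # virtual damping (N*m*s/rad), illustrative

def virtual_spring_torque(q, q_desired, q_dot):
    """Spring-damper control law evaluated once per control tick."""
    return -KP * (q - q_desired) - KD * q_dot

# One tick for a hypothetical 2-joint leg pushed off its set point:
q = np.array([0.35, -0.70])            # measured joint angles (rad)
q_desired = np.array([0.30, -0.60])    # commanded posture
q_dot = np.array([0.50, -0.40])        # measured joint velocities (rad/s)
print(virtual_spring_torque(q, q_desired, q_dot))  # torques pushing back to posture
```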

Among the skills and tricks the team added to the robot’s repertoire, the students were exceptionally surprised at its jumping prowess. Running Stanford Doggo through its paces one (very) early morning in the lab, the team realized it was effortlessly popping up 2 feet in the air. By pushing the limits of the robot’s software, Stanford Doggo was able to jump 3, then 3½ feet off the ground.

“This was when we realized that the robot was, in some respects, higher performing than other quadruped robots used in research, even though it was really low cost,” recalled Kau.

Since then, the students have taught Stanford Doggo to do a backflip – but always on padding to allow for rapid trial-and-error experimentation.

Stanford students have developed Doggo, a relatively low-cost four-legged robot that can trot, jump and flip. (Image credit: Kurt Hickman)

What will Stanford Doggo do next?

If these students have it their way, the future of Stanford Doggo is in the hands of the masses.

“We’re hoping to provide a baseline system that anyone could build,” said Patrick Slade, graduate student in aeronautics and astronautics and mentor for Extreme Mobility. “Say, for example, you wanted to work on search and rescue; you could outfit it with sensors and write code on top of ours that would let it climb rock piles or excavate through caves. Or maybe it’s picking up stuff with an arm or carrying a package.”

That’s not to say they aren’t continuing their own work. Extreme Mobility is collaborating with the Robotic Exploration Lab of Zachary Manchester, assistant professor of aeronautics and astronautics at Stanford, to test new control systems on a second Stanford Doggo. The team has also finished constructing a robot twice the size of Stanford Doggo that can carry about 6 kilograms of equipment. Its name is Stanford Woofer.

Note: This article is republished from the Stanford University News Service.

ADVANCED Motion Controls debuts FlexPro digital servo drives


The FE060-25-EM is the first servo drive of the new FlexPro digital drive family from ADVANCED Motion Controls (AMC). Designed with compact form and power density in mind, the micro-sized FE060-25-EM can outperform larger-sized digital servo drives and still be integrated into tight spaces.

At just 1.5 x 1 x 0.6 in. (38 x 25 x 16 mm) in size, the footprint of the drive is approximately the same as two standard postage stamps. In other words, four of these drives can fit on a standard business card. Even with its small size, the FE060-25-EM can supply brushed, brushless, stepper, and linear servo motors with up to 25 A continuous current and 50 A peak current.

AMC FE060-25-EM Servo Drive

Here are some of the features of the FE060-25-EM servo drive:

  • 10 to 55 Vdc supply voltage
  • Highest power density servo drive from AMC to date
  • EtherCAT communication
  • Incremental encoder and BiSS C-mode feedback
  • Torque, velocity, and position operating modes
  • Configuration and full loop tuning

IMPACT architecture

IMPACT (Integrated Motion Platform And Control Technology) is the architecture that makes AMC’s FlexPro drives possible. Stacking circuit boards, with creative selection and placement of high-power components, allows for much higher power density than AMC’s previous servo drives.

A developer version is available for proof-of-concept and testing purposes – part number FD060-25-EM. It comes with an FE060-25-EM soldered to a larger board equipped with various connectors for simplified interfacing.

The small size of the FE060-25-EM makes it well-suited for cobots, AGVs, lab and warehouse automation, military equipment, and any other integrated design.

Hank robot from Cambridge Consultants offers sensitive grip to industrial challenges

Robotics developers have taken a variety of approaches to try to equal human dexterity. Cambridge Consultants today unveiled Hank, a robot with flexible robotic fingers inspired by the human hand. Hank uses a pioneering sensory system embedded in its pneumatic fingers, providing a sophisticated sense of touch and slip. It is intended to emulate the human ability to hold and grip delicate objects using just the right amount of pressure.

Cambridge Consultants stated that Hank could have valuable applications in agriculture and warehouse automation, where the ability to pick small, irregular, and delicate items has been a “grand challenge” for those industries.

Picking under pressure

While warehouse automation has taken great strides in the past decade, today’s robots cannot emulate human dexterity at the point of picking diverse individual items from larger containers, said Cambridge Consultants. E‑commerce giants are under pressure to deliver more quickly and at a cheaper price, but still require human operators for tasks that can be both difficult and tedious.

“The logistics industry relies heavily on human labor to perform warehouse picking and packing and has to deal with issues of staff retention and shortages,” said Bruce Ackman, logistics commercial lead at Cambridge Consultants. “Automation of this part of the logistics chain lags behind the large-scale automation seen elsewhere.”

Giving a robot additional human-like senses lets it feel and orient its grip around an object, applying just enough force, while being able to adjust or abandon the grasp if the object slips. Other robots with articulated arms used in warehouse automation tend to require complex grasping algorithms, costly sensing devices, and vision sensors to accurately position the end effector (fingers) and grasp an object.

Hank uses sensors for a soft touch

Hank uses soft robotic fingers controlled by airflows that can flex the finger and apply force. The fingers are controlled individually in response to the touch sensors. This means that the end effector does not require millimeter-accurate positioning to grasp an object. Like human fingers, they close until they “feel” the object, said Cambridge Consultants.

With the ability to locate an object, adjust overall system position and then to grasp that object, Hank can apply increased force if a slip is detected and generate instant awareness of a mishandled pick if the object is dropped.
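In control terms, that behavior reduces to a two-phase loop: close each finger until its touch sensor registers contact, then hold, tightening whenever slip is detected. Below is a hedged sketch of that logic; the Finger class, sensor readings, and thresholds are hypothetical stand-ins rather than Cambridge Consultants' actual software.

```python
# A hypothetical close-until-touch, tighten-on-slip grasp loop.
from dataclasses import dataclass
import random

TOUCH_THRESHOLD = 0.2   # normalized touch reading meaning "contact" (illustrative)
PRESSURE_STEP = 0.05    # airflow pressure increment per tick (illustrative)

@dataclass
class Finger:
    pressure: float = 0.0
    def touch(self) -> float:
        return min(1.0, self.pressure + random.uniform(-0.05, 0.05))  # fake sensor
    def slipping(self) -> bool:
        return random.random() < 0.1                                  # fake slip event

def grasp(fingers):
    # Phase 1: each finger closes independently until it feels the object,
    # so no millimeter-accurate positioning of the end effector is needed.
    for f in fingers:
        while f.touch() < TOUCH_THRESHOLD:
            f.pressure += PRESSURE_STEP
    # Phase 2: hold, tightening any finger that detects slip.
    for _ in range(100):  # control ticks
        for f in fingers:
            if f.slipping():
                f.pressure += PRESSURE_STEP

fingers = [Finger() for _ in range(3)]
grasp(fingers)
print([round(f.pressure, 2) for f in fingers])
```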

Cambridge Consultants claimed that Hank moves a step beyond legacy approaches to this challenge, which tend to rely on pincers and suction appendages to grasp items, limiting the number and type of objects they can pick and pack.

“Hank’s world-leading sensory system is a game changer for the logistics industry, making actions such as robotic bin picking and end-to-end automated order fulfillment possible,” said Ackman. “Adding a sense of touch and slip, generated by a single, low-cost sensor, means that Hank’s fingers could bring new efficiencies to giant distribution centers.”

Molded from silicone, Hank’s fingers are hollow and its novel sensors are embedded during molding, with an air chamber running up the center. The finger surface is flexible, food-safe, and cleanable. As a low-cost consumable, the fingers can simply be replaced if they become damaged or worn.

With offices in Cambridge in the U.K.; Boston, Mass.; and Singapore, Cambridge Consultants develops breakthrough products, creates and licenses intellectual property, and provides business and technology consulting services for clients worldwide. It is part of Altran, a global leader in engineering and research and development services. For more than 35 years, Altran has provided design expertise in the automotive, aerospace, defense, industrial, and electronics sectors, among others.

The Robot Report May 2019 issue on mobile robotics

We hope you enjoy the latest edition of The Robot Report, a special print section dedicated to mobile robotics. It appeared in the May 2019 issue of Design World, our sister publication and the flagship of WTWH Media. Here is a breakdown of the mobile robotics topics covered inside:

Robotics Summit 2019 to take a closer look at mobile robots
Mobile robot engineers and users can learn from technology and industry leaders at the Robotics Summit & Expo, which runs June 5-6 in Boston.

What Amazon’s acquisition of Canvas Technology means
Amazon’s acquisition demonstrates the importance of safe navigation for developers and users of supply chain automation.

Augmenting SLAM with deep learning
SLAM is being gradually developed towards Spatial AI, the common sense spatial reasoning that will enable robots and other devices to operate in general ways in their environments.

Mobile robot trends from Automate/ProMat
At Automate/ProMat 2019 in Chicago, robotics developers checked out the latest products for manufacturing and logistics. Here are some robotics trends we saw at the show.

Expert roundtable: mobile robotics challenges and opportunities
A3’s Jeff Burnstein chats with leading autonomous mobile robot providers about market growth, technical challenges, and opportunities.

Integrating AI with fleet management software advances AMR collaboration
Data from new sensors, in combination with AI and machine learning, is making autonomous mobile robots, or AMRs, more flexible and safer around humans.

How 5G will impact mobile robots
Leading robotics companies share their opinions about how 5G will impact autonomous mobile robots.

If you are interested in contributing content to an upcoming special issue of The Robot Report, please reach out to me at scrowe@wtwhmedia.com or Eugene Demaitre at edemaitre@wtwhmedia.com. If you are interested in sponsorship opportunities of upcoming special issues, please reach out to Courtney Seel at cseel@wtwhmedia.com.

Techmetics introduces robot fleet to U.S. hotels and hospitals

Fleets of autonomous mobile robots have been growing in warehouses and the service industry. Singapore-based Techmetics has entered the U.S. market with ambitions to supply multiple markets, which it already does overseas.

The company last month launched two new lines of autonomous mobile robots. The Techi Butler is designed to serve hotel guests or hospital patients by interacting with them via a touchscreen or smartphone. It can deliver packages, room-service orders, and linens and towels.

The Techi Cart is intended to serve back-of-house services such as laundry rooms, kitchens, and housekeeping departments.

“Techmetics serves 10 different applications, including manufacturing, casinos, and small and midsize businesses,” said Mathan Muthupillai, founder and CEO of Techmetics. “We’re starting with just two in the U.S. — hospitality and healthcare.”

Building a base

Muthupillai founded Techmetics in Singapore in 2012. “We spent the first three years on research and development,” he told The Robot Report. “By the end of 2014, we started sending out solutions.”

“The R&D team didn’t just start with product development,” recalled Muthupillai. “We started with finding clients first, identified their pain points and expectations, and got feedback on what they needed.”

“A lot of other companies make a robotic base, but then they have to build a payload solution,” he said. “We started with a good robot base that we found and added our body, software layer, and interfaces. We didn’t want to build autonomous navigation from scratch.”

“Now, we’re just getting components — lasers, sensors, motors — and building everything ourselves,” he explained. “The navigation and flow-management software are created in-house. We’ve created our own proprietary software.”

“We have a range of products, all of which use 2-D SLAM [simultaneous localization and mapping], autonomous navigation, and many safety sensors,” Muthupillai added. “They come with three lasers — two vertical and one horizontal for path planning. We’re working on a 3-D-based navigation solution.”

“Our robots are based on ROS [the Robot Operating System],” said Muthupillai. “We’ve created a unique solution that comes with third-party interfaces.”

Techmetics offers multiple robot models for different industries.

Source: Techmetics

Techmetics payloads vary

The payload capacity of Techmetics’ robots depends on the application and accessories and ranges from about 265 to 550 lb. (120 to 250 kg).

“The payload and software are based on the behavior patterns in an industry,” said Muthupillai. “In manufacturing or warehousing, people are used to working around robots, but in the service sector, there are new people all the time. The robot must respond to them — they may stay in its path or try to stop it.”

“When we started this company, there were few mobile robots for the manufacturing industry. They looked industrial and had relatively few safety features because they weren’t near people,” he said. “We changed the form factor for hospitality to be good-looking and safer.”

“When we talk with hotels about the Butler robots, they needed something that could go to multiple rooms,” Muthupillai explained. “Usually, staffers take two to three items in a single trip, so if a robot went to only one room and then returned, that would be a waste of time. Our robots have three compartment levels based on this feedback.”

Elevators posed a challenge for the Techi Butler and Techi Cart — not just for interoperability, but also for human-machine interaction, he said.

“Again, people working with robots didn’t share elevators with robots, but in hospitals and hotels, the robot needs to complete its job alongside people,” Muthupillai said. “After three years, we’re still modifying or adding functionalities, and the robots can take an elevator or go across to different buildings.”

“We’re not currently focusing on the supply chain industry, but we will license and launch the base into the market so that third parties can create their own solutions,” he said.

Techi Cart transports linens and towels in a hotel or hospital. Source: Techmetics

Differentiators for Techi Butler and Cart

“We provide 10 robot models for four industries — no single company is a competitor for all our markets,” said Muthupillai. “We have three key differentiators.”

“First, customers can engage one vendor for multiple needs, and all of our robots can interact with one another,” he said. “Second, we talk with our clients and are always open to customization — for example, about compartment size — that others can’t do.”

“Third, we work across industries and can share our advantages across them,” Muthupillai claimed. “Since we already work with the healthcare industry, we already comply with safety and other regulations.”

“In hospitals or hotels, it’s not just about delivering a product from one point to another,” he said. “We’re adding camera and voice-recognition capabilities. If a robot sees a person who’s lost, it can help them.”

Distribution and expansion

Techmetics’ mobile robots are manufactured in Thailand. According to Muthupillai, 80% of its robots are deployed in hotels and hospitals, and 20% are in manufacturing. The company already has distributors in Australia, Taiwan, and Thailand, and it is leveraging existing international clients for its expansion.

“We have many corporate clients in Singapore,” Muthupillai said. “The Las Vegas Sands Singapore has deployed 10 robots, and their headquarters in Las Vegas is considering deploying our products.”

“Also, U.K.-based Yotel has two hotels in Singapore, and its London branch is also interested,” he added. “The Miami Yotel is already using our robots, and soon they will be in San Francisco.”

Techmetics offers three purchase models for customers to choose from. The first is outright purchase, and the second is a two- or three-year lease. “The third model is innovative — they can try the robots for three to six months or one year and then buy,” Muthupillai said.

Muthupillai said he has moved to Techmetics’ branch office in the U.S. to manage its expansion. “We’ll be doing direct marketing in California, and we’re in the process of identifying partners, especially on the East Coast.”

“Only the theme, colors, or logos changed. No special modifications were necessary for the U.S. market,” he said. “We followed safety regulations overseas, and they were aligned with U.S. regulations.”

“We will target the retail industry with a robot concierge, probably by the end of this year,” said Muthupillai. “We will eventually offer all 10 models in the U.S.”

SwRI system tests GPS spoofing of autonomous vehicles


Southwest Research Institute has developed a cybersecurity system to test for vulnerabilities in automated vehicles and other technologies that use GPS receivers for positioning, navigation, and timing.

“This is a legal way for us to improve the cyber resilience of autonomous vehicles by demonstrating a transmission of spoofed or manipulated GPS signals to allow for analysis of system responses,” said Victor Murray, head of SwRI’s Cyber Physical Systems Group in the Intelligent Systems Division.

GPS spoofing is a malicious attack that broadcasts incorrect signals to deceive GPS receivers, while GPS manipulation modifies a real GPS signal. Signals from GPS satellites orbiting the Earth are used to pinpoint the physical locations of GPS receivers embedded in everything from smartphones to ground vehicles and aircraft.

Illustration of a GPS spoofing attack. Credit: Simon Parkinson

SwRI designed the new tool to meet United States federal regulations. Testing for GPS vulnerabilities in a mobile environment had previously been difficult because federal law prohibits over-the-air re-transmission of GPS signals without prior authorization.

SwRI’s spoofing test system places a physical component on or in line with a vehicle’s GPS antenna and a ground station that remotely controls the GPS signal. The system receives the actual GPS signal from an on-vehicle antenna, processes it and inserts a spoofed signal, and then broadcasts the spoofed signal to the GPS receiver on the vehicle. This gives the spoofing system full control over a GPS receiver.

Related: Watch SwRI engineers trick object detection system

While testing the system on an automated vehicle on a test track, engineers were able to alter the vehicle’s course by 10 meters, effectively causing it to drive off the road. The vehicle could also be forced to turn early or late.

“Most automated vehicles will not rely solely on GPS because they use a combination of sensors such as lidar, camera machine vision, GPS and other tools,” Murray said. “However, GPS is a basis for positioning in a lot of systems, so it is important for manufacturers to have the ability to design technology to address vulnerabilities.”

SwRI develops automotive cybersecurity solutions on embedded systems and internet of things (IoT) technology featuring networks and sensors. Connected and autonomous vehicles are vulnerable to cyber threats because they broadcast and receive signals for navigation and positioning.

The new system was developed through SwRI’s internal research program. Future related research will explore the role of GPS spoofing in drones and aircraft.

Editor’s Note: This article was republished from SwRI’s website.

Researchers back Tesla’s non-LiDAR approach to self-driving cars

If you haven’t heard, Tesla CEO Elon Musk is not a LiDAR fan. Most companies working on autonomous vehicles – including Ford, GM Cruise, Uber and Waymo – think LiDAR is an essential part of the sensor suite. But not Tesla. Its vehicles don’t have LiDAR and rely on radar, GPS, maps and other cameras and sensors.

“LiDAR is a fool’s errand,” Musk said at Tesla’s recent Autonomy Day. “Anyone relying on LiDAR is doomed. Doomed! [They are] expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.”

“LiDAR is lame,” Musk added. “They’re gonna dump LiDAR, mark my words. That’s my prediction.”

While not as anti-LiDAR as Musk, it appears researchers at Cornell University agree with his LiDAR-less approach. Using two inexpensive cameras on either side of a vehicle’s windshield, Cornell researchers have discovered they can detect objects with nearly LiDAR’s accuracy and at a fraction of the cost.

The researchers found that analyzing the captured images from a bird’s-eye view, rather than the more traditional frontal view, more than tripled their accuracy, making stereo cameras a viable and low-cost alternative to LiDAR.

Tesla’s senior director of AI, Andrej Karpathy, outlined a nearly identical strategy during Autonomy Day.

“The common belief is that you couldn’t make self-driving cars without LiDARs,” said Kilian Weinberger, associate professor of computer science at Cornell and senior author of the paper Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving. “We’ve shown, at least in principle, that it’s possible.”

LiDAR sensors use lasers to create 3D point maps of their surroundings, measuring objects’ distances via the time light takes to return. Stereo cameras rely on two perspectives to establish depth. But critics say their accuracy in object detection is too low. However, the Cornell researchers say the data they captured from stereo cameras was nearly as precise as LiDAR. The gap in accuracy emerged when the stereo cameras’ data was being analyzed, they say.

“When you have camera images, it’s so, so, so tempting to look at the frontal view, because that’s what the camera sees,” Weinberger says. “But there also lies the problem, because if you see objects from the front then the way they’re processed actually deforms them, and you blur objects into the background and deform their shapes.”

Cornell researchers compare AVOD with LiDAR, pseudo-LiDAR, and frontal-view (stereo). Ground-truth boxes are in red, predicted boxes in green; the observer in the pseudo-LiDAR plots (bottom row) is on the very left side looking to the right. The frontal-view approach (right) even miscalculates the depths of nearby objects and misses far-away objects entirely.

For most self-driving cars, the data captured by cameras or sensors is analyzed using convolutional neural networks (CNNs). The Cornell researchers say CNNs are very good at identifying objects in standard color photographs, but they can distort the 3D information if it’s represented from the front. Again, when Cornell researchers switched the representation from a frontal perspective to a bird’s-eye view, the accuracy more than tripled.
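The underlying change of representation is straightforward to state: given per-pixel depth and the camera intrinsics, back-project every pixel into a 3D point, then re-bin the points over the ground plane. The sketch below does this for a synthetic depth map; the intrinsics and grid parameters are illustrative assumptions, not the paper's configuration.

```python
# Pseudo-LiDAR in miniature: back-project a depth map into 3D points with
# the camera intrinsics, then histogram them into a bird's-eye-view grid.
import numpy as np

fx = fy = 500.0              # illustrative intrinsics
cx, cy = 320.0, 240.0

depth = np.full((480, 640), 10.0)   # stand-in for estimated stereo depth (m)

v, u = np.indices(depth.shape)       # pixel row/column coordinates
z = depth
x = (u - cx) * z / fx                # lateral position
y = (v - cy) * z / fy                # height (image y points down)
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)  # pseudo-LiDAR point cloud

# Bird's-eye view: discard height, count points over (x, z) ground cells.
bev, _, _ = np.histogram2d(points[:, 0], points[:, 2],
                           bins=(200, 200), range=[[-20, 20], [0, 40]])
print(bev.shape)  # a 3D detector can now treat this like a LiDAR occupancy map
```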

“There is a tendency in current practice to feed the data as-is to complex machine learning algorithms under the assumption that these algorithms can always extract the relevant information,” said co-author Bharath Hariharan, assistant professor of computer science. “Our results suggest that this is not necessarily true, and that we should give some thought to how the data is represented.”

“The self-driving car industry has been reluctant to move away from LiDAR, even with the high costs, given its excellent range accuracy – which is essential for safety around the car,” said Mark Campbell, the John A. Mellowes ’60 Professor and S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering and a co-author of the paper. “The dramatic improvement of range detection and accuracy, with the bird’s-eye representation of camera data, has the potential to revolutionize the industry.”

Waypoint Robotics provides mobile manipulation platform to MassTLC 5G Robotics Challenge

CAMBRIDGE, Mass. — To support winners of MassTLC 5G Robotics Challenge sponsored by Verizon and Ericsson, Waypoint Robotics Inc. recently delivered a mobile manipulation platform to the 5G Lab at the Alley here. The challenge winners will use the mobile manipulation system, which includes Waypoint’s flagship Vector autonomous mobile industrial robot and its quick-swappable UR5 payload, to develop robotics solutions bolstered by 5G technology.

This first-of-its-kind challenge asks teams to create 5G-powered robotics technologies in three key areas: industrial automation, collaborative robotics (cobots), and warehouse automation. As part of the program, winners will be able to use the Vector mobile manipulation platform as needed. They will also have access to dedicated 5G networks at Verizon’s 5G laboratories in Cambridge and Waltham, Mass., as well as 5G training and mentorship from Verizon and Ericsson.

“We are excited to support the 5G Robotics Challenge winners who are working to accelerate robotics development with the advantages offered by 5G technology and mobile edge computing,” said Jason Walker, CEO of Merrimack, N.H.-based Waypoint Robotics. “This is a great example of the thriving New England robotics community working together to push forward innovative technologies that will have real benefits for the workforce and the companies they work for.”

Participants in the 5G Robotics Challenge, sponsored by Verizon and Ericsson, can use Waypoint Robotics’ platform. Source: MassTLC

After a strong response to the call for proposals, the winning teams were announced by the Massachusetts Technology Leadership Council (MassTLC) in February. They include university teams from Northeastern University and the University of Massachusetts, Lowell, as well as four start-ups: Ava Robotics, GreenSight Agronomics, RealBotics, and Southie Autonomy.

Winners of the 5G Challenge each received $30,000 in grant funding to create insights, develop new use cases, and conceive innovative products that will advance the robotics industry by leveraging the unique speed, bandwidth and latency benefits of Verizon’s 5G technology and Mobile Edge Compute.

The volume of ideas and creativity proposed during the submittal process underscores a thriving greater Boston robotics community, said MassTLC. Challenges like these, with support from organizations like MassTLC, Verizon, and Ericsson, help fuel this growth.

Waypoint Robotics said it will continue to contribute to the robotics community by offering advanced technology that is easy to use for the industrial workforce and entrepreneurs alike, who are putting real robots to work in the real world.