Researchers back Tesla’s non-LiDAR approach to self-driving cars

If you haven’t heard, Tesla CEO Elon Musk is not a LiDAR fan. Most companies working on autonomous vehicles – including Ford, GM Cruise, Uber and Waymo – think LiDAR is an essential part of the sensor suite. But not Tesla. Its vehicles don’t have LiDAR, relying instead on radar, GPS, maps, cameras, and other sensors.

“LiDAR is a fool’s errand,” Musk said at Tesla’s recent Autonomy Day. “Anyone relying on LiDAR is doomed. Doomed! [They are] expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.”

“LiDAR is lame,” Musk added. “They’re gonna dump LiDAR, mark my words. That’s my prediction.”

While not as anti-LiDAR as Musk, it appears researchers at Cornell University agree with his LiDAR-less approach. Using two inexpensive cameras on either side of a vehicle’s windshield, Cornell researchers have discovered they can detect objects with nearly LiDAR’s accuracy and at a fraction of the cost.

The researchers found that analyzing the captured images from a bird’s-eye view, rather than the more traditional frontal view, more than tripled their accuracy, making stereo cameras a viable and low-cost alternative to LiDAR.

Tesla’s Sr. Director of AI Andrej Karpathy outlined a nearly identical strategy during Autonomy Day.

“The common belief is that you couldn’t make self-driving cars without LiDARs,” said Kilian Weinberger, associate professor of computer science at Cornell and senior author of the paper Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving. “We’ve shown, at least in principle, that it’s possible.”

LiDAR sensors use lasers to create 3D point maps of their surroundings, measuring objects’ distances via the speed of light. Stereo cameras rely on two perspectives to establish depth, and critics have said their accuracy in object detection is too low to be useful. However, the Cornell researchers say the data they captured from stereo cameras was nearly as precise as LiDAR’s. The gap in accuracy emerged when the stereo cameras’ data was being analyzed, they say.
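For readers who want to see the geometry behind stereo depth, here is a minimal illustrative sketch (not the Cornell team’s code): depth follows from triangulation as focal length times camera baseline divided by pixel disparity. The focal length and baseline below are placeholder values loosely modeled on a KITTI-style stereo rig, not figures from the paper.

```python
import numpy as np

# Stereo triangulation: Z = f * B / d, where f is the focal length in pixels,
# B is the baseline between the two cameras in meters, and d is the disparity
# (horizontal pixel shift of the same point between the left and right images).
def disparity_to_depth(disparity, focal_px, baseline_m):
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)   # zero disparity -> effectively infinitely far
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative values only: ~721 px focal length, ~0.54 m baseline.
print(disparity_to_depth([[40.0, 10.0, 2.0]], focal_px=721.0, baseline_m=0.54))
# ~[[  9.7  38.9  194.7]] meters
```

Because depth error grows rapidly as disparity shrinks, distant objects are the hardest case for stereo systems, which is part of why stereo-based detection has historically trailed LiDAR.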

“When you have camera images, it’s so, so, so tempting to look at the frontal view, because that’s what the camera sees,” Weinberger says. “But there also lies the problem, because if you see objects from the front then the way they’re processed actually deforms them, and you blur objects into the background and deform their shapes.”

Cornell researchers compare AVOD with LiDAR, pseudo-LiDAR, and frontal-view (stereo). Ground-truth boxes are in red, predicted boxes in green; the observer in the pseudo-LiDAR plots (bottom row) is on the very left side looking to the right. The frontal-view approach (right) even miscalculates the depths of nearby objects and misses far-away objects entirely.

For most self-driving cars, the data captured by cameras or sensors is analyzed using convolutional neural networks (CNNs). The Cornell researchers say CNNs are very good at identifying objects in standard color photographs, but the networks can distort the 3D information if it is represented from a frontal perspective. Again, when the Cornell researchers switched the representation from a frontal perspective to a bird’s-eye view, the accuracy more than tripled.
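To make the representation argument concrete, here is an illustrative sketch (not the researchers’ implementation) of the pseudo-LiDAR idea: back-project each pixel’s estimated depth into a 3D point cloud using assumed camera intrinsics, then look at that cloud from above instead of from the camera’s frontal view. The intrinsics and the toy depth map below are placeholders, not values from the paper.

```python
import numpy as np

# Back-project a per-pixel depth map into a 3D point cloud ("pseudo-LiDAR").
# fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx        # lateral (right)
    y = (v - cy) * depth / fy        # vertical (down)
    z = depth                        # forward, along the camera axis
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Bird's-eye view: keep the lateral and forward axes and drop the vertical one,
# so objects are laid out on the ground plane instead of the image plane.
def birds_eye_view(points):
    return points[:, [0, 2]]

depth = np.random.uniform(5.0, 50.0, size=(4, 6))    # toy depth map in meters
cloud = depth_to_point_cloud(depth, fx=721.0, fy=721.0, cx=3.0, cy=2.0)
print(birds_eye_view(cloud).shape)   # (24, 2): one (x, z) ground-plane point per pixel
```

In a top-down grid an object’s footprint does not shrink with distance the way it does in the image plane, which is consistent with Weinberger’s point about frontal processing deforming objects and blurring them into the background.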

“There is a tendency in current practice to feed the data as-is to complex machine learning algorithms under the assumption that these algorithms can always extract the relevant information,” said co-author Bharath Hariharan, assistant professor of computer science. “Our results suggest that this is not necessarily true, and that we should give some thought to how the data is represented.”

“The self-driving car industry has been reluctant to move away from LiDAR, even with the high costs, given its excellent range accuracy – which is essential for safety around the car,” said Mark Campbell, the John A. Mellowes ’60 Professor and S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering and a co-author of the paper. “The dramatic improvement of range detection and accuracy, with the bird’s-eye representation of camera data, has the potential to revolutionize the industry.”

Waypoint Robotics provides mobile manipulation platform to MassTLC 5G Robotics Challenge

CAMBRIDGE, Mass. — To support winners of the MassTLC 5G Robotics Challenge, sponsored by Verizon and Ericsson, Waypoint Robotics Inc. recently delivered a mobile manipulation platform to the 5G Lab at the Alley here. The challenge winners will use the mobile manipulation system, which includes Waypoint’s flagship Vector autonomous mobile industrial robot and its quick-swappable UR5 payload, to develop robotics solutions bolstered by 5G technology.

This first-of-its-kind challenge asks teams to create 5G-powered robotics technologies in three key areas: industrial automation, collaborative robotics (cobots), and warehouse automation. As part of the program, winners will be able to use the Vector mobile manipulation platform as needed. They will also have access to dedicated 5G networks at Verizon’s 5G laboratories in Cambridge and Waltham, Mass., as well as 5G training and mentorship from Verizon and Ericsson.

“We are excited to support the 5G Robotics Challenge winners who are working to accelerate robotics development with the advantages offered by 5G technology and mobile edge computing,” said Jason Walker, CEO of Merrimack, N.H.-based Waypoint Robotics. “This is a great example of the thriving New England robotics community working together to push forward innovative technologies that will have real benefits for the workforce and the companies they work for.”


Participants in the 5G Robotics Challenge, sponsored by Verizon and Ericsson, can use Waypoint Robotics’ platform. Source: MassTLC

After a strong response to the call for proposals, the winning teams were announced by the Massachusetts Technology Leadership Council (MassTLC) in February. They include university teams from Northeastern University and the University of Massachusetts, Lowell, as well as four start-ups: Ava Robotics, GreenSight Agronomics, RealBotics, and Southie Autonomy.

Winners of the 5G Challenge each received $30,000 in grant funding to create insights, develop new use cases, and conceive innovative products that will advance the robotics industry by leveraging the unique speed, bandwidth and latency benefits of Verizon’s 5G technology and Mobile Edge Compute.

The volume of ideas and creativity in the proposals underscores a thriving Greater Boston robotics community, MassTLC said. Challenges like this one, with support from organizations such as MassTLC, Verizon, and Ericsson, help fuel that growth.

Waypoint Robotics said it will continue to contribute to the robotics community by offering advanced technology that is easy to use for the industrial workforce and entrepreneurs alike, both of whom are putting real robots to work in the real world.