Build better robots by listening to customer backlash

In the wake of this week’s closure of Apple’s autonomous car division (Project Titan), one has to ask whether Steve Jobs’ axiom still holds true. “Some people say, ‘Give the customers what they want.’ But that’s not my approach. Our job is to figure out what they’re going to want before they do,” declared Jobs, who continued with an analogy: “I think Henry Ford once said, ‘If I’d asked customers what they wanted, they would have told me, “a faster horse!”’” Titan joins a growing graveyard of autonomous innovations, filled with the tombstones of Baxter, Jibo, Kuri and many broken quadcopters. If anything holds true, it is that not every founder is a Steve Jobs or a Henry Ford, and listening to public backlash could be a bellwether for success.

Adam Jonas of Morgan Stanley announced on Jan. 9, 2019, from the Consumer Electronics Show (CES) floor, “It’s official. AVs are overhyped. Not that the safety, economic, and efficiency benefits of robotaxis aren’t valid and noble. They are. It’s the timing… the telemetry of adoption for L5 cars without safety drivers expected by many investors may be too aggressive by a decade… possibly decades.”

The timing sentiment is probably best echoed by the backlash from the inhabitants of Chandler, Arizona, who have been protesting vocally against Waymo’s self-driving trials on their streets, even resorting to violence. The rancor came to a head in August, when a 69-year-old local pointed his pistol at a robocar (and its human safety driver).

In a profile of the Arizona beta trial, The New York Times interviewed some of the loudest opponents of Waymo in the Phoenix suburb. Erik and Elizabeth O’Polka expressed frustration with their elected leaders for turning their neighbors and their children into guinea pigs for artificial intelligence.

Elizabeth was adamant: “They didn’t ask us if we wanted to be part of their beta test.” Her husband strongly agreed: “They said they need real-world examples, but I don’t want to be their real-world mistake.” The couple has been warned several times by the Chandler police to stop attempting to run Waymo cars off the road. Elizabeth confessed to the Times that her husband “finds it entertaining to brake hard” in front of the self-driving vans, and that she herself “may have forced them to pull over” so she could yell at them to get out of their neighborhood. The reporter revealed that tensions started to boil “when their 10-year-old son was nearly hit by one of the vehicles while he was playing in a nearby cul-de-sac.”

Rethink's Baxter robot was the subject of a user backlash because of design limitations.

The deliberate sabotage by the O’Polkas could be indicative of the attitudes of millions of citizens who feel ignored by the speed of innovation. Deployments that remain oblivious to this sentiment, relying solely on the excitement of investors and insiders, ultimately face backlash as customers flock to competitors.

In the cobot world, the early battle between Rethink Robotics and Universal Robots (UR) is probably one of the most high-profile examples of tone-deaf invention by engineers. Rethink’s eventual demise was a classic case of form over function, with a lot of hype sprinkled on top.

Rodney Brooks’ collaborative robotics enterprise raised close to $150 million over its decade-long existence. The startup rode the coattails of its famous co-founder, often referred to as the godfather of robotics, before ever delivering a product.

Dedicated Rethink distributor Dan O’Brien recalled, “I’ve never seen a product get so much publicity. I fell in love with Rethink in 2010.” Its first product, Baxter, was released in 2012 and promised to bring safety, productivity, and a little whimsy to the factory floor. The robot stood around six feet tall, with two bright red arms flanking an animated screen complete with friendly facial expressions.

At the same time, Rethink’s robots could not perform as advertised in industrial environments, leading to a backlash and slow adoption. The problem stemmed from Brooks’ insistence on licensing the “Series Elastic Actuator (SEA)” technology from his former employer, MIT, instead of embracing the industry-leading Harmonic Drive actuators. Users demanded greater precision from their machines, and competitors such as UR, a Harmonic Drive customer, took the lead in delivering it.


Universal Robots’ cobots perform better than those of the late Rethink Robotics.

The backlash to Baxter is best illustrated by the comments of Steve Leach, president of Numatic Engineering, an automation integrator. In 2010, Leach hoped that Rethink could be “the iPhone of the industrial automation world.”

However, “Baxter wasn’t accurate or smooth,” said Leach, who was dismayed after seeing the final product. “After customers watched the demo, they lost interest because Baxter was not able to meet their needs.”

“We signed on early, a month before Baxter was released, and thought the software and mechanics would be refined. But they were not,” sighed Leach. In the six years after Baxter’s disappointing launch, Rethink did little to address the SEA problem. Most of the 1,000 Baxters sold by Rethink were delivered to academia, not commercial industry.

By contrast, Universal has sold more than 27,000 robots since its founding in 2005. Even Leach, who spent a year passionately trying to sell a single Baxter unit, switched to UR and sold his first one within a week. Leach elaborated, “From the ground up, UR’s firmware and hardware were specifically developed for industrial applications and met the expectations of those customers. That’s really where Rethink missed the mark.”

This garbage can robot seen at CES was designed to be cheap and avoid consumer backlash.

As machines permeate human streets, factories, offices, and homes, building a symbiotic relationship between intended operators and creators is even more critical. Too often, I meet entrepreneurs who demonstrate concepts with little input from potential buyers. This past January, the aisles of CES were littered with such items, but the one above was designed with a potential backlash in mind.

Simplehuman, the product development firm known for its elegantly designed housewares, unveiled a $200 aluminum robot trash can. This is part of a new line of Simplehuman’s own voice-activated products, potentially competing with Amazon Alexa. In the words of its founder, Frank Yang, “Sometimes, it’s just about pre-empting the users’ needs, and including features we think they would appreciate. If they don’t, we can always go back to the drawing board and tweak the product again.”

To understand the innovation ecosystem in the age of hackers, join the next RobotLab series on “Cybersecurity & Machines” with John Frankel of ffVC and Guy Franklin of SOSA on February 12th in New York City. Seating is limited, so RSVP today!


How Monteris Medical navigated a surgical robotics recall



Monteris Medical NeuroBlate robot-assisted brain surgery system.

Editor’s Note: This article was originally published by our sister website Medical Design & Outsourcing.

Marty Emerson became CEO of Monteris Medical in July 2016. Within a month, the first report came in of a problem: The probe tip of the Plymouth, Minn.-based company’s NeuroBlate robot-assisted brain surgery device unintentionally heated up during the MRI-assisted procedure.

That discovery would eventually turn into a recall designated as Class I by the U.S. Food and Drug Administration (FDA) – Emerson’s first in his roughly 30 years in medtech. Understanding and solving the problem would consume Emerson and dozens of Monteris employees over the next two years.

“Almost every emerging technology at some point or another in its maturation process has to go through one of those trials by fire, if you will, where you’re really getting into the core of your science and technology,” Emerson said.

Some regulatory experts said that although the company’s response to the problem wasn’t perfect, it appears to be out of the woods. In October 2018, Monteris won FDA clearance for a laser probe with fiberoptic-controlled cooling for NeuroBlate. The fiberoptic part replaced a metal thermocouple inside the laser probe, enabling Monteris to lift MR scan restrictions. All patient-contacting components are now non-metallic.

In late 2018, Monteris also announced that more than 2,000 patients had been treated with NeuroBlate since its release in 2013; the company also won reimbursement from Aetna and Anthem. Emerson is optimistic that the roughly $10-million-a-year company – which had seen annual revenue growth of 40% before 2018 – is set to grow again as it turns its focus to sales and marketing.

NeuroBlate uses a robot-guided laser to ablate brain tissue during MRI scans. Some brain surgeons find NeuroBlate a useful surgical option for certain epilepsy and brain cancer patients who don’t have many other alternatives, according to Emerson.

Monteris ticked off a lot of boxes for Emerson after he left the top spot at Galil Medical, the Arden Hills, Minn.–based interventional oncology cryoablation technology company he led until its 2016 acquisition by London-based BTG for up to $110 million.

Emerson joined Baxter in a finance role right out of college in 1985; a stint as a general manager for Boston Scientific in Singapore in the late 1990s was his first foray into management, the start of a career that eventually led to the corner office at Minnetonka, Minn.-based American Medical Systems. (AMS’s male urology portfolio is now part of Boston Sci, and its women’s health portfolio is now Astora Women’s Health.)

Although his sales background and communication skills were what initially landed him at AMS, then-CEO Doug Kohrs told us, Emerson’s level-headed and numbers-oriented approach soon became apparent. Kohrs said he considered those unusual traits for a salesperson and eventually promoted Emerson to COO and groomed him for the top job.

“Marty took a very pragmatic approach to solving problems,” Kohrs recalled. “He wasn’t a sky-is-falling kind of guy. He just saw what was going on, and then he got the resources that he needed, and he fixed it.”

Frank Jaskulke, VP of intelligence at Minnesota’s Medical Alley Assn., described Emerson as among the most respected leaders in the state because of his work growing AMS, Galil and now Monteris.

He would need all of his skills after learning of the first unintended probe heating incident in August 2016.

“It became the No. 1 priority,” Emerson said. “We viewed this as an incredibly important initiative that had, at its core, a need to be intensely focused on the science and technology that supports our company.”

Company officials quickly determined the problem involved a coated metal thermocouple that helped measure temperature inside the probe. As Emerson explained it, the connector from the back of the probe to the system had sometimes moved too close to the bore of the MRI magnet, picking up energy that was transmitted down the probe and heating the tip.

The problem only occurred inside particular MRI systems running specific scan types, leading the Monteris team to test more than 20 permutations and combinations from companies including Philips, Siemens and GE.

In December 2016, as the company’s investigation progressed, another probe tip-heating case surfaced; two more incidents occurred shortly before Monteris alerted the FDA in September 2017. In one, a patient died of a brain bleed a few days after the procedure, although it wasn’t conclusive that the probe tip heating was responsible, according to the FDA.

Emerson said that Monteris came to FDA with a thorough understanding of the problem, data from testing the 20 MR equipment permutations, updated instructions for use designed to mitigate the issue and a product development plan to permanently resolve the problem.

Communication and transparency among the Monteris team, with the FDA and with physicians were front-of-mind for Emerson during this process, he told us, recalling a number of late nights when executives and regulatory experts jointly edited responses to the FDA. An accountant by training, he also tried to stay mindful of what he didn’t know.

“I’m not an FDA expert,” he explained. “I relied heavily on the scientists and the technologists and the engineers and the experts on my team to get us through this process.”

Did Monteris do enough?

Although Monteris appears to have done many things right and to have succeeded in eliminating the problem, regulatory experts said there are lessons to be learned for companies facing similar problems. Former FDA analyst Madris Tomes, now CEO of medtech safety software company Device Events, said she was especially impressed that of the 342 adverse event reports she counted for the company since 2010, about half came from Monteris’ salespeople – a much better record than the industry as a whole.

“I’ve seen a lot of things handled much worse than this,” Tomes added.

Michael Drues, a Southern California-based regulatory consultant, questioned why more than a year elapsed between Monteris learning of the problem and alerting the FDA.

“Unfortunately, there is no regulation that requires this for a 510(k) yet – there is for PMAs – but a company does have an obligation, in my opinion, to let FDA know what is going on ASAP. This was a Class I recall, which has potential for serious injury and death.”

“There was never any suggestion from FDA that we didn’t move fast enough,” Emerson told us when asked about the time gap. “We were doing an immense amount of testing along the way.”

There were only two instances of probe tip overheating over the course of 12 months, he added. After Monteris issued updated instructions for use in early October 2017, the company received no reports of unintended heating in the year preceding FDA clearance of its new technology, Emerson said.

As of press time, representatives for the FDA had not responded to a request for comment on the Monteris recall.

Monteris emphasizes thorough and complete adverse event reporting, Emerson said, adding that he strives to remember that the company puts its tools in physicians’ hands to help patients.

“The vast majority of the patients … are really well served by the technology that we’ve provided to those physicians,” Emerson said. “I can’t let an unfortunate outcome stop us.”


Inside NVIDIA’s new robotics research lab


NVIDIA CEO Jensen Huang (left) and Senior Director of Robotics Research Dieter Fox at NVIDIA’s robotics lab.

The Robot Report named NVIDIA a must-watch robotics company in 2019 due to its new Jetson AGX Xavier Module that it hopes will become the go-to brain for next-generation robots. Now there’s even more reason to keep an eye on NVIDIA’s robotics moves: the Santa Clara, Calif.-based chipmaker just opened its first full-blown robotics research lab.

Located in Seattle just a short walk from the University of Washington, NVIDIA’s robotics lab is tasked with driving breakthrough research to enable next-generation collaborative robots that operate robustly and safely among people. NVIDIA’s robotics lab is led by Dieter Fox, senior director of robotics research at NVIDIA and professor in the UW Paul G. Allen School of Computer Science and Engineering.

“All of this is working toward enabling the next generation of smart manipulators that can also operate in open-ended environments where not everything is designed specifically for them,” said Fox. “By pulling together recent advances in perception, control, learning and simulation, we can help the research community solve some of the greatest challenges in robotics.”

The 13,000-square-foot lab will be home to 50 roboticists, consisting of 20 NVIDIA researchers plus visiting faculty and interns from around the world. NVIDIA wants robots to be able to naturally perform tasks alongside people in real-world, unstructured environments. To do that, the robots need to be able to understand what a person wants to do and figure out how to help achieve a goal.

The idea for NVIDIA’s robotics lab came in the summer of 2017 in Hawaii. Fox and NVIDIA CEO Jensen Huang met at CVPR, an annual computer vision conference, and discussed exciting areas and difficult open problems in robotics.

“NVIDIA dedicates itself to solving the very difficult challenges that computing can solve. And robotics is unquestionably one of the final frontiers of artificial intelligence. It requires the convergence of so many types of technologies,” Huang told The Robot Report. “We wanted to dedicate ourselves to make a contribution to the field of robotics. Along the way it’s going to spin off all kinds of great computer science and AI knowledge. We really hope the technology that will be created will allow industries from healthcare to manufacturing to transportation and logistics to make a great advance.”

NVIDIA said about a dozen projects are currently underway, and it will openly publish its research papers. Fox said NVIDIA is primarily interested, early on at least, in sharing its software developments with the robotics community. “Some of the core techniques you see in the kitchen demo will be wrapped up into really robust components,” Fox said.

We attended the official opening of NVIDIA’s robotics research lab. Here’s a peek inside.

Mobile manipulator in the kitchen


NVIDIA’s mobile manipulator includes a Franka Emika Panda cobot on a Segway RMP 210 UGV. (Credit: NVIDIA)

The main test area inside NVIDIA’s robotics lab is a kitchen the company purchased from IKEA. A mobile manipulator, consisting of a Franka Emika Panda cobot arm on a Segway RMP 210 UGV, will try its hand at increasingly difficult tasks, ranging from retrieving objects from cabinets to learning how to clean the dining table to helping a person cook a meal.

During the open house, the mobile manipulator consistently fetched objects and put them in a drawer, opening and closing the drawer with its gripper. Fox admitted this first task is somewhat easy. The robot uses deep learning to detect specific objects, trained solely in simulation with no manual data labeling required. The robot uses the NVIDIA Jetson platform for navigation and performs real-time inference for processing and manipulation on NVIDIA TITAN GPUs. The deep learning-based perception system was trained using the cuDNN-accelerated PyTorch deep learning framework.
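The “no manual labeling” claim follows directly from how simulation works: the renderer knows each object’s ground-truth pose, so 2D keypoint labels can be computed by projection rather than annotated by hand. Here is a minimal Python sketch of that idea; the cuboid size, pose, and camera intrinsics are illustrative assumptions, not details of NVIDIA’s pipeline.

```python
# Sketch: in a synthetic scene the object pose (R, t) is known exactly,
# so 2D keypoint labels for training come "for free" via projection.
import numpy as np

def project(points_obj, R, t, K):
    """Project 3D object-frame points into pixel coordinates."""
    p_cam = points_obj @ R.T + t          # object frame -> camera frame
    p_img = p_cam @ K.T                   # apply pinhole intrinsics
    return p_img[:, :2] / p_img[:, 2:3]   # perspective divide

# 8 corners of a hypothetical 10 cm cuboid centered on the object origin.
corners = np.array([[sx, sy, sz] for sx in (-0.05, 0.05)
                                 for sy in (-0.05, 0.05)
                                 for sz in (-0.05, 0.05)])

# Assumed camera intrinsics and an example ground-truth pose.
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # in practice, randomized per rendered frame
t = np.array([0.0, 0.0, 0.6])  # object 60 cm in front of the camera

labels = project(corners, R, t, K)  # 2D keypoint labels, no human needed
print(labels)
```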

Fox also made it clear why NVIDIA chose to test a mobile manipulator in a kitchen. “The idea to choose the kitchen was not because we think the kitchen is going to be the killer app in the home,” said Fox. “It was really just a stand-in for these other domains.” A kitchen is a structured environment, but Fox said it is easy to introduce new variables to the robot in the form of more complex tasks, such as dealing with unknown objects or assisting a person who is cooking a meal.

Deep Object Pose Estimation


NVIDIA Deep Object Pose Estimation (DOPE) system. (Credit: NVIDIA)

NVIDIA introduced its Deep Object Pose Estimation (DOPE) system in October 2018, and it was on display in Seattle. With NVIDIA’s algorithm and a single image, a robot can infer the 3D pose of an object for the purpose of grasping and manipulation. DOPE was trained solely on synthetic data.

One of the key challenges of synthetic data is bridging the reality gap so that networks trained on synthetic data operate correctly on real-world data. NVIDIA said its one-shot deep neural network, albeit on a limited basis, has accomplished that. The system estimates poses in two steps. First, the deep neural network estimates belief maps of 2D keypoints of all the objects in the image coordinate system. Next, peaks from these belief maps are fed to a standard perspective-n-point (PnP) algorithm to estimate the 6-DoF pose of each object instance.
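For a concrete feel for that second step, below is a minimal Python sketch that hands keypoint peaks to OpenCV’s standard PnP solver; it is the inverse of the label-generation sketch above. The keypoint pixel coordinates, cuboid dimensions, and camera intrinsics are placeholder assumptions, not values from NVIDIA’s system.

```python
# Sketch of DOPE's second stage: 2D keypoint peaks plus known 3D cuboid
# geometry -> 6-DoF object pose via perspective-n-point (PnP).
import numpy as np
import cv2

# 3D corners of the object's bounding cuboid in the object frame (meters).
object_points = np.array([
    [-0.05, -0.05, -0.05], [ 0.05, -0.05, -0.05],
    [ 0.05,  0.05, -0.05], [-0.05,  0.05, -0.05],
    [-0.05, -0.05,  0.05], [ 0.05, -0.05,  0.05],
    [ 0.05,  0.05,  0.05], [-0.05,  0.05,  0.05],
], dtype=np.float64)

# Peaks extracted from the network's belief maps (pixels) -- placeholders.
image_points = np.array([
    [300.0, 220.0], [360.0, 218.0], [362.0, 280.0], [302.0, 282.0],
    [310.0, 212.0], [372.0, 210.0], [374.0, 274.0], [312.0, 276.0],
], dtype=np.float64)

# Assumed pinhole camera intrinsics (no lens distortion).
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    print("rotation (Rodrigues vector):", rvec.ravel())
    print("translation (meters):", tvec.ravel())
```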

Read our interview about the DOPE system with Stan Birchfield, a Principal Research Scientist at NVIDIA, here.

Tactile sensing

NVIDIA had two demos showcasing tactile sensing, a capability still largely missing from commercial robotic grippers. One demo featured a ReFlex TakkTile 2 gripper from RightHand Robotics, which recently raised $23 million for its piece-picking technology. The ReFlex TakkTile 2 is a ROS-compatible robotic gripper with three fingers. The gripper has three bending DOFs and one coupled rotational DOF. Sensing capabilities include normal pressure sensors, rotational proximal joint encoders, and fingertip IMUs.

The other demo, run by NVIDIA senior robotics researcher Karl Van Wyk, featured SynTouch tactile sensors retrofitted onto an Allegro robotic hand from South Korea-based Wonik Robotics and a KUKA LBR iiwa cobot. “It almost feels like a pet!” said Huang as he gently touched the robotic fingers, causing them to pull back. “It’s surprisingly therapeutic. Can I have one?”

Van Wyk said tactile sensors are starting to trickle out of research labs and into the real world. “There is a lot of hardening and integration that needs to happen to get them to hold up in the real world, but we’re making a lot of progress there. The world we live in is designed for us, not robots.”

The KUKA LBR iiwa wasn’t using any vision to sense its environment. “The robot can’t see that we’re around it, but we want it to be constantly sensing and reacting to its environment. The arm has torque sensing in all of the joints, so it can feel that I’m pushing on it and react to that. It doesn’t need to see me to react to me.

“We have a 16-motor hand with three primary fingers and an opposable thumb, so it’s like our hands. The reason you want a more complicated gripper like this is that you eventually want to be able to manipulate objects in your hand like we do on a daily basis. It is very useful and makes solving physical tasks more efficient. The SynTouch sensors measure what’s going on when we’re touching and manipulating something. Keying off those sensors is important for control. If we can feel the object, we can re-adjust the grip and the finger location.”

Human-robot interaction


Huang tests a control system that enables a robot to mimic human movements. (Credit: NVIDIA)

Another interesting demo was NVIDIA’s “Proprioception Robot,” which is the work of Dr. Madeline Gannon, a multidisciplinary designer nicknamed the “Robot Whisperer” who is inventing better ways to communicate with robots. Using a two-armed ABB YuMi and a Microsoft Kinect on the floor underneath the robot, the system would mimic the movements of the human in front of it.

“With YuMi, you don’t need a roboticist to program a robot,” said Gannon. “Using NVIDIA’s motion generation algorithms, we can have engaging experiences with lifelike robots.”

You might have heard of Gannon’s recent work at the World Economic Forum in September 2018. She installed 10 industrial robot arms in a row, linking them together through a central controller. Using depth sensors at their bases, the robots tracked and responded to the movements of people passing by.

“There are so many interesting things that we could spin off in our pursuit of a general AI robot,” said Huang. “For example, it’s very likely that in the near future you’ll have ‘exo-vehicles’ around you, whether it’s an exoskeleton or an exo-something that helps people who are disabled, or helps us be stronger than we are.”


Foldable drone could aid search and rescue missions



This foldable drone can squeeze through gaps and then go back to its previous shape, all the while continuing to fly. (Credit: UZH)

Inspecting a damaged building after an earthquake or during a fire is exactly the kind of job that human rescuers would like drones to do for them. A flying robot could look for people trapped inside and guide the rescue team towards them. But the drone would often have to enter the building through a crack in a wall, a partially open window, or through bars – something the typical size of a drone does not allow.

To solve this problem, researchers from the Robotics and Perception Group at the University of Zurich and the Laboratory of Intelligent Systems at EPFL created a new kind of drone. Both groups are part of the National Centre of Competence in Research (NCCR) Robotics funded by the Swiss National Science Foundation. The researchers wrote a paper about the project called “The Foldable Drone: A Morphing Quadrotor that can Squeeze and Fly.”

Inspired by birds that fold their wings in mid-air to cross narrow passages, the new drone can squeeze itself to pass through gaps and then go back to its previous shape, all the while continuing to fly. And it can even hold and transport objects along the way.

Mobile arms can fold around the main frame

“Our solution is quite simple from a mechanical point of view, but it is very versatile and very autonomous, with onboard perception and control systems,” explains Davide Falanga, researcher at the University of Zurich and the paper’s first author. Compared to other drones, this morphing drone can maneuver in tight spaces and guarantee stable flight at all times.

The Zurich and Lausanne teams worked in collaboration and designed a quadrotor with four propellers that rotate independently, mounted on mobile arms that can fold around the main frame thanks to servo-motors. The ace in the hole is a control system that adapts in real time to any new position of the arms, adjusting the thrust of the propellers as the center of gravity shifts.
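In other words, the mapping from the four propeller thrusts to total thrust and body torques depends on where the arms currently are, so the control allocation must be rebuilt as the geometry changes. Here is a minimal Python sketch of that idea (not the authors’ implementation); the arm length, spin directions, and drag-to-thrust coefficient are illustrative assumptions.

```python
# Sketch: recompute the control-allocation ("mixer") matrix from the
# current arm geometry, then solve for per-propeller thrusts.
import numpy as np

KAPPA = 0.016  # assumed propeller drag-to-thrust coefficient (meters)

def allocation_matrix(arm_xy, spin_dirs):
    """Rows map thrusts to [total thrust, roll, pitch, yaw torque].
    arm_xy: (4, 2) propeller x,y offsets from the geometric center.
    spin_dirs: +1/-1 rotation direction of each propeller."""
    A = np.zeros((4, 4))
    for i, ((x, y), s) in enumerate(zip(arm_xy, spin_dirs)):
        A[0, i] = 1.0        # every propeller adds to total thrust
        A[1, i] = y          # roll torque  =  thrust * y offset
        A[2, i] = -x         # pitch torque = -thrust * x offset
        A[3, i] = s * KAPPA  # yaw torque from propeller drag
    return A

def propeller_thrusts(wrench, arm_xy, spin_dirs):
    """Least-squares solve keeps near-singular folded shapes solvable."""
    A = allocation_matrix(arm_xy, spin_dirs)
    f, *_ = np.linalg.lstsq(A, wrench, rcond=None)
    return f

# Example: hover wrench (1 kg craft) in the standard X configuration.
L = 0.12  # assumed arm length (meters)
x_config = np.array([[ L,  L], [-L,  L], [-L, -L], [ L, -L]]) / np.sqrt(2)
hover = np.array([9.81, 0.0, 0.0, 0.0])  # [N, N*m, N*m, N*m]
print(propeller_thrusts(hover, x_config, [+1, -1, +1, -1]))
```

As the arms fold, the same two functions yield a different matrix and thus different thrust commands, which is the essence of adapting to the changing geometry.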

“The morphing drone can adopt different configurations according to what is needed in the field,” adds Stefano Mintchev, co-author and researcher at EPFL. The standard configuration is X-shaped, with the four arms stretched out and the propellers at the widest possible distance from each other. When faced with a narrow passage, the drone can switch to an “H” shape, with all arms lined up along one axis, or an “O” shape, with all arms folded as close as possible to the body. A “T” shape can be used to bring the onboard camera, mounted on the central frame, as close as possible to objects that the drone needs to inspect.

To guarantee stable flight at all times, the researchers exploit an optimal control strategy that adapts on the fly to the drone morphology. “We demonstrate the versatility of the proposed adaptive morphology in different tasks, such as negotiation of narrow gaps, close inspection of vertical surfaces, and object grasping and transportation.

“The experiments are performed on an actual, fully autonomous quadrotor relying solely on onboard visual-inertial sensors and compute. No external motion tracking systems and computers are used. This is the first work showing stable flight without requiring any symmetry of the morphology.”

Foldable drone first step to fully autonomous rescue searches

In the future, the researchers hope to further improve the drone structure so that it can fold in all three dimensions. Most importantly, they want to develop algorithms that will make the drone truly autonomous, allowing it to look for passages in a real disaster scenario and automatically choose the best way to pass through them.

“The final goal is to give the drone a high-level instruction such as ‘enter that building, inspect every room and come back’ and let it figure out by itself how to do it,” says Falanga.


A close-up picture of the foldable drone. (1) Qualcomm Snapdragon Flight onboard computer, provided with a quad-core ARM processor, 2 GB of RAM, an IMU and two cameras. (2) Qualcomm Snapdragon Flight ESCs. (3) Arduino Nano microcontroller. (4) The servo motors used to fold the arms. (Credit: UZH)

Editor’s Note: This article was republished from the University of Zurich.


Otto Omega self-driving forklift debuts at IMTS

Otto Motors, a division of Clearpath Robotics, introduced its Otto Omega self-driving forklift at IMTS. Otto Omega is designed to help those in materials handling reduce costs, increase throughput and improve safety in the warehouse. Otto Omega, according to Otto Motors co-founder and CEO Matt Rendall, can autonomously pick up and drop off skids, receive…


Bat-inspired Robat uses echolocation to map, navigate environment

The “Robat” is a fully autonomous, four-wheeled terrestrial robot with bat-like qualities that uses echolocation, also called biosonar, to move through novel environments while mapping them based only on sound. It was developed at Tel Aviv University (TAU). Bats use echolocation to map novel environments, navigating them by emitting sound then extracting information from…


Algorithm evaluates military robots’ ability to get up after falling

Scientists at the US Army Research Laboratory (ARL) and the Johns Hopkins University Applied Physics Laboratory (APL) have developed software to ensure that if a robot falls, it can get itself back up. This means future military robots will be less reliant on their soldier handlers. Based on feedback from soldiers at an Army training…


Stanford AI Camera Offers Faster, More Efficient Image Classification

The image recognition technology that underlies today’s autonomous cars and aerial drones depends on artificial intelligence: the computers essentially teach themselves to recognize objects like a dog, a pedestrian crossing the street or a stopped car. The problem is that the computers running the artificial intelligence algorithms are currently too large and slow for future…


Diffractive Deep Neural Network Identifies Objects at Speed of Light

Engineers have 3D printed a physical artificial neural network that can analyze large volumes of data and identify objects at the actual speed of light. Developed at the UCLA Samueli School of Engineering, the “diffractive deep neural network” uses the light bouncing from the object itself to identify that object in as little time as…
