
The death of a pedestrian in March 2018 after she was struck by a self-driving Uber test vehicle was the first pedestrian fatality involving a self-driving car.

After a 19-month probe, investigators from the National Transportation Safety Board concluded that the ride-share company, developing its own self-driving technology, left significant gaps in its approach to safety.

“The lessons of this crash do not just apply to Uber,” said Robert Sumwalt, chairman of the NTSB, at a hearing the safety board convened Tuesday to present its findings. “Something went wrong, and something else might go wrong again, unless it’s prevented.” 

The investigation into the death of 49-year-old Elaine Herzberg, who was struck in Tempe, Ariz., offers key lessons about how companies should proceed in developing and testing self-driving technology. Here are some of the big takeaways from the Tuesday hearing.

Lack of Oversight Contributed to the Crash

At the hearing, the NTSB stressed that a lack of oversight by federal regulators was a key problem.

The board said that the National Highway Traffic Safety Administration should require companies engaged in self-driving testing on public roads to publish reports outlining their approach to safety. 


Those reports are currently voluntary, and only 16 companies have published the self-assessments. Many of the voluntary assessments read more like marketing materials than scientific reports.

“NHTSA’s mission first and foremost is to save lives,” said NTSB board member Jennifer Homendy. “In my opinion, they have put technology advancement before saving lives.” 

In a statement, NHTSA said it welcomed the NTSB’s recommendations and will review them as it continues to investigate the crash. 

“While the technology is rapidly developing, it’s important for the public to note that all vehicles on the road today require a fully attentive operator at all times,” NHTSA’s statement said.

States also need to step up their responsibility to protect public safety, the NTSB said. Uber and many other outfits test vehicles in Arizona because the state has encouraged it. But the state’s lack of oversight at the time of the crash and its inaction since the crash “demonstrate the state’s shortcomings” in making testing safe, the NTSB said. Arizona and all states should set up boards to review required testing applications and demand changes if needed to protect the public, the NTSB said.

Developers of self-driving cars should have to prove their test vehicles’ safety before using them on public roads, says William Wallace, manager of safety policy at Consumer Reports. Those evaluations should be based on rigorous evidence, available to the public, and validated by independent third parties, he says.

“It’s the Wild West right now, and it puts the public at risk,” Wallace says. “If companies don’t put safety first, they’ll be risking people’s lives—not to mention their own viability—and they should be held accountable under the law for the consequences.”


Relying on Human Drivers to Back Up Automated Cars Is Problematic

The NTSB found that the backup driver in the Uber vehicle was distracted by a video streaming on her phone. She had been looking down at the phone instead of at the road for 5 seconds, and looked up less than a second before impact, the investigation found. The NTSB concluded that the driver’s distraction was the main cause of the crash.

But the NTSB also said that Uber’s driver training was lacking, and that the company neither enforced its own rules nor used technology to monitor its drivers to make sure they were paying attention to the road. Uber also failed to monitor compliance with its anti-cellphone policy or punish drivers who broke it, the NTSB said.

NTSB board members said in discussions Tuesday that nearly every industry has some level of automation and that it often presents problems. They stressed repeatedly that humans tend to tune out when tasked with monitoring automated systems that work well most of the time, a phenomenon known as automation complacency.

The NTSB is best known for investigating aviation mishaps, but it also looks into significant highway, rail, and shipping incidents. Its investigators assign an official “probable cause” for each incident, and the board makes recommendations to federal agencies, local governments, and the industry. It doesn’t have the power to mandate changes, but its recommendations are influential.

“This is a pivotal moment for road safety, and it should be a wake-up call for companies testing and developing self-driving cars,” says Jake Fisher, senior director of auto testing at Consumer Reports. “It’s critical for companies to install effective systems to verify driver engagement. Without effective driver monitoring, any human in the car is being set up to fail.”


Uber’s Software Wasn’t Ready for Prime Time

The NTSB pointed out Tuesday that Uber’s software was not programmed to recognize a pedestrian anywhere but in a designated crosswalk. The software failure wasn’t a rare “edge case,” as programmers refer to them, but a failure to allow for the common occurrence of a jaywalking pedestrian. 

The vehicle’s sensors registered Herzberg’s presence nearly 6 seconds before impact—plenty of time to react. But the Uber car’s computer couldn’t figure out what it was seeing. It cycled repeatedly among “vehicle,” “bicycle,” and “unknown,” the NTSB said. Each time the classification changed, the system started its calculations over, wrongly assuming it was sensing a new, stationary object. So it never got the message that there was a pedestrian moving into its path, the NTSB said Tuesday.
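The failure mode the NTSB describes can be shown with a deliberately simplified sketch. The code below is hypothetical and is not Uber’s software; the Track class, the labels, the 0.1-second update interval, and the coordinates are all assumptions made for illustration. It shows why a tracker that throws away an object’s history every time the classifier changes its label can never estimate the object’s speed, and so never predicts that it will enter the vehicle’s path.

```python
# Hypothetical illustration -- not Uber's actual software -- of how discarding
# an object's tracking history on every reclassification hides its motion.

from dataclasses import dataclass, field


@dataclass
class Track:
    label: str                                        # current classification, e.g. "bicycle"
    positions: list = field(default_factory=list)     # observed (x, y) points, in meters

    def update(self, label, position, keep_history=True):
        # If the classifier changes its mind and history is thrown away,
        # the object is effectively treated as brand new (and stationary).
        if label != self.label and not keep_history:
            self.positions = []
        self.label = label
        self.positions.append(position)

    def estimated_speed(self, dt=0.1):
        # A speed estimate needs at least two points from the same track.
        if len(self.positions) < 2:
            return 0.0
        (x0, y0), (x1, y1) = self.positions[-2], self.positions[-1]
        return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt


# A pedestrian walking across the road, observed every 0.1 s,
# while the classifier flip-flops between labels.
observations = [
    ("unknown", (0.0, 3.0)),
    ("vehicle", (0.0, 2.8)),
    ("bicycle", (0.0, 2.6)),
    ("unknown", (0.0, 2.4)),
]

resetting = Track(label="unknown")
persistent = Track(label="unknown")
for label, pos in observations:
    resetting.update(label, pos, keep_history=False)
    persistent.update(label, pos, keep_history=True)

print(resetting.estimated_speed())   # 0.0 -- looks stationary, nothing to predict
print(persistent.estimated_speed())  # ~2.0 m/s -- moving toward the travel lane
```

With the history preserved, the same observations yield a speed estimate of about 2 meters per second toward the travel lane; with the reset, the object always appears stationary, which matches the behavior the NTSB described.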

The modified Volvo XC90’s forward collision warning and automatic emergency braking systems were also turned off. The NTSB said that if those off-the-shelf safety systems had been enabled, the vehicle would have avoided the crash or at least slowed enough to reduce the chance of a fatality.

At the very least, software should be programmed to assume that nearby objects that look like people or bicycles could be moving, said David Zuby, executive vice president and chief research officer at the Insurance Institute for Highway Safety. 
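Zuby’s point describes a conservative default rather than a specific algorithm. Purely as a hypothetical sketch, and not any company’s actual system, a planner could assign person- and bicycle-like objects, and anything unclassified, a worst-case speed and slow down whenever that worst case could put the object in the car’s path. The function name, speed values, and distances below are made up for the example.

```python
# Illustrative only: one way a planner could apply the "assume it might move"
# principle. Speeds and distances are placeholder values, not real tuning.

WORST_CASE_SPEED = {"pedestrian": 2.0, "bicycle": 6.0, "unknown": 2.0}  # m/s


def should_brake(label, lateral_distance_m, time_to_reach_object_s):
    """Brake if the object could plausibly cross into our path before we pass it."""
    speed = WORST_CASE_SPEED.get(label, 2.0)   # unclassified objects get a cautious default
    # How far the object could move sideways before the car arrives.
    possible_lateral_travel = speed * time_to_reach_object_s
    return possible_lateral_travel >= lateral_distance_m


# An object 2.5 m to the side that the car will reach in 3 s: a pedestrian
# could easily cover that distance, so a cautious planner slows down.
print(should_brake("pedestrian", lateral_distance_m=2.5, time_to_reach_object_s=3.0))  # True
```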

“I hope the industry has learned they need better, more robust software before we test on public roads,” Zuby said. “This software wasn’t ready.” 

The Industry Has Already Absorbed Some Lessons and Changed

Since the crash, Uber has revised its safety policies, overhauled driver training, and rolled out a monitoring program. Those moves drew praise from the NTSB, which said companies that cooperate in investigations can make improvements to increase safety. 

Sam Abuelsamid, an analyst who follows automated vehicle development for Navigant Research, said some of the better-known tech companies, such as Waymo, Argo AI, Cruise, and Aptiv, are cautious in their testing. Aurora, for instance, a self-driving-car developer founded by executives from Google and other tech companies, won’t test automated technology on public roads, he said. Its drivers operate the cars manually and collect data for use in the company’s simulators, he said.

The Uber crash prompted nearly every company testing self-driving technology to reevaluate its procedures and training and make adjustments, Abuelsamid said. The crash also pushed back target dates for deploying self-driving cars, he said.

“There is also the recognition from almost everyone in the industry that this is a far more difficult problem than they thought just two years ago,” he said. “Reliable perception and prediction are extremely challenging, and so far no one has demonstrated that any of these systems are anywhere near as safe as human drivers overall.”