When it comes to autonomous vehicles, the fundamental question the auto industry is asking and government regulators and safety advocates are now mulling is this: “How safe is safe enough?”

But a strategy battle is under way among automakers, who are taking a wide range of approaches to developing and introducing these new technologies.

Tesla is rolling out autonomous features to its cars incrementally, as the company decides they’re road-ready. To do otherwise, Tesla CEO Elon Musk says, would result in more overall driving fatalities.

“It would be morally wrong to withhold functionalities that improve safety simply in order to avoid criticisms or for fear of being involved in lawsuits,” Musk says.

Then there’s Waymo, formerly the Google self-driving car project, which is operating on a more conservative premise: Self-driving cars shouldn’t be sold until human intervention isn’t required at all.



Other companies actively developing self-driving vehicles or technology fall somewhere in the middle of those two approaches.

They’re all trying to answer the basic questions that need to be worked out for the technology to develop: Is it okay to use human drivers as test subjects? How do humans react when a self-driving system fails? And would delaying an imperfect system that is, on average, better than a human driver but not fully tested result in more or fewer deaths on roads?

There’s no way we as a society would accept self-driving cars that cause the same number of fatalities as humans, says Gill Pratt, CEO of the Toyota Research Institute, referring to the roughly 35,000 traffic deaths in the U.S. in 2015. “Society tolerates a lot of human error,” Pratt said earlier this year. “But we expect machines to be much better than us.”

The fatal crash last May involving a Tesla Model S owner using “Autopilot” mode was a wake-up call for the industry and regulators. The National Highway Traffic Safety Administration’s (NHTSA) investigation into the incident found no safety defect.

It was the first death involving a car with Tesla’s Autopilot engaged. Tesla contends that neither the driver nor the car distinguished the white tractor-trailer crossing the road ahead against a bright sky in time to respond.

Tesla’s Autopilot and similar systems—including those by Mercedes-Benz, BMW, and Volvo—are considered early steps toward automation. But even these modest steps can be safety risks if drivers aren’t aware of their car’s capabilities and limitations.

Early levels of automation rely on drivers to take over control in an emergency. Some automakers, like Tesla, Nissan, and Audi, consider human drivers their backup safety system. Their logic: If humans can be relied upon to take over, then the technology can be rolled out incrementally and doesn’t have to be perfect right away.

Street Smarts: Waymo’s fleet of driverless cars has been tested on more than 2 million miles of roads.

Clarity for Consumers

Consumer Reports believes automakers are sending a mixed message by rolling out these systems in a way that leads drivers to falsely believe they can take their hands off the wheel, despite warnings telling them to do the opposite. We don’t think humans should be used as test subjects. Automakers need to do a better job of communicating system capabilities and limitations.

As Jake Fisher, director of auto testing at CR, explains, “It’s very easy to get distracted when you are no longer directly responsible for driving the car. And when the car controls both speed and steering, it’s unreasonable to assume the driver will be alert enough to take over again at a moment’s notice.”

Consumer Reports supports any new technology that advances the needs and interests of consumers, but at CR, we’re always going to make safety our priority.

Waymo is focused on creating self-driving cars without pedals, steering wheels, or human drivers as backup. Its cars are expected to have enough safety technology that a sensor failure won’t bring them to a grinding stop. The downside: This approach will take longer to perfect and bring to market.

NHTSA’s approach to regulating this area of innovation has been to stay out of the way as much as possible. Automakers say they welcome some basic federal ground rules so states don’t come up with their own laws.

NHTSA unveiled a voluntary set of 15 safety measures in September that it would like any company bringing a self-driving car to market to address, such as how and in what circumstances the vehicle drives itself, how it was tested, and how it was engineered to be safe.

The Future of Safety

So far, the industry reaction to the guidelines has been positive. And during her confirmation hearing, Transportation Secretary Elaine Chao hinted at the new administration’s approach, saying she sees the government’s role as “a catalyst for safe, efficient technologies, not as an impediment.”

Finally, there are differences of opinion about how much testing is needed. Philip Koopman, associate professor of electrical and computer engineering at Carnegie Mellon University, says there’s so much uncertainty around the technology that you might need close to a billion miles of test-driving data to ensure safety on roads populated with both human and machine-driven cars. Koopman also says he worries the industry is seriously underestimating how hard it will be to build innate safety features into artificially intelligent cars. “There’s a possibility at least some companies are just going to put the technology out there and roll the dice,” Koopman says. “My fear is this will really happen, and it will be bad technology.” 



Editor's Note: This article also appeared in the April 2017 issue of Consumer Reports magazine.