Elon Musk, Self-Driving, and the Dangers of Wishful Thinking: How Tesla's Marketing Hype Got Ahead of Its Technology

CR's investigation reveals how breathless CEO tweets, hyperbolic company blog posts, and confusing product names overshadowed a promising technology and led to a culture of dangerous driving, while regulators watched

Photo: Toru Hanai/Bloomberg/Getty Images

On Oct. 19, 2016, electric car manufacturer Tesla made a promise to its customers. It was an audacious pledge, but Tesla had a track record of turning audacity into reality. The Model S sedan, with its long range and minimalist interior, proved that electric vehicles could be just as capable, luxurious, and fun to drive as gas-powered vehicles. And Autopilot, which automates some steering, braking, and acceleration functions to take some of the stress out of highway driving, wowed car buyers and showed that the automaker’s commitment to emerging technology went beyond electrification.

So it didn’t seem implausible when the company announced on its official blog that every Tesla built from that day forward would eventually become fully self-driving. The post was titled “All Tesla Cars Being Produced Now Have Full Self-Driving Hardware.” Every new Model 3, Model S, and Model X would have new cameras and a new computer designed to take Teslas beyond Autopilot with a feature the company called Full Self-Driving Capability, or FSD.

It was a bold bet, but one that depended on a lot of other things working out. For starters, the company’s self-driving software wasn’t ready yet. The company said it would send out fresh software updates over time to improve FSD and Autopilot, which rely on similar hardware to operate. Drivers who paid up to $10,000 for Tesla’s FSD would “train” the software by using it on public roads. Eventually, the cars would learn from those experiences and evolve into fully self-driving vehicles. The automaker even posted a video showing a Tesla seemingly driving itself on city streets—a disclaimer on screen said the person in the driver’s seat was only there “for legal reasons.” To this day, Tesla has not delivered a fully self-driving car. But CEO Elon Musk has frequently reiterated that the technology is on the way, tweeting in 2016 that the company would demonstrate a cross-country automated drive by “next year.” In 2019, he tweeted that “everyone with Tesla Full Self-Driving” would be able to make such a journey by 2020. “Buying a car in 2019 that can’t upgrade to full self-driving is like buying a horse instead of a car in 1919,” he wrote. None of these predictions has come to pass.

Consumer Reports' Tesla Model Y at our Auto Test Center.

Photo: John Powers/Consumer Reports


In many other industries, a failure to deliver like Tesla’s could be dismissed as marketing hype. But safety advocates, scientists, and elected officials say the automaker’s overinflated claims about Autopilot and FSD have created a culture of recklessness, endangering Tesla owners and other drivers. And the automaker’s expansion of its FSD beta program has turned even more owners into untrained test drivers on public roads.

“Tesla’s approach to developing self-driving cars is to use their customers as test engineers,” says Jake Fisher, senior director of Consumer Reports’ Auto Test Center. “But they don’t get training or any compensation for their work. They even have to pay to access the experimental software.”

FSD, which uses some of the same hardware and software as Autopilot, currently exists as an evolving collection of features that can assist the driver with parking, changing lanes on the highway, making turns, and coming to a complete halt at traffic lights and stop signs. Some owners have access to unfinished FSD beta software, which can control even more vehicle functions on public roads, even though Musk said that some versions of the software were “not great.” One version of FSD beta has already been subject to a product recall.

As public alarm grows around Autopilot-related crashes and FSD software updates, and as other automakers begin to promote similar systems, the government’s main automotive safety regulator, the National Highway Traffic Safety Administration, has started paying closer attention to Tesla, launching multiple overlapping investigations. In August, the agency started a defect investigation after at least 12 Autopilot-equipped cars crashed into stopped emergency vehicles, which could eventually lead to a recall, a fine, or both. Separately, the agency’s Special Crash Investigations team continues to look into at least 10 fatalities and 33 crashes since 2016 where Autopilot may have been involved. Earlier this year, NHTSA demanded that all manufacturers submit data on any crashes involving driving automation systems, a move that significantly affects Tesla. And in a letter sent to the automaker last month, NHTSA questioned why Tesla performed safety-critical software updates to Autopilot without issuing a formal recall.

A Tesla Model X SUV.

Photo: Tesla

“One thing that is so tricky about Tesla is that they can point to places on their website or information that they will give to the end user, the consumer, that say, ‘Hey you have to keep your hands on the wheel, this is not a fully autonomous vehicle,’ but then the way they market it suggests that it is,” says Matthew Wansley, an assistant professor of law at the Cardozo School of Law at Yeshiva University in New York City, and the former general counsel of autonomous vehicle startup nuTonomy.

In the meantime, Tesla has frustrated regulators with its secrecy and attempts at message control. In a letter to Tesla sent Oct. 12 this year, NHTSA asked the automaker why it asked beta users of FSD to sign nondisclosure agreements that would prevent them from sharing negative information about their experiences using the software. And earlier this year, the California Department of Motor Vehicles put Tesla under review because of public statements the company made that might violate the state’s regulations on advertising autonomous vehicles.

Amid the crashes and criticism, Tesla continues to promote Autopilot and FSD. The company continues to grant more owners access to beta versions of FSD, which is now also available as a subscription that could cost up to $199 per month, depending on how a car is configured. And when things go wrong, Tesla and Musk have resisted accountability. Musk, in media interviews, also has said that autonomy-related deaths are an acceptable risk for the greater good.

How did Tesla’s pursuit of self-driving technology over the past decade end up with untrained drivers testing beta software on public roads? Why is Tesla continuing to move full-speed ahead with FSD and make public promises about self-driving in the face of mounting scrutiny? And why have regulators, until only very recently, seemed reluctant, or even unable, to act? 

To find out, we took a deep look into the evolution of Tesla’s driver assistance technology.

For this report, we reviewed hundreds of pages of court records, police reports, and regulatory documents. We talked to more than a dozen automotive industry analysts, software engineers, former auto safety regulators, and legal experts, as well as some of Tesla’s competitors.

We emailed Tesla direct questions, but the company hasn’t responded to any requests from our reporters since May 2019, and it no longer has a media relations department. NHTSA agreed to answer our questions for this article only through email.

Here’s what we’ve learned.

• Tesla is blurring the line between self-driving and driver assistance. For example, Tesla sells a product called Full Self-Driving that does not make a vehicle self-driving, and consumers are getting caught in the confusion. CR and others have found that Autopilot performs well at specific tasks, but it does a poor job of ensuring that drivers don’t abuse it.

• Federal regulators are ill-equipped to deal with a disruptive automaker that operates at the speed of Silicon Valley. Agencies, including NHTSA, don’t have much power to proactively address new technologies before they hit the market, so they’re left to investigate them after they’re released on public roads. For its part, although investigations are underway, NHTSA has not yet used its authority to require that Tesla address Autopilot or FSD risks.

• There is no proof that Autopilot and FSD improve road safety. Musk frequently touts the safety benefits of Tesla’s technology, but there are currently no definitive data supporting his claims.

• At some point, regulators may force Tesla to make changes. Tesla may have to issue a recall that could change how Autopilot or FSD operates, and/or face restrictions on how it markets the software.

As Tesla continues to roll out updated FSD software to more drivers, a showdown is brewing between the automaker and the agencies charged with regulating it. What consequences Tesla faces—if any—will affect safety for everyone on the road, even if they aren’t driving a Tesla.

When Is ‘Full Self-Driving’ Not Self-Driving?

There are no self-driving cars available for individuals to purchase today, at least not according to a commonly agreed-upon definition used by the auto industry and regulators. SAE International—an important engineering standards group formerly known as the Society of Automotive Engineers—says that full automation does not require a human to take over driving. Some self-driving cars might not be able to operate on all roads, and some might have limited abilities. But if a human ever has to take over from the car, the vehicle cannot be considered self-driving.

Many modern cars have some automated features. For example, lane centering assist can automate some steering functions to keep a car within its lane, and adaptive cruise control (ACC) can keep a car a set distance from traffic in front. Tesla combined these features to create Autopilot, but together they do not make a Tesla a self-driving car—a fact that you might not realize if you go to the Autopilot section of Tesla’s website. There, you can watch the video of a person sitting in the driver’s seat but not touching the wheel as the car navigates city streets and highways. The introductory text says, “The car is driving itself.” It also says, “The person in the driver’s seat is only there for legal reasons.” That video was first published in 2016 and has been cited in multiple lawsuits against the company, including a class-action suit that alleged Tesla misled consumers about Autopilot’s availability and capabilities. The suit was settled for more than $5.4 million in 2018.

In April 2019, the company hosted an almost 3-hour-long investor presentation focused on future self-driving technology. Tesla pitched the event, at its then-headquarters in Palo Alto, Calif., as Autonomy Day, and most of the senior executives on stage talked about the nitty-gritty of developing artificial intelligence (AI) or specific hardware choices. But when it came time for Musk to speak, the CEO promised Tesla owners that, pending regulatory approval, by 2020 their existing vehicles would become autonomous “robotaxis.” Owners could send them out to drive fare-paying passengers around—an opportunity that Musk said could earn Tesla owners up to $30,000 per year. “By the middle of next year, we’ll have over a million Tesla cars on the road with FSD hardware, feature complete, at a reliability level that we would consider that no one needs to pay attention,” he said, as he was flanked by a prototype Tesla Roadster. “Meaning you could go to sleep, from our standpoint, if you fast-forward a year, maybe a year and three months, but next year for sure, we will have over a million robotaxis on the road. The fleet wakes up with an over-the-air update. That’s all it takes.”

Shortly after introducing Autopilot, Tesla CEO Elon Musk spoke at an event Wednesday, Oct. 14, 2015, at the company's Palo Alto, Calif., headquarters.

Photo: David Paul Morris/Bloomberg/Getty Images

Two years later, that functionality still hasn’t materialized. And Tesla has acknowledged to California state regulators that even the latest Full Self-Driving software the company released to some vehicles in late October doesn’t actually drive autonomously and that it still requires a driver to steer, brake, or accelerate as needed, according to documents made public by the legal advocacy group PlainSite. That puts it in the same league as technologies that are already out there, including new and upcoming systems such as Ford BlueCruise, GM Ultra Cruise, and Lucid DreamDrive, which all still require a driver’s constant attention.

“There’s a fundamental contradiction between Tesla’s legal position on Autopilot and Tesla’s marketing position on Autopilot,” Wansley says. “If you read the marketing, they call it ‘Full Self-Driving.’ If you read the letter that Tesla sent California regulators, they say, ‘This is not a self-driving or autonomous vehicle.’” Wansley suspects this is because such a definition exempts Tesla from the rigorous standards in California that prototype self-driving cars must meet in order to be tested on public roads.

The confusion between self-driving cars and autonomous features is a real problem, says Heidi King, who was the deputy and acting administrator of NHTSA between September 2017 and September 2019. “Exaggerated public claims continue to create safety risks, increase driver confusion about vehicle features, and support hype and over-investment in features that might not be road-ready for decades—if ever,” she tells CR.

And that confusion continues when drivers get behind the wheel. A car that automates some tasks well enough that drivers may occasionally stop paying attention may be more dangerous than a car without those automation features, says Kelly Funkhouser, head of connected and automated vehicle testing at Consumer Reports. “Driving can be boring,” she says. “If the car does the few things we were doing and does them well, drivers are going to seek out stimulation from elsewhere. In other words, if the system performs well, drivers might disengage and be less ready to take over if the system fails.”

A Collision Course With Regulators

With its Silicon Valley roots, Tesla moves a lot faster than NHTSA. Even though NHTSA tells CR it is currently in the midst of formulating multiple rules regarding vehicle automation, it can take almost a decade for a new regulation to be issued, according to Paul Hemmersbaugh, a chief counsel at NHTSA from 2015 through 2017 who is now an attorney at the law firm DLA Piper. (Because of his prior work at NHTSA, Hemmersbaugh declined to comment on CR’s specific inquiries about investigations involving Tesla, but he agreed to talk about the general challenges of regulating emerging automotive technology.)

“For people in the companies that are from the tech world,” he tells CR, “those companies who are relatively new entrants who are not accustomed to this, they are questioning the need and perhaps the efficacy of these different regulatory requirements and processes.”

David Friedman is now Consumer Reports’ vice president of advocacy, but from May 2013 through September 2014, he was deputy administrator at NHTSA. He thinks the agency often doesn’t lean into its full authority. 

“One reason is that it is effectively operating on a shoestring when it comes to budget and staff due to lack of support from Congress for decades,” he says. “The agency also has internalized years of legal threats and delay tactics from car companies that make it hard to act quickly and efficiently. Combine that with political leadership whose priorities vary greatly from administration to administration, and it’s a recipe for progress to be halting at best.”

In 1966, President Lyndon Johnson signed the National Traffic and Motor Vehicle Safety Act, which gave the federal government a mandate to set rules for automotive safety for the first time in U.S. history. If those rules weren’t followed, automakers would have to recall any unsafe vehicles that already had been sold. By 1970, Congress had consolidated these authorities and others under a single agency—the National Highway Traffic Safety Administration—within the Department of Transportation.

Department of Transportation headquarters in Washington, D.C.

Photo: Mark Van Scyoc/Shutterstock

But automakers weren’t interested in being told what to do. Throughout the 1970s, they frequently clashed with NHTSA over rules that they argued were unreasonable or impossible to carry out, including rules that would’ve put airbags in cars decades before they became mandatory. Sometimes the automakers sued, and courts usually found in their favor. Those who have worked at NHTSA tell CR that the agency is still “gun shy” as a result.

In 1981, the Reagan administration put NHTSA in its crosshairs in an attempt to do away with what it considered onerous regulations, slashing the agency’s budget and putting industry-friendly regulators in charge. In the 1990s and early 2000s, Congress directed the agency’s resources toward high-profile crashes and defects. Over the past decade, the agency’s focus has turned largely to organizing and enforcing major recalls, such as the massive Takata airbag campaign, and working on campaigns to prevent bad driver behaviors, such as drunken driving and failing to wear seat belts. 

Today, NHTSA has about 600 employees, and it hasn’t had a permanent administrator since 2017. (In October, President Joe Biden nominated current acting administrator Steven Cliff as permanent administrator.) NHTSA’s budget is about one-eighteenth the size of the Federal Aviation Administration’s and less than one-thirtieth of Tesla’s 2020 revenue. Tesla has more than 70,000 employees, and its 2020 revenue was over $30 billion.

A decal advertising Tesla's Autopilot feature.

Photo: Bloomberg/Getty Images

Perhaps most significantly, NHTSA doesn’t have the power to proactively stop automakers from releasing a product. Most automakers voluntarily comply with regulations, say the regulatory attorneys CR interviewed for this article, largely because they don’t want to face a product recall or a lawsuit.

“NHTSA does not ‘approve’ or test vehicles prior to their introduction,” an agency spokesperson told CR in an email. Instead, automakers are responsible for ensuring that their vehicles meet all NHTSA safety standards, and they must certify the compliance of their vehicles. But they have a broad ability to experiment outside the areas that those standards govern. For example, while federal safety standards sometimes reference a “steering wheel,” they do not explicitly require the steering control mechanism to be round. That gray area allows Tesla to sell a Model S with a yoke-style steering control, as it does now.

By comparison, in Europe, regulators must approve new vehicles before they go on sale, a process known as type approval. “If the EU says you do it, if you want to sell cars in Europe, you have to do it,” says Matthew Avery, director of insurance research at Thatcham Research, a road safety research agency funded by the insurance industry and based in the U.K. He’s also a board member of the European New Car Assessment Programme (Euro NCAP), an organization that performs safety evaluations on new cars sold in Europe. A lack of type approval is why European Teslas have limited automatic lane-change functionality on highways but U.S. Teslas do not.

Two Roads to Autonomy

Tesla is not the only automaker to have made unfulfilled promises about the future of self-driving cars. In 2016, executives from BMW, Intel, and Mobileye—a tech supplier that provides automation hardware and software for the majority of automakers, and would later become a subsidiary of Intel—gathered at BMW’s Munich headquarters for a major announcement. Wearing matching white button-up shirts and slim trousers, the executives declared that together they would put a fully automated car “into series production” by 2021. The same year, a Business Insider analysis predicted that 10 million self-driving cars would be on the road by 2020. Other automakers and tech companies—including Ford, GM, and Waymo, a subsidiary of Google parent company Alphabet—poured money into autonomous vehicle research. The self-driving revolution seemed just around the corner.

Harald Krueger, CEO of German carmaker BMW, speaks beside Amnon Shashua, co-founder, chairman and CTO of Mobileye, and Brian Krzanich, CEO of Intel, during a press conference July 1, 2016, in Munich.

Photo: Christoph Stache/Getty Images

None of those predictions has come to pass. It’s true that self-driving car prototypes from companies including Argo.ai, Cruise, Mobileye, and Waymo are being tested now on public roads. But these vehicles have been through or are currently part of test programs that are limited to driving in specific environments, and—most importantly—they’re not available for the public to purchase. These programs have not been without incident: Uber’s self-driving car program, which has since been sold off, was responsible for the death of a pedestrian in 2018, and Waymo has self-reported 18 instances of “contact” involving its prototypes. But the programs are subject to state and local restrictions, and many of them make reports publicly accessible when crashes occur.

The vehicles are also extremely expensive, largely because they rely on multiple types of sensors, including radar and the especially pricey Lidar, which uses light to accurately pinpoint distances. Like Tesla, some of these companies use artificial intelligence software, which can interpret what a vehicle’s sensors detect, then learn to adapt to novel scenarios on roadways.

But Tesla swore off Lidar and radar, and uses only cameras to feed information to its AI software. Computer scientists say this software can learn over time that the pixels detected by the cameras can represent vehicles, stop signs, or bicycles, for example.

Opting for the camera-AI system also gives Tesla a financial advantage. In order to sell FSD as a mass-market option, Tesla couldn’t rely on pricey hardware like Lidar, says Raj Rajkumar, a professor in the department of electrical and computer engineering at Carnegie Mellon University in Pittsburgh. “The only thing they could add that their customers could afford was cameras,” he tells CR. Importantly, AI can improve its functionality as it encounters new situations. That means Tesla can sell an incomplete version of FSD today as a $10,000 option while promising that it will work better in the future.

So far, FSD hasn’t delivered. Earlier this year, experts who watched videos of Teslas equipped with FSD Beta 9 software navigating on city streets told Consumer Reports that the cars acted “like a drunk driver.” In July, Musk told users of an earlier release of FSD beta software to “be paranoid” and later tweeted that version 9.2 of the software was “not great,” even though it was deployed on public roads.

In October, Musk pulled back the release of version 10.3 of the company’s FSD beta software, which had already been made available for drivers to use on public roads. In a tweet, he said the updated software did not perform as well on left turns at traffic lights, so the company chose to roll cars back to an earlier version.

“While there is a lot of attention focused on the FSD beta, current drivers are likely to be very alert while using this admittedly buggy, evolving software,” says Fisher at CR. “The larger risk today is still with Autopilot. It is so capable that it could lead drivers to over-rely on it when they need to continue to pay attention to the road.”

The Human Toll of Automation

The first crash in the U.S. involving Autopilot happened seven months after the technology was released. Joshua Brown, a 40-year-old Navy veteran and Tesla enthusiast, was killed in May 2016 after his Model S crashed into the side of a tractor-trailer that was crossing a roadway near Williston, Fla.

There have now been at least 33 crashes where Autopilot may have been a factor, according to NHTSA. Lawsuits, police reports, and obituaries paint a fuller picture of the crashes: In March 2018, 38-year-old Walter Huang was killed when his Model X, with Autopilot in use, crashed into a barrier in Mountain View, Calif. A game was active on his phone before the crash. Huang had earlier told family members that, with Autopilot activated, his Model X experienced issues near the same highway barrier that it later hit. In 2019, Jeremy Banner of Lake Worth, Fla., was killed on his way to work when his Tesla hit the side of a tractor-trailer in a crash that was reminiscent of the one that killed Brown. Later that year, Jenna Monet, 23, was killed on an Indiana highway when the 2019 Model 3 driven by her husband—who was seriously injured—struck the back of a stopped fire engine. In August 2020, David and Sheila Brown, married for 52 years, were killed when their Tesla struck multiple vehicles before it caught fire. Shawn Hudson of Winter Garden, Fla., was severely injured after his 2017 Model S struck a disabled vehicle in 2018. Justine Hsu lost multiple teeth and suffered facial injuries after her 2016 Model S hit a median in Arcadia, Calif., in July 2019.

Lawsuits and police reports show that others have been injured or killed while driving, walking, or standing on the same roads as Autopilot-equipped Teslas. NHTSA says its Special Crash Investigations team is looking into many of these crashes. But the agency hasn’t published a final report about any Autopilot-involved crash since 2018, and it tells CR that it won’t comment on pending investigations.

“It sets off alarm bells to see 33 incidents and 10 deaths potentially tied to Autopilot, so the lack of aggressive action from Tesla and NHTSA has been baffling,” says CR’s Friedman. “The Takata airbag recall, the largest in history, started with an investigation into just three incidents. Consumers expect their products to be safe, and companies should be fixing safety hazards before—not after—people are injured or killed.”

The aftermath of the crash that killed Walter Huang in Mountain View, Calif.

Photo: NTSB/S. Engleman

In the meantime, another government agency has taken on Tesla in a very public way. 

The National Transportation Safety Board, which investigates aviation, railway, and other significant transportation incidents, has frequently called for changes to how Tesla’s driver assistance technology is regulated and marketed. In September, NTSB chair Jennifer Homendy told the Wall Street Journal that the use of the term “full self-driving” is “misleading and irresponsible,” and she argued that Tesla “has clearly misled numerous people to misuse and abuse technology.”

On Oct. 25, Homendy sent Musk a letter encouraging Tesla to take action on recommendations the agency issued in its investigation of the crash that killed Joshua Brown and has reiterated several times in the years since. “If you are serious about putting safety front and center in Tesla vehicle design, I invite you to complete action on the safety recommendations we issued to you four years ago,” she wrote. 

But the NTSB has no regulatory authority. For instance, after the agency told Tesla and five other automakers in 2017 that they should make changes to their driver assistance systems, Tesla “ignored” the recommendation, then-NTSB chair Robert Sumwalt told Reuters. While the NTSB can write detailed crash reports and make recommendations, any regulation or enforcement must come from NHTSA, which has not yet taken any action to curtail the use of Autopilot or FSD.

The Blame Game

In the immediate aftermath of the crash that killed Walter Huang, media outlets including Bloomberg published a statement from Tesla claiming that “the false impression that Autopilot is unsafe will cause harm to others on the road,” and that “the only way for this accident to have occurred is if Mr. Huang was not paying attention to the road.” It also contrasted Huang’s death with the lives the company claims Autopilot has saved: “The reason that other families are not on TV is because their loved ones are still alive,” it said. The company has stuck to that script: “The 10% that do die with autonomy are still going to sue you. The 90% that are still living aren’t even going to know it’s the reason they’re alive,” Musk told journalist Kara Swisher at an event in September.

While drivers killed in some of these crashes may have been misusing the technology by playing video games on their personal phones, or otherwise not paying attention to the road, Tesla still may bear some responsibility, says Wansley, the law professor.

“A driver can be negligent, and also Tesla can be negligent for either their marketing or their failure to warn, or their failure to take other types of precautions that would keep the driver paying attention to the vehicle,” he tells CR. So far, multiple civil cases involving Autopilot-related fatalities and injuries are pending in courts across the country, including suits brought by the surviving family members of people killed in crashes involving Teslas with Autopilot engaged.

And headlines about those crashes could delay the adoption of self-driving cars or force additional restrictions on their use, Avery says. “It’s not just Tesla that could be damaged. The whole concept of automation could be damaged.”

Ultimately, despite what Musk has said, it is still not clear whether driver assistance technologies such as Autopilot and FSD actually make roads safer. In 2017, NHTSA released a safety report based on Tesla-provided data. It found that the number of crashes with an airbag deployment fell after Tesla introduced the automated steering function of Autopilot. However, an independent analysis published in February 2019 found issues with the data that were made available and flaws in how NHTSA interpreted them. Based on that analysis, crashes with airbag deployment may actually have increased after the introduction of Autopilot.

A Tesla service center in Austin, Texas.

Photo: Getty Images

Now, NHTSA is gathering more data. The agency issued a new standing general order in June that requires automakers to report to the agency any crash involving any form of driving automation technology, including Autopilot and FSD. King, the former NHTSA deputy administrator, and others say such data are important for regulators to have in order to test safety claims. “It’s possible that some drivers have become over-reliant on lane keeping assist, navigation systems, and emergency collision avoidance, but collision reports may not yet be updated to collect consistent data about whether these features contributed to a crash,” she says.

What's Next?

The data collection order is evidence that NHTSA may be trying to determine how to identify potential defects in driving automation systems and what safety standards to set for them.

“It’s a significant step, and just from an outside observation perspective, I think it may well signal the beginning of a more active regulatory agenda and process by NHTSA,” Hemmersbaugh, the agency’s former chief counsel, tells CR.

Lars Moravy, Tesla’s vice president of vehicle engineering, told investors on an October conference call that the company is complying with NHTSA’s order. “We were quick to respond to that, and one of the first and only companies capable of actually meeting the needs of that report,” he said. “We continue to send that information to them as required weekly and as incidents occur.”

It also appears that NHTSA is bringing in outside expertise. In October the agency hired Missy Cummings, director of the Humans and Autonomy Lab at Duke University in Durham, N.C., on a temporary basis as a senior advisor for safety. Cummings’ past research has found flaws with Tesla’s claims about Autopilot and FSD.

If regulators do take some action, Tesla owners may see changes to their cars as part of a product recall. For example, CR’s Funkhouser says that an effective driver monitoring system could prevent drivers from foreseeably misusing these systems. Many Tesla vehicles already have in-car cameras that the company uses to study footage of crashes or other anomalies, and to monitor FSD beta testers. “The easiest and simplest thing Tesla could do to immediately increase the safety for drivers and other road users is to use the driver monitoring camera in an effective way every time Autopilot is active,” she says. Other automakers, including BMW and Subaru, also use some form of driver monitoring, but only Ford and GM say they use infrared cameras to determine whether drivers are paying attention while automated steering, braking, and acceleration functions are active.

Sen. Ed Markey, D-Mass., at a Senate Committee on Commerce, Science, and Transportation hearing on automated vehicles on Nov. 20, 2019, in Washington, D.C.

Photo: Sarah Silbiger/Getty Images

Regulators also could force Tesla to change how it markets Autopilot or FSD. For example, Wansley says that he could envision a scenario in which Tesla cannot publicly use the term “Full Self-Driving” to describe software that doesn’t make a vehicle fully autonomous.

“There’s a lot of stuff that could be done with the marketing that is far short of taking this technology off the road,” Wansley says. That has already happened in Germany, where a court ordered Tesla to stop representing the system as providing full vehicle autonomy. In the U.S., Sens. Edward Markey, D-Mass., and Richard Blumenthal, D-Conn., sent a letter in August to the head of the Federal Trade Commission calling for an investigation into whether Tesla has engaged in deceptive marketing practices regarding the capabilities of Autopilot and FSD.

“Tesla often says that they don’t do marketing, but of course, the words ‘Autopilot’ and ‘Full Self-Driving’ are in a sense marketing these features by using phrases people are already familiar with,” Funkhouser says.

Walking back some of its more outrageous promises would certainly be uncharacteristic for a company that has grown as rapidly as Tesla, especially if customers already have paid for technology that regulators end up prohibiting. But that’s a situation of Tesla’s own making, Avery says.

“Obviously if you’ve got the regulators saying, you can’t do that, there’s a bit of tension there with obligations that Tesla have got to their customer base,” he says. “To which the regulator would say, ‘That ain’t our problem, is it?’”

—Additional reporting by Benjamin Preston



Keith Barry

Despite my love for quirky, old European sedans like the Renault Medallion, it's my passion to help others find a safe, reliable car that still puts a smile on their face—even if they're stuck in traffic. When I'm not behind the wheel or the keyboard, you can find me exploring a new city on foot or planning my next trip.