Following a series of crashes, one of which was fatal, Tesla Motors, the automaker known for its high-performance electric vehicles and envelope-pushing technology, is now under intense scrutiny for the way it deployed and marketed its Autopilot driving-assist system.

The company's aggressive roll-out of self-driving technology—in what it calls a "beta-test"—is forcing safety agencies and automakers to reassess the basic relationship between human drivers and their increasingly sophisticated cars. Last week, the National Highway Traffic Safety Administration (NHTSA) sent a letter to Tesla requesting detailed information about Autopilot, including any design changes and updates to the system, as well as detailed logs of when the system has prompted drivers to take over steering.

The most serious of the Autopilot crashes happened in Florida on May 7. According to the accident report, 40-year-old Ohio resident Joshua Brown died in a collision near Williston, Fla., with a tractor trailer that was making a left turn in front of his Model S. Tesla later acknowledged that the car was in Autopilot mode at the time. On June 30, Tesla published a blog post about the accident, stating "neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied."

Autopilot comprises multiple systems (including Autosteer and Auto Lane Change) that use cameras, radar, ultrasonic sensors, and data to, in Tesla's words, "automatically steer down the highway, change lanes, and adjust speed in response to traffic." The company also claims the features "help the car avoid hazards and reduce the driver's workload."

The Florida crash has prompted investigations by NHTSA and the National Transportation Safety Board (NTSB). Meanwhile, The Wall Street Journal reported the Securities and Exchange Commission is investigating whether Tesla failed to tell investors about the crash in a timely fashion.

While the exact cause of the fatal accident is not yet known, the incident has caused safety advocates, including Consumer Reports, to question whether the name Autopilot, as well as the marketing hype of its roll-out, promoted a dangerously premature assumption that the Model S was capable of truly driving on its own. Tesla's own press release for the system announced "Your Autopilot has arrived" and promised to relieve drivers "of the most tedious and potentially dangerous aspects of road travel." But the release also states that the driver "is still responsible for, and ultimately in control of, the car."

Consumer Reports experts believe that these two messages—your vehicle can drive itself, but you may need to take over the controls at a moment's notice—create potential for driver confusion. They also increase the possibility that drivers using Autopilot may not be engaged enough to react quickly to emergency situations. Many automakers are introducing this type of semi-autonomous technology into their vehicles at a rapid pace, but Tesla has been uniquely aggressive in its deployment. It is the only manufacturer that allows drivers to take their hands off the wheel for significant periods of time, and the fatal crash has brought the potential risks into sharp relief.

"By marketing their feature as ‘Autopilot,' Tesla gives consumers a false sense of security," says Laura MacCleery, vice president of consumer policy and mobilization for Consumer Reports. "In the long run, advanced active safety technologies in vehicles could make our roads safer. But today, we're deeply concerned that consumers are being sold a pile of promises about unproven technology. 'Autopilot' can't actually drive the car, yet it allows consumers to have their hands off the steering wheel for minutes at a time. Tesla should disable automatic steering in its cars until it updates the program to verify that the driver's hands are on the wheel."

Companies must commit immediately to name automated features with descriptive—not exaggerated—titles, MacCleery adds, noting that automakers should roll out new features only when they're certain they are safe.

"Consumers should never be guinea pigs for vehicle safety 'beta' programs," she says. "At the same time, regulators urgently need to step up their oversight of cars with these active safety features. NHTSA should insist on expert, independent third-party testing and certification for these features, and issue mandatory safety standards to ensure that they operate safely."


 

Consumer Reports Calls for Tesla to Do the Following:

  • Disable Autosteer until it can be reprogrammed to require drivers to keep their hands on the steering wheel.
  • Stop referring to the system as "Autopilot," a name that is misleading and potentially dangerous.
  • Issue clearer guidance to owners on how the system should be used and what its limitations are.
  • Test all safety-critical systems fully before public deployment; no more beta releases.

Consumer Reports contacted Tesla about these concerns, and the company sent this response via email:

"Tesla is constantly introducing enhancements, proven over millions of miles of internal testing, to ensure that drivers supported by Autopilot remain safer than those operating without assistance. We will continue to develop, validate, and release those enhancements as the technology grows. While we appreciate well-meaning advice from any individual or group, we make our decisions on the basis of real-world data, not speculation by media."

Tesla also defended the safety record of the system, writing that "130 million miles have been driven on Autopilot, with one confirmed fatality." The company underscored that its beta software development process includes "significant internal validation."

Consumer Reports has owned three Teslas (a 2013 Model S 85, a 2014 Model S P85D, and a 2016 Model X 90D), and we've seen firsthand how such beta software is transmitted wirelessly to the cars. When software for a desktop computer or handheld electronic device is labeled "beta," it typically means that the functionality is not fully developed and is still being fine-tuned.

Tesla says that the system makes frequent checks to ensure that the driver's hands remain on the wheel. But in our recent testing of a Model X in Autopilot mode on a long, straight road, more than three minutes passed after our tester's hands left the wheel before the vehicle gave any warning.

As early adopters, many Tesla owners may want to test the limits of the cutting-edge features included in updates to their cars. And while some drivers may be skilled with such features and understand their limitations, overconfidence in or overreliance on the technology can have potentially fatal consequences.

Regaining Control

Research shows that humans are notoriously bad at re-engaging with complex tasks after their attention has been allowed to wander. According to a 2015 NHTSA study (PDF), it took test subjects anywhere from three to 17 seconds to regain control of a semi-autonomous vehicle when alerted that the car was no longer under the computer's control. At 65 mph, that's anywhere from roughly 300 feet to more than a quarter-mile traveled by a vehicle effectively under no one's control.
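
For readers who want to check that math, here is a quick back-of-the-envelope sketch (our illustration, not part of the NHTSA study): the distance covered during a handoff is simply speed multiplied by reaction time, assuming the car neither brakes nor accelerates in the meantime.

    # Distance traveled during a handoff, assuming constant speed (no braking).
    MPH_TO_FPS = 5280 / 3600  # one mile per hour expressed in feet per second

    def handoff_distance_feet(speed_mph, reaction_time_s):
        """Feet traveled while the driver regains control."""
        return speed_mph * MPH_TO_FPS * reaction_time_s

    for seconds in (3, 17):  # NHTSA's reported range of takeover times
        feet = handoff_distance_feet(65, seconds)
        print(f"{seconds:>2} s at 65 mph -> {feet:,.0f} ft ({feet / 5280:.2f} mi)")
    # Roughly 286 ft for 3 seconds and about 1,621 ft (0.31 mi) for 17 seconds.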

This is what's known by researchers as the "Handoff Problem." Google, which has been working on its Self-Driving Car Project since 2009, described the Handoff Problem in a 2015 monthly report (PDF). "People trust technology very quickly once they see it works. As a result, it's difficult for them to dip in and out of the task of driving when they are encouraged to switch off and relax," said the report. "There's also the challenge of context—once you take back control, do you have enough understanding of what's going on around the vehicle to make the right decision?"

Autonomous vehicle operation also relies upon the technology working correctly. Speaking at the National Press Club shortly after the Tesla crash, Christopher A. Hart, the head of the NTSB, acknowledged that true driverless cars could significantly improve highway safety. But he also pointed out that decades of investigating transportation accidents had revealed deadly consequences when humans rely upon systems that don't always work properly.

"The theory of removing human error by removing the human assumes that the automation is working as designed," said Hart. "So the question is what if the automation fails. Will it fail in a way that it is safe? If it cannot be guaranteed to fail safe, will the operator be aware of the failure in a timely manner, and will the operator then be able to take over to avoid a crash?"

As the Tesla Autopilot crash illustrates, when an automated system fails to detect an obstacle it was designed to see, the human operator must be able to take the controls and react quickly, or the results can be fatal.

In his speech, Hart of the NTSB cited a 2009 Washington, D.C., Metro train accident that killed the train operator and eight passengers. In that crash, a train became "electronically invisible," so the operator of the train behind it assumed the track was unoccupied and was not warned in time to avoid a collision.



Exercising Caution

Because it remains unclear how responsibility should be divided between the driver and the car, some automakers are avoiding partial automation for the time being, focusing instead on driver assistance technologies.

Ford Motor Co. offers driver assistance technology on many of its vehicles, and it has a research program dedicated to developing future cars that will take the driver completely out of the loop. But according to company spokesman Alan Hall, the automaker is currently avoiding limited self-driving systems that require human oversight. "We are first researching to ensure that drivers can regain control of the vehicle in a responsible and timely manner," he says. "The technology at that level still requires driver attention in certain circumstances."

Other automakers, including BMW and Volvo, have expressed the need for caution and more controlled testing of semi-autonomous systems that require driver oversight before they're released to the public.

Harald Krüger, BMW's chief executive, stated at a press conference last week that BMW would need "the next few years" to perfect the autonomous driving system it hopes to put on public roads by 2021. "Today, the technologies are not ready for serious production."

Regulators have had trouble keeping pace with the development and deployment of autonomous systems. NHTSA issued its first preliminary statement of policy back in 2013. The agency updated that policy statement earlier this year and promised to announce best practices for the industry later this summer. But NHTSA has not said whether it will propose binding rules requiring safety standards for automakers that build these systems into their cars. When Consumer Reports asked NHTSA about its regulatory plans for self-driving cars, the agency directed us to previous statements but didn't reply to specific questions.