A car that is fully controlled by a computer doesn’t get drowsy or distracted. It doesn’t get drunk or impaired by other drugs. If it’s instructed not to go above the speed limit, it won’t. Human error, which is at least partly responsible for 94% of today’s highway crashes, can largely be eliminated if the human driver becomes just another passenger. And with the unacceptable carnage of more than 37,000 deaths in motor vehicle crashes in 2016 alone, we can use all the help we can get. There’s no question that the potential benefits of autonomous vehicles are nothing short of phenomenal.
Getting there, however, will not be as easy as many people think. We recently held a Board meeting to consider the 2016 crash of a partially automated Tesla into a tractor‑trailer near Williston, Florida. The driver wasn’t paying attention to the road as he should have been, and the system allowed him to use its “Autopilot” feature in places where it wasn’t designed to operate. The automation used torque on the steering wheel as a proxy for driver engagement and alerted the driver if too much time passed without detectable torque on the wheel; the driver treated the alerts as nuisances, dutifully applying torque each time an alert sounded and then taking his hands off the wheel again. Although the driver was ultimately responsible for the crash in which he tragically lost his life, the automation allowed him to make unsafe choices.
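To make the failure mode concrete, here is a minimal sketch of torque-as-a-proxy engagement monitoring. This is not Tesla’s actual logic; the class name, thresholds, and timing are hypothetical illustrations of the general approach the investigation described.

```python
# Hypothetical sketch of torque-based driver-engagement monitoring.
# Names and thresholds are illustrative, not any manufacturer's values.

class TorqueEngagementMonitor:
    """Infers driver engagement from steering-wheel torque alone."""

    def __init__(self, alert_after_s: float = 60.0, torque_threshold_nm: float = 0.5):
        self.alert_after_s = alert_after_s              # allowed time with no detectable torque
        self.torque_threshold_nm = torque_threshold_nm
        self.last_torque_time_s = 0.0

    def update(self, now_s: float, wheel_torque_nm: float) -> str:
        if abs(wheel_torque_nm) >= self.torque_threshold_nm:
            # Any brief tug on the wheel resets the timer, whether or not
            # the driver is actually watching the road.
            self.last_torque_time_s = now_s
            return "ENGAGED"
        if now_s - self.last_torque_time_s > self.alert_after_s:
            return "ALERT"                              # nag the driver
        return "ASSUMED_ENGAGED"
```

The weakness is visible in the sketch itself: the proxy measures momentary hands-on-wheel torque, not attention to the road, so a driver can silence each alert with a quick tug and immediately disengage again.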

Flash back to 1914. An airplane flies past reviewing stands full of spectators. The pilot holds his hands high in the air to demonstrate that the airplane is flying itself. The plane makes another pass, then another. According to aviation lore, by the third pass, the pilot, Lawrence Sperry, is walking on the wings. Sperry was showing off his entry in an international aviation safety exhibition: the world’s first primitive autopilot, the gyroscopic stabilizer. It allowed a plane to fly straight and level without pilot input for short periods at a time.
In the years since, aircraft automation has become much more sophisticated. Planes now have systems that sense terrain, GPS to tell them where they are, and a vehicle-to-vehicle technology called the traffic collision avoidance system to help them avoid other aircraft. Thanks in large measure to these technologies, aviation has become much safer. Yet in 2013, nearly 100 years after Sperry’s demonstration, Asiana Flight 214, with more than 300 people on board, approached San Francisco International Airport too low and too slow and crashed into a seawall, killing three passengers.

The Asiana crash demonstrated automation confusion: the pilot thought that the auto‑throttle was maintaining the speed he selected, but he had inadvertently and unknowingly caused the auto‑throttle to become inactive. It also demonstrated that, due to longstanding overreliance on the automation, the pilot’s manual flying skills had degraded so much that he was uneasy about landing the plane manually on a 2‑mile‑long runway (that’s a long runway!) on a beautiful, clear day.
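The trap is easier to see in miniature. Here is a hedged sketch, with generic mode names rather than the 777’s actual mode logic, of how a speed target can remain set and displayed while the controller that enforces it has silently gone dormant:

```python
# Generic sketch of automation mode confusion: the target is still set and
# displayed, but the active mode no longer acts on it. Not the 777's logic.

class AutoThrottle:
    def __init__(self, target_speed_kts: float):
        self.target_speed_kts = target_speed_kts
        self.mode = "SPEED"                     # actively maintains the target

    def command(self, current_speed_kts: float) -> float:
        if self.mode != "SPEED":
            return 0.0                          # dormant: commands nothing, speed decays
        # Simple proportional correction toward the target.
        return 0.1 * (self.target_speed_kts - current_speed_kts)

at = AutoThrottle(target_speed_kts=137)         # Asiana 214's selected approach speed
at.mode = "HOLD"                                # side effect of an unrelated crew action
# The speed window still reads 137 knots, but the throttles no longer move.
assert at.command(current_speed_kts=120) == 0.0
```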
We’ve investigated automation-related accidents in all modes of transportation. Our investigators see accident after accident involving problems at the interface between the automation and the human operator, and we see far too often that humans are not reliable at passively monitoring automation. In cases like the Asiana crash, we also see that humans get rusty when they don’t use their skills.
The Williston crash showed the kinds of errors that are to be expected with what’s called level 2 automation. The human driver was responsible for monitoring the environment, but the automation allowed him to shirk that responsibility. This result was foreseeable, given the unfortunate choice of the moniker “Autopilot,” which may suggest to an ordinary driver that the car can fully control itself (unlike pilots, who know they must stay engaged even when their airplane is operating on autopilot). One lesson learned, then, is that if the automation is designed to be used only in certain circumstances, it should be “geo-fenced” so that it works only in those circumstances, rather than depending on the driver to make that judgment.
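As a sketch of what geo-fencing might look like in code, consider the following. The field names, road categories, and function here are hypothetical; the point is that the system, not the driver, gates where the feature can engage.

```python
# Hypothetical sketch of geo-fencing an automation feature to its design domain.

from dataclasses import dataclass

@dataclass
class RoadContext:
    road_class: str          # e.g., "limited_access_highway", "divided_highway", "local"
    has_cross_traffic: bool  # divided highways with intersections still count

def autopilot_permitted(ctx: RoadContext) -> bool:
    """Allow engagement only where the feature was designed to operate,
    rather than relying on the driver to decide appropriately."""
    return ctx.road_class == "limited_access_highway" and not ctx.has_cross_traffic

# The Williston road was a divided highway with cross traffic, a context that
# a geo-fence like this would have excluded.
assert not autopilot_permitted(RoadContext("divided_highway", has_cross_traffic=True))
assert autopilot_permitted(RoadContext("limited_access_highway", has_cross_traffic=False))
```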
What can we expect as our cars move beyond level 2? The aviation experience has demonstrated that as automation increases, so do the challenges. As automation becomes more complicated, drivers are less likely to understand it; as it becomes more reliable, drivers become more complacent, less skillful, and less vigilant for potential failures. As a result, when a failure does occur in a more complicated and more reliable system, drivers are less likely to recover from it successfully.
In the Asiana investigation, we found that the airline used the available automation as fully and as often as possible. After the crash, we recommended that the airline require more manual flying, both in training and in line operations—not because we’re against technology, but because we see what can happen when pilots lose their skills because they’re not using them.
Then there’s the question of removing the driver altogether. Airliners will have pilots for the foreseeable future because aviation experts have not yet developed a “graceful exit” for when the automation fails or encounters circumstances it wasn’t designed for. Similarly, drivers will be in the picture until the industry develops a graceful exit for automation that fails or encounters unanticipated circumstances . . . and unanticipated circumstances are certainly abundant on our streets and highways.
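A “graceful exit,” in the simplest terms, is a fallback path that keeps the vehicle safe when the automation can no longer do its job. Here is a minimal, hypothetical sketch of the idea; the states and policy are illustrative, and real systems would involve far more conditions and checks:

```python
# Hypothetical sketch of a "graceful exit" for automation failure. The states
# and policy are illustrative, not any manufacturer's design.

def graceful_exit(driver_responding: bool) -> str:
    """Pick a fallback when the automation fails or meets a situation
    it cannot handle."""
    if driver_responding:
        return "HAND_CONTROL_TO_DRIVER"     # supervised handover with warnings
    # If the human doesn't take over, the vehicle itself must reach a
    # minimal-risk condition instead of simply giving up.
    return "SLOW_AND_PULL_OVER"             # hazards on, stop clear of traffic
```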
In every one of our investigations, we study the human, the machine, and the environment. Across all modes, humans and their interactions with automation are a common denominator in accidents’ probable causes. For 50 years, we’ve been finding answers to help the transportation industry save lives, and when our recommendations are put into practice, the industry and the public generally realize safety benefits. We are excited about the opportunities to use the lessons we’ve learned over these many years to help the transportation industry move toward safer vehicles, regardless of who (or what) is operating them.
We’ve come a long way since Lawrence Sperry’s gyroscopic stabilizer, but as accidents like Asiana and Williston show, we’ve still got a way to go before automation can significantly reduce fatalities on our streets and highways. We look forward to continuing to work with vehicle manufacturers to help them develop safer and more reliable automated transportation.