The Challenges of Partially Automated Driving

Correction, September 2016: Note the Erratum (error correction) at the bottom of this column. It changes one of our examples but does not affect the points or the conclusions of the article.

I am pleased to say that my paper with Steve Casner and Ed Hutchins, The Challenges of Partially Automated Driving, has been published in the Communications of the ACM.

Casner, S. M., Hutchins, E. L., & Norman, D. (2016). The challenges of partially automated driving. Communications of the ACM, 59(5), 70-77. You can get a copy from the CACM website.

The development of mostly automated vehicles poses major challenges for us. For a long time I have argued that the most dangerous part of the transition from manual driving to full automation is when the job is mostly complete -- which is precisely where we are today.

The argument has been made many times, first by Lisanne Bainbridge in 1983 -- 33 years ago! I made the argument in 1990. Nothing has changed.

In this paper, we once again warn that partial automation lulls drivers into a false sense of security. Moreover, people are especially bad at maintaining vigilance and situation awareness for long periods when nothing is happening or when their assistance is not needed. In 2014 (the latest year for which statistics are available), there was roughly one death for every 100 million vehicle miles. One per 100 million miles. Even so, there were over 33 thousand deaths in the United States plus roughly 1 million injuries. Americans drove almost 3 trillion miles.

In other words, the chance of a death on any given drive is tiny, but because we drive so much, the small probability adds up to a lot of deaths and injuries. Because the incident rate is so low, most of the time our assistance will not be needed with the automation that will soon be available. Vehicle codes today insist that with automated vehicles, the driver must be ready to take over when things go wrong. This is a very misguided requirement. At 60 mph (roughly 100 km/h), a car travels roughly 90 feet (27 meters) in one second. The evidence shows that it takes at least 10 seconds for a driver to notice an anomaly and figure out what is happening: that is nearly 900 feet (270 meters)!
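For readers who like to check the arithmetic, here is a quick back-of-the-envelope sketch. The mileage and fatality-rate figures are simply the rounded values quoted above, not official statistics, and the distances follow from the unit conversions alone:

```python
# Back-of-the-envelope check of the figures quoted above.
# The inputs are the rounded values from the text, not official statistics.

MILES_DRIVEN = 3e12            # ~3 trillion vehicle miles driven in the U.S. (2014)
DEATHS_PER_MILE = 1 / 100e6    # ~1 death per 100 million vehicle miles

deaths = MILES_DRIVEN * DEATHS_PER_MILE
print(f"Estimated deaths: {deaths:,.0f}")   # ~30,000, consistent with the >33,000 reported

# Distance covered while a driver notices and diagnoses a problem.
SPEED_MPH = 60
FEET_PER_MILE = 5280
METERS_PER_FOOT = 0.3048
feet_per_second = SPEED_MPH * FEET_PER_MILE / 3600   # 88 ft/s, about 27 m/s

for seconds in (1, 10):
    feet = feet_per_second * seconds
    print(f"In {seconds:2d} s at {SPEED_MPH} mph: {feet:.0f} ft ({feet * METERS_PER_FOOT:.0f} m)")
```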

Studies of airline pilots show that they can take minutes to figure out what has gone wrong, and airline pilots are highly trained. Fortunately, when an airplane is at cruising altitude it is 5 to 6 miles high (roughly 10 km): they have time. Automobile drivers are poorly trained, and they may have only a fraction of a second.

This article is yet another in a long series on this topic.  Here are my articles (but first, the classic by Bainbridge):

Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775-779.

Casner, S. M., Hutchins, E. L., & Norman, D. (2016). The challenges of partially automated driving. Communications of the ACM, 59(5), 70-77. http://mags.acm.org/communications/may_2016/?folio=70&CFID=778898787&CFTOKEN=63291163&pg=72#pg72

Norman, D. A. (1990). The "problem" of automation: Inappropriate feedback and interaction, not "over-automation". In D. E. Broadbent, A. Baddeley & J. T. Reason (Eds.), Human factors in hazardous situations (pp. 585-593). Oxford: Oxford University Press.

Norman, D. A. (2007). The Design of Future Things. New York: Basic Books.

Norman, D. A. (2015). Automatic Cars Or Distracted Drivers: We Need Automation Sooner, Not Later. Retrieved September 26, 2015, from https://www.linkedin.com/pulse/automatic-cars-distracted-drivers-we-need-automation-sooner-norman?trk=prof-post

Norman, D. A. (2015). The human side of automation. In G. Meyer & S. Beiker (Eds.), Road Vehicle Automation 2. Springer International Publishing.

Norman, D. A. (2015, September 26). Ready or not, the fully autonomous car is coming. San Diego Union-Tribune. Retrieved from http://www.sandiegouniontribune.com/news/2015/sep/26/ready-or-not-the-fully-autonomous-car-is-coming/

---------------------------------------------

Erratum: John Lauber has pointed out that we badly mischaracterized the 1988 Airbus A320 crash. The plane was on a demonstration flight, piloted by the chief test pilot. We stated that "Automation and flight crew fought for control, and the autoflight system eventually flew the airplane into the trees." Lauber states that the actual scenario was quite different: "it was the deliberate, willful action of Captain Asseline, action that violated even his own pre-briefed plan for executing a 'high alpha' flyby for an air show crowd, that placed the aircraft in such a state of low energy at a critically low altitude that it could not physically clear the tree line at the end of the flyby 'runway.'"

Lauber pointed us to the accident report (http://asndata.aviation-safety.net/reports/1988/19880626-0_A320_F-GFKC.pdf) and noted that "Captain Asseline was convicted of (as were four others) and received a prison sentence for (he alone) involuntary manslaughter for his role in this accident."

We agree with his critique and are embarrassed by our own sloppiness in repeating the rumours about the incident without refreshing our memories by reading the accident report. We apologize to Lauber and to readers. We have known John Lauber for many years: he is a respected authority on aviation safety. He worked at NASA-Ames, served on the U.S. National Transportation Safety Board, and is now retired as Senior VP and Chief Product Safety Officer at Airbus.

Correcting this description makes no difference to the points of the paper: they are unchanged. Had we not included the short description of this incident, nothing else in the paper would have been different. As John Lauber said to us in his email: "I think it is unfortunate that this mis-treatment of the Habsheim accident mars an otherwise excellent and insightful paper on a topic of current, rapidly-growing interest."