The Human Side of Automation
Keynote talk presented at the Automated Vehicles Symposium, San Francisco Airport, 2014
I am a fan of the completely automated automobile. When it arrives, it will bring the benefits that are widely predicted: dramatic reductions in accidents, injuries, and deaths; more comfortable, more productive, and more enjoyable travel; increased efficiency; and so on. The problems arise during the transition period, when many components of the driving experience are automated but some are not, so the driver is expected to maintain surveillance, ready to take over if unexpected difficulties arise. Over fifty years of studies show that even highly trained people are unable to monitor a situation for long periods and then rapidly take effective control when needed. This alone is a major difficulty to be faced during the transition from partial to fully automated driving, made considerably worse because the ordinary driver is not well trained for such events and because the reaction times required are measured in seconds, not the minutes available in most industrial and aviation situations. As we move to fully automated vehicles, it is the transition period that is the most difficult. My emphasis in this paper is on what we must do between now and then.
Let me start by summarizing my conclusions. The technological requirements for self-driving cars are extremely complex, and although we can now handle a very high percentage of driving situations, the last few percent contain the most difficult, the most daunting challenges. What seems to have been ignored in this technological push for full automation is the human side during the transition period. The fields of human factors and ergonomics, of human-machine interaction, and of cognitive engineering (three different names for very similar and overlapping communities) have long studied how people and machines can work together. Many lessons have been learned, usually the hard way, in domains that already have high automation, most notably commercial aviation. Instead of applying all of this knowledge, we seem to be repeating the errors of the past.
California and other states now allow automated cars on the highway, but they require human oversight. That is, they require a trained driver always to be available, sitting in front of the controls, ready to take over if trouble arises. Sound sensible? It isn't.
Many years ago I wrote an article called "The 'problem' of automation" in which I argued that automation was most dangerous when it was mostly there (Norman, 1990). Why? Because the human supervisors would learn complacency. They would expect the automation to do its job, and for literally hundreds or even thousands of hours of usage, they would be correct. However, when automation failed, it would come as a surprise, giving little warning to the unsuspecting observer, who would then have to struggle to get back in the loop, to diagnose the problem, and to decide what action to take. My article wasn't the first. Almost a decade earlier, Lisanne Bainbridge (1983) warned of many related issues. Today, roughly 30 years later, the messages of these - and numerous other papers from the literature on safety and human factors - still apply.
In aviation, the well-known motto is that automation takes over the easy parts, but when things get difficult, when it is needed most, it gives up. In aviation, however, when problems arise, the airplane is usually high in the sky, perhaps 6 miles (10 km) up, so it can be several minutes before it would crash. Moreover, the pilots are extremely well trained in high-quality simulators, in many cases retraining every six months. They usually figure out the problem and take corrective action, although in some cases not quickly enough to avoid all damage to the aircraft or injury to passengers.
In the automobile, when problems arise, the driver has one to two seconds to react, and drivers are not well trained in how to react to unexpected emergencies. Pilots are continually trained and tested against all known possible incidents. Drivers are seldom trained in high-quality simulators and, moreover, once they have passed their license tests, are seldom trained again. At 60 mph (roughly 100 km/h), a car travels approximately 90 feet per second (approximately 30 meters per second).
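To make the arithmetic explicit (a back-of-the-envelope check, not a figure from the talk):

    60 mi/h × 5,280 ft/mi ÷ 3,600 s/h = 88 ft/s, and 100 km/h ÷ 3.6 ≈ 28 m/s.

A one-to-two-second delay therefore means roughly 90 to 180 feet (about 27 to 55 meters) of travel before the driver has even begun to respond.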
The safer we make things, the more dangerous they become
In aviation, it is called "being out of the loop." In the automobile, we call it daydreaming or distraction. Whatever the name, numerous psychological studies, starting in World War II and continuing today, demonstrate that when people are asked to supervise for long periods with no actions required, they can't. Attention wanders. People daydream. In psychology and human factors (ergonomics), the relevant literature is on "vigilance": people are not well suited to long periods of watching in which nothing happens.
As automation gets better and better, the problem of vigilance gets worse, for the more reliable the system, the less there is for a person to do, and the more the mind wanders.
Do not take people out of the loop: have them always know what is happening. How do we do this in a meaningful way? By asking people to make high-level decisions, by keeping them continually engaged in making decisions.
People are good at pattern recognition and at stating high-level goals and plans. Here is what we are bad at: monitoring for long periods, being precise and accurate, and responding quickly and properly when an unexpected event occurs while we have not been attending. So have us do what we are good at. Have the automation do what we are bad at. Aim for collaboration, not supervision.
Collaboration, not Supervision
The US National Highway Traffic Safety Administration (NHTSA) has issued recommendations about automated vehicles, following the long-standing tradition of defining levels of automation. NHTSA defines five levels, 0 through 4.
No-Automation (Level 0): The driver is in complete and sole control of the primary vehicle controls - brake, steering, throttle, and motive power - at all times.
Function-specific Automation (Level 1): Automation at this level involves one or more specific control functions. Examples include electronic stability control or pre-charged brakes, where the vehicle automatically assists with braking to enable the driver to regain control of the vehicle or stop faster than possible by acting alone.
Combined Function Automation (Level 2): This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.
Limited Self-Driving Automation (Level 3): Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The Google car is an example of limited self-driving automation.
Full Self-Driving Automation (Level 4): The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles. (NHTSA, 2013).
Level 0 is no automation; Level 4 is complete automation. Neither poses particular issues for the purposes of this paper. Level 0 is what we have today. Level 4 is the end state, in which there might not even be any controls for people to operate: everyone is simply a passenger. The difficulties arise with partial automation, Levels 1, 2, and 3.
There are considerable difficulties with these definitions, the most serious being that they follow the time-honored approach of engineering: automate whatever can be automated, leaving the rest to people. In other words, the machine is the first-class citizen and people are the second-class participants, asked to pick up the remnants when the first class fails. This puts the onus of final responsibility upon the human, who is therefore more and more forced to behave according to the requirements and dictates of the technology, often with little warning. As automation gets better and better, particularly at Levels 2 and 3, the human is less and less in the loop. When difficulties arise, it is unlikely that a person, no matter how well trained, can respond efficiently and appropriately in the one or two seconds available.
But why do we design so that humans are the second-class citizens? Shouldn't it be the machines that are second class? Shouldn't we design by considering the powers and abilities of humans, asking the machine to pick up the remnants? This would be true human-machine collaboration.
Note that there is a wonderful opportunity for collaboration. People are especially good at pattern recognition, at dealing with the unexpected, and at setting high-level goals. People are especially bad at repetitive operations, at producing highly accurate, precise actions over and over again, and at vigilance: long periods of monitoring with nothing to do until or unless some unexpected, critical event occurs.
Machines are superb at all the tasks people are bad at. So why not devise a collaboration whereby each does what it is best at? The real advantage is that the person is always involved, but at a level appropriate to human abilities. When something goes wrong, the person is in the loop, cognizant of the current state, and ready to act.
Many workers in human-machine interaction have complained about the dominance of the levels of autonomy approach. In 2012, the United States Defense Science Board argued that this was an inappropriate way of proceeding:
"The Task Force reviewed many of the DoD-funded studies on "levels of autonomy" and concluded that they are not particularly helpful to the autonomy design process. These studies attempt to aid the development process by defining taxonomies and grouping functions needed for generalized scenarios. They are counter-productive because they focus too much attention on the computer rather than on the collaboration between the computer and its operator/supervisor to achieve the desired capabilities and effects. Further, these taxonomies imply that there are discrete levels of intelligence for autonomous systems, and that classes of vehicle systems can be designed to operate at a specific level for the entire mission." (Department of Defense: Defense Science Board, 2012)
There is a better way to automate things. To do this, we need to change the way we think of the joint operation of people and machines, from supervision to collaboration. The team of Johnson, Bradshaw, Hoffman, Feltovich, and Woods (2014) from the Florida Institute for Human and Machine Cognition developed a successful human-robot collaborative system for the DARPA Robotics Challenge, in which humanoid robots perform various disaster-rescue tasks. Their system demonstrated the virtues of focusing upon human-machine teamwork rather than upon automation. This approach requires us to rethink the interaction paradigm to make optimal use of everyone and everything. The goal is to optimize total performance rather than the performance of the automation.
Summary and Conclusions
When Lisanne Bainbridge wrote her 1983 paper about the "ironies of automation," her first irony was that the more we automated, the more skilled and practiced the human operators had to be.
Driving is a very misleading activity. When we first learn to drive, for many of us the activity seems overwhelmingly complex. But after sufficient experience (often measured in months), most of the activities are "automated," which means done subconsciously, without thought or mental effort. Eventually the task of driving is easy enough that people drive at high speeds down congested highways while talking, eating, daydreaming, and even picking up items dropped on the floor. Driving is easy, except when it isn't, and that is when accidents occur. For any individual driver, the chance of an accident is low. For a nation, the number of accidents, injuries, and deaths is astoundingly high. Full automation is indeed the cure.
The dangers lie in partial automation. If drivers daydream and do other tasks while driving now, imagine what will happen when the car has taken over many components of the task. When the car can take over the driving for minutes or even hours, over thirty years of research tells us that people will not be attending. Moreover, because they will not have used their driving skills as frequently as purely manual driving requires, those skills will have deteriorated. Although the laws may mandate that people take over when the automation fails, in fact they will be unable to. They will be out of the loop. The automation will indeed reduce the number of accidents and injuries, but when an accident does occur, it is apt to be a big one, with numerous vehicles and a relatively high injury rate (compared with today's crashes), and there may well be a public and political outcry against the increasing automation.
The solution requires a different approach to the design of automation: collaboration. Instead of automating what can be automated and leaving the rest to the driver, we must develop collaborative systems so that the driver is continually involved in giving high-level guidance, thereby always staying active, always remaining in the loop. The automation must give feedback about the state of the vehicle and the state of the automation itself. As the automation compensates for events, for other drivers, for road conditions, and for its own state, it has to inform the driver, most critically telling the driver when it is nearing the limits of its ability. This information has to be presented in a manner that is natural and does not require continual attention. Too many signals are worse than not enough, for they annoy and distract. There are many ways to do this without overwhelming the driver. Examples include today's lane-departure cues delivered through haptic vibration of the steering wheel or the seat. The pulsing beeps that quicken as the car gets closer and closer to an obstacle in the path of travel (usually while parking) give distance information in a readily interpreted fashion (but beware of too many items beeping away at the driver). Do we need more research? Yes. But we also know a lot about how to proceed.
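As one concrete illustration of such readily interpreted feedback, here is a minimal sketch of the distance-to-beep mapping a parking aid might use. The function name, thresholds, and timing values are my own illustrative assumptions, not figures from any particular vehicle.

    # Hypothetical mapping from obstacle distance to the pause between beeps:
    # the closer the obstacle, the shorter the pause, so the driver can "hear"
    # the distance without looking at a display. All values are illustrative.
    def beep_interval_seconds(distance_m: float) -> float:
        FAR_M, NEAR_M = 2.0, 0.3      # assumed working range of the sensor
        SLOW_S, FAST_S = 1.0, 0.05    # assumed slowest and fastest beep intervals
        if distance_m >= FAR_M:
            return SLOW_S             # far away: slow, relaxed beeping
        if distance_m <= NEAR_M:
            return FAST_S             # very close: nearly continuous tone
        # Interpolate linearly between the two extremes.
        fraction = (distance_m - NEAR_M) / (FAR_M - NEAR_M)
        return FAST_S + fraction * (SLOW_S - FAST_S)

The point is not the particular numbers but the principle: a single, continuously varying signal that carries the needed information without demanding visual attention.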
Consider such things as changing the way we control automobiles: let the human give high-level guidance about where to go, where to turn, and how fast to travel, while the automation provides the precise control signals that determine how much (and when) to steer, brake, and accelerate. Many drivers will object to losing final control, so care has to be taken to keep people in overall command, but the principle is this: people are good at high-level supervision, so let them do it; machines are good at precise, accurate control, so let them do that. This way, until the machines can do everything, the person is always in the loop, always exerting high-level control. Instead of calling people into the situation only when it is too late, have them there all the time.
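To make this division of labor concrete, here is a minimal sketch of one way the split might be structured. The class and function names and the controller gains are hypothetical illustrations of the idea, not a description of any existing system.

    # The person supplies high-level intent; the automation turns it into the
    # precise, continuous control signals. All names and gains are made up.
    from dataclasses import dataclass

    @dataclass
    class DriverIntent:                  # what the human decides
        target_lane: int                 # which lane to travel in
        target_speed_mps: float          # desired cruising speed

    @dataclass
    class ControlCommand:                # what the automation executes
        steering_rad: float
        throttle: float                  # 0 (none) to 1 (full)
        brake: float                     # 0 (none) to 1 (full)

    def low_level_control(intent: DriverIntent,
                          current_speed_mps: float,
                          offset_from_lane_center_m: float) -> ControlCommand:
        """Toy proportional controller: track the driver's intended speed and
        the center of the driver's intended lane. Gains are illustrative."""
        speed_error = intent.target_speed_mps - current_speed_mps
        throttle = min(1.0, max(0.0, 0.1 * speed_error))
        brake = min(1.0, max(0.0, -0.1 * speed_error))
        steering_rad = -0.05 * offset_from_lane_center_m  # steer back to center
        return ControlCommand(steering_rad, throttle, brake)

In such a scheme the person's role is continuous (choosing the lane and the speed), so attention never lapses into pure monitoring, while the repetitive precision work is left to the machine.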
Full automation is good. But the path toward full automation must traverse the dangerous stages of partial automation, where the very successes of automation may make human drivers less able to respond to the unexpected, unavoidable imperfections of the automation. The appropriate design path to deal with this requires us to reconsider the driver and automation as collaborative partners.
REFERENCES
Some overview readings
Bainbridge (1983). A classic paper: Ironies of automation.
Casner, Hutchins and Norman (2014). The challenges of partially automated driving.
Flemisch et al. (2003): The H-Metaphor as a Guideline for Vehicle Automation and Interaction. (Also discussed in Norman, 2007.)
Johnson, Bradshaw, Hoffman, Feltovich, & Woods (2014). Seven Cardinal Virtues of Human-Machine Teamwork: Examples from the DARPA Robotic Challenge.
Norman (2007). The Design of Future Things.
Papers referenced in the article
Bainbridge, L. (1983). Ironies of automation. Automatica 19(6), 775-779.
Casner, S. M., Hutchins, E. L., and Norman, D. A. (2014: submitted for publication). The challenges of partially automated driving.
Department of Defense: Defense Science Board. (2012, July). Task Force Report: The Role of Autonomy in DOD Systems. Washington, DC: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. http://www.fas.org/irp/agency/dod/dsb/autonomy.pdf
Flemisch, F. O., Adams, C. A., Conway, C. S. R., Goodrich, K. H., Palmer, M. T., & Schutte, P. C. (2003). The H-Metaphor as a Guideline for Vehicle Automation and Interaction (NASA/TM--2003-212672). Hampton, Virginia: NASA Langley Research Center. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20040031835_2004015850.pdf
Johnson, M., Bradshaw, J. M., Hoffman, R. R., Feltovich, P. J., & Woods, D. D. (2014). Seven Cardinal Virtues of Human-Machine Teamwork: Examples from the DARPA Robotic Challenge. IEEE Intelligent Systems, 29(6), 74-80. http://www.jeffreymbradshaw.net/publications/56. Human-Robot Teamwork_IEEE IS-2014.pdf
Kessler, A. (2014, October 5). Technology takes the wheel. New York Times. http://www.nytimes.com/2014/10/06/business/technology-takes-the-wheel.html?_r=0
National Research Council. (2014). Complex Operational Decision Making in Networked Systems of Humans and Machines: A Multidisciplinary Approach: The National Academies Press. http://www.nap.edu/openbook.php?record_id=18844
NHTSA (National Highway Traffic Safety Administration) (2013). U.S. Department of Transportation Releases Policy on Automated Vehicle Development. Retrieved from http://www.nhtsa.gov/About+NHTSA/Press+Releases/U.S.+Department+of+Transportation+Releases+Policy+on+Automated+Vehicle+Development on September 21, 2014
Norman, D. A. (1990). The "problem" of automation: Inappropriate feedback and interaction, not "over-automation". In D. E. Broadbent, A. Baddeley & J. T. Reason (Eds.), Human factors in hazardous situations (pp. 585-593). Oxford: Oxford University Press.
Norman, D. A. (2007). The Design of Future Things. New York: Basic Books.
The ideas presented here are the result of several decades of study of aviation and automobile safety. In particular, I wish to thank Ed Hutchins of the University of California, San Diego, and Steve Casner of NASA Ames for their contributions to my thinking. Licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.
A slightly revised version of this article will be published by Springer-Verlag and will be available at www.springerlink.com. The citation will be:
Norman, D. A. (2015). The human side of automation. In G. Meyer & S. Beiker (Eds.), Road Vehicle Automation 2. Springer International Publishing.