Being Analog (Part 3 of 3)
HUMANS & COMPUTERS AS COOPERATING SYSTEMS
Because humans and computers are such different kinds of systems, it should be possible to develop a symbiotic, complementary strategy for cooperative interaction. Alas, today's approaches are wrong. One major theme is to make computers more like humans. This is the original dream behind classical Artificial Intelligence: to simulate human intelligence. Another theme is to make people more like computers. This is how technology is designed today: the designers determine the needs of the technology and then ask people to conform to those needs. The result is an ever-increasing difficulty in learning the technology, and an ever-increasing error rate. It is no wonder that society exhibits an ever-increasing frustration with technology.
Consider the following attributes of humans and machines presented from today's machine-centered point of view:
The Machine-Centered View
People / Machines
- Vague / Precise
- Disorganized / Orderly
- Distractible / Undistractible
- Emotional / Unemotional
- Illogical / Logical
Note how the humans lose: all the attributes associated with people are negative; all those associated with machines are positive. But now consider attributes of humans and machines presented from a human-centered point of view:
The Human-Centered View
People / Machines
- Creative / Unoriginal
- Compliant / Rigid
- Attentive to change / Insensitive to change
- Resourceful / Unimaginative
Now note how the machines lose: all the attributes associated with people are positive; all those associated with machines are negative.
The basic point is that the two different viewpoints are complementary. People excel at qualitative considerations, machines at quantitative ones. As a result, for people, decisions are flexible because they follow qualitative as well as quantitative assessment, modified by special circumstances and context. For the machine, decisions are consistent, based upon quantitative evaluation of numerically specified, context-free variables. Which is to be preferred? Neither: we need both.
It's good that computers don't work like the brain. The reason I like my electronic calculator is that it is accurate: it doesn't make errors. If it worked like my brain, it wouldn't always get the right answer. The very difference is what makes the device so valuable. I think about the problems and the method of attack. It does the dull, dreary details of arithmetic, or in more advanced machines, of algebraic manipulation and integration. Together, we are a more powerful team than either of us alone.
The same principle applies to all our machines: the difference is what is so powerful, for together, we complement one another. However, this works only if the machine adapts itself to human requirements. Alas, most of today's machines, especially the computer, force people to use them on their terms, terms that are antithetical to the way people work and think. The result is frustration, an increase in the rate of error (usually blamed on the user as "human error" rather than on faulty design), and a general turning away from technology.
Will the interactions between people and machines be done correctly in 50 years? Might schools of computer science start teaching the human-centered approach that is necessary to reverse the trend? I don't see why not.
CHAPTER 7: NOTES
Being analog. Omar Wason suggested this title at the conference "Jerry's Retreat," Aug. 19, 1996. The instant I heard it I knew I wanted to use it, so I sought his permission which he graciously gave.
Sections of this chapter originally appeared in Norman, D. A. (1997). Why it's good that computers don't work like the brain. In P. J. Denning & R. M. Metcalfe (Eds.), Beyond calculation: The next fifty years of computing. New York: Copernicus/Springer-Verlag. Many of the ideas made their original appearance in Norman, D. A. (1993). Things that make us smart. Reading, MA: Addison-Wesley.
I apologize to readers of my earlier books and papers for this repetition, but what can I say: the argument fits perfectly here, so in it goes.
It requires a biblical name to fool you. Erickson, T. A. & Mattson, M. E. (1981). From words to meaning: A semantic illusion. Journal of Verbal Learning and Verbal Behavior, 20, 540-552. This is the paper that started the quest to understand why people have trouble discovering the problem with the question, "How many animals of each kind did Moses take on the Ark?"
Reder and Kusbit followed up on this work and present numerous other examples of sentences that show the effect. Reder, L. M. & Kusbit, G. W. (1991). Locus of the Moses illusion: Imperfect encoding, retrieval, or match? Journal of Memory and Language, 30, 385-406.
Humans versus computers. This section was originally printed as Norman, D. A. (1997). Why it's good that computers don't work like the brain. In P. J. Denning & R. M. Metcalfe (Eds.), Beyond calculation: The next fifty years of computing. New York: Copernicus/Springer-Verlag.
The one best way. Kanigel, R. (1997). The one best way: Frederick Winslow Taylor and the enigma of efficiency. New York: Viking.
The question is at what price. For an excellent, in-depth analysis of the price paid in the name of efficiency, see Rifkin, J. (1995). The end of work: The decline of the global labor force and the dawn of the post-market era. New York: G. P. Putnam's Sons.
His book, The principles of scientific management. Taylor, F. W. (1911). The principles of scientific management. New York: Harper & Brothers. (See note 7.)
Taylor stated that it was necessary to reduce all work to the routine. Taylor's work is described well in three books. First, there is Taylor's major work: Taylor, F. W. (1911). The principles of scientific management. New York: Harper & Brothers.
Second, there is the masterful and critical biography of Taylor, one that illustrates the paradox between what Taylor professed and how he himself lived and acted: Kanigel, R. (1997). The one best way: Frederick Winslow Taylor and the enigma of efficiency. New York: Viking.
Finally, there is Rabinbach's masterful treatment of the impact of changing views of human behavior, the rise of the scientific method (even when it wasn't very scientific), and the impact of Taylor not only on modern work, but on political ideologies as well, especially Marxism and Fascism: Rabinbach, A. (1990). The human motor: Energy, fatigue, and the origins of modernity. New York: Basic Books.
Also see "Taylorismus + Fordismus = Amerikanismus," Chapter 6 of Hughes, T. P. (1989). American genesis: A century of invention and technological enthusiasm, 1870&emdash;1970. New York: Viking Penguin.
A repair crew disconnects a pump from service in a nuclear power plant. This is an oversimplified account of some of the many factors in the Three Mile Island nuclear power accident. See Kemeny, J. G., et al. (1979). Report of the President's Commission on the Accident at Three Mile Island. New York: Pergamon. See also Rubenstein, E. (1979). The accident that shouldn't have happened. IEEE Spectrum, 16 (11, November), 33-42.
A hospital x-ray technician enters a dosage for an x-ray machine, then realizes it is wrong. See Appendix A, Medical devices: The Therac-25 story (x-ray overdosage), in Leveson, N. G. (1995). Safeware: System safety and computers. Reading, MA: Addison-Wesley. The book also includes informative appendices on the Bhopal chemical disaster, the Apollo 13 incident, the DC-10, the NASA Challenger, and the nuclear power industry's problems, including Three Mile Island and Chernobyl.
There are better ways of developing software. See Leveson, N. G. (1995). Safeware: System safety and computers. Reading, MA: Addison-Wesley. For a scary discussion of the failures of system design, see Neumann, P. (1995). Computer-related risks. Reading, MA: Addison-Wesley.
If the Navy were to follow formal procedures and a strict hierarchy of rank, the result would very likely be an increase in the accident rate. See Hutchins's analysis of crew training and error management in ship navigation: Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press. See also La Porte, T. R. & Consolini, P. M. (1991). Working in practice but not in theory: Theoretical challenges of high-reliability organizations. Journal of Public Administration Research and Theory, 19-47.
These issues are well treated by Robert Pool in both his book and an excerpt in the journal Technology Review:
Pool, R. (1997). Beyond engineering: How society shapes technology. New York: Oxford University Press. Pool, R. (1997). When failure is not an option. Technology Review, 100 (5), 38-45. (Also see http://web.mit.edu/techreview/).
Attributes of humans and machines taken from today's machine-centered point of view and a human-centered point of view. From Norman, D. A. (1993). Things that make us smart. Reading, MA: Addison-Wesley.