Paper presented at 2013 Society for Information Display conference, Vancouver, Canada (May 23, 2013) under the title "The Next Touch Evolution. Advancing the Consumer Experience in Other Realms: Tasks and Tough Environments"
**Opportunities and Challenges for Touch and Gesture-Based Systems**
Don Norman* & Bahar Wadia**
* Nielsen Norman Group, Palo Alto, California, and UICO
** UICO, Elmhurst, Illinois
The Technology Is the Easy Part
Yes, getting the technology to work is hard, but the really hard part is getting the human-system interaction right, making it easy for people to use the systems.
Here are the issues. Touch and sensing technology is becoming more and more popular, whether it is on mobile telephones and tablets, navigation systems, or even cooking appliances.
These technologies offer great opportunities, and great opportunities also pose great challenges. Some are technical, but increasingly they are interaction and design challenges: how to ensure that the capabilities of the technology are well matched to the needs and capabilities of the people who use them.
As companies like UICO make projected capacitance sensing robust under harsh environmental conditions (cold, heat, rain, snow, through gloves, under heavy vibration, etc.), the range of application domains will expand. These domains place extreme pressure on interaction design.
The Opportunities
As the technology of sensing advances, numerous opportunities for interaction emerge (Lee and Norman, 2010). Today, we talk of the "interface" between people and products with the assumption that it is a physical presence, a panel or other visible structure with which people interact. In fact, calling something a "touch" or "multi-touch" interface implies a physical structure that is intended to be touched. However, as we move forward, the options expand beyond mere physical touch. We might allow interaction at any location on a device, or even without touching. The new design considerations must apply to interface inputs both touch-based and touchless, with outputs that can involve any medium or sensory modality.
Touch Interfaces
- Touch
- Gesture
- Physical Control
- Wearable
Touchless Interaction
- Proximity / Location
- Sound
- Speech
- Person Identification (from voice, facial recognition, body size, gait, movement style, ...)
- Eye gaze
- Physiological measurements
- Specialized devices (e.g., RFID)
- Novel sensors
Remotely Operated
- App on Smart Display Device (mobile phone, tablet, ...)
- Special Purpose Controller
- Gestures Detected by Remote Camera (With Powerful Lens)
Feedback
- Text
- Images
- Moving images (animations or videos)
- Sound
- Speech
- Feel (Haptic). With remote operation, wearable devices, and touchless interaction, haptic feedback might be essential.
Group Interaction
Group interaction raises all of the above issues (touch, touchless, and feedback), coupled with the determination of who is interacting. Consider the difficulty of distinguishing a four-finger touch gesture by one person from a two-finger touch by one person with the other fingers coming from one or two other people, as the sketch below illustrates.
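To make the attribution problem concrete, here is a minimal sketch (ours, not drawn from any shipping product) that clusters simultaneous contacts by spatial proximity, on the assumed heuristic that the fingers of one hand fall within roughly a hand's span. The threshold and the greedy grouping are illustrative only; a real system would also weigh timing, orientation, and contact shape.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Touch:
    x: float  # contact position, in centimetres
    y: float

# Hypothetical heuristic: fingers of one hand stay within roughly a hand's span.
HAND_SPAN_CM = 15.0

def group_touches(touches: list[Touch]) -> list[list[Touch]]:
    """Greedily cluster contacts close enough to plausibly belong to one hand."""
    groups: list[list[Touch]] = []
    for t in touches:
        for group in groups:
            if any(hypot(t.x - g.x, t.y - g.y) <= HAND_SPAN_CM for g in group):
                group.append(t)
                break
        else:
            groups.append([t])
    return groups

# Four simultaneous contacts: three clustered together, one far across the surface.
contacts = [Touch(10, 10), Touch(12, 11), Touch(14, 9), Touch(80, 40)]
print(len(group_touches(contacts)))  # 2: more likely two people than one 4-finger gesture
```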
Actually, if we were just talking about touch, there would be little problem: signify the places where touch is possible and then register where the touch took place. A more difficult problem comes with false (accidental) touches and erroneous selections. What then? If a touch is inappropriately made and the screen changes dramatically, how does the person know (the role of feedback), and then, how does the person recover?
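A first line of defense, sketched here with invented thresholds (real values must be tuned per device and use case), is to reject contacts that do not look deliberate, such as too-brief brushes or palm-sized patches. The harder problem of recovery still requires clear feedback and an undo path.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    duration_ms: float  # how long the contact persisted
    area_mm2: float     # contact-patch size reported by the sensor

# Illustrative thresholds only; real products tune these per device and sensor.
MIN_DURATION_MS = 30.0  # brushes shorter than this are likely accidental
MAX_AREA_MM2 = 250.0    # patches larger than this are likely a resting palm

def is_intentional(c: Contact) -> bool:
    """Crude accidental-touch filter: ignore brief brushes and palm-sized patches."""
    return c.duration_ms >= MIN_DURATION_MS and c.area_mm2 <= MAX_AREA_MM2

print(is_intentional(Contact(duration_ms=120, area_mm2=60)))   # True: deliberate tap
print(is_intentional(Contact(duration_ms=8, area_mm2=40)))     # False: grip brush
print(is_intentional(Contact(duration_ms=200, area_mm2=900)))  # False: resting palm
```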
With touchless interfaces, multiple design issues arise: where is the region being sensed? What is the range of possible inputs? How does the product communicate the possibilities to users?
Remote Actuators, Sensors, and Displays
In many instances, the controls, input sensors, and feedback will no longer be (only) on the product itself. Remote actuators and sensors allow people to interact at a distance, sometimes through special-purpose devices, sometimes through their own personal smart display devices. Although in many ways remote devices do not introduce new principles of interaction, they do complicate the design issues. When operating something remotely, the design of feedback is more critical than ever, so that the person is continually aware of the state of the system, of the receipt of their inputs, and then of the system's response and the outcome.
More exciting possibilities: more demanding design challenges.
The Interaction Design Challenge
The main questions that people have in using a product are:
1. What can I do?
2. Where and how do I do it?
3. What happened?
4. How do I get back (undo)?
5. How does the product work?
Points 1, 2, and 4 require clear and consistent visible clues, called affordances and signifiers in the human-computer interaction world. Point 3 is solved through feedback. And point 5 requires a clear, coherent conceptual model of the operation of the product, which must be provided through clues within the design itself (Norman, 2013).
But there is more: People already have considerable difficulty in using multi-touch displays, sometimes by accidentally touching the screen while holding the product (a special problem with touch-operated e-readers), sometimes by missing the designated area for operation, and of course by failures to remember the required operation or by gesturing ambiguously or imprecisely. What happens in harsh, rugged conditions?
We already have difficulties with target size because the same-sized target that works well on a traditional display screen with a mouse is no longer appropriate for use with a stylus or a finger. What about when gloves are in use? Speaking of "fat fingers," gloves are really fat. What about in the rain, where visibility might be obscured? And what happens when the device is being used in an environment subject to extreme vibration? How do we specify targets and identify inputs when everything is moving about?
Now imagine that all of the above individual complexities are combined: what happens when we wish to use a rich alphabet of complex gestures in the rain and cold, where the worker is wearing gloves and everything is vibrating?
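One way to reason about these combined stresses, sketched below with invented tolerance values, is to widen the acceptance radius around each target as conditions degrade (bare finger, glove, glove plus vibration) and to reject touches that fall ambiguously between targets rather than guessing.

```python
from dataclasses import dataclass
from math import hypot
from typing import Optional

@dataclass
class Target:
    name: str
    x: float  # target centre, in millimetres
    y: float

# Invented tolerances for illustration: degraded conditions demand more slack.
TOLERANCE_MM = {"finger": 5.0, "glove": 12.0, "glove_vibration": 20.0}

def resolve_touch(x: float, y: float, targets: list[Target],
                  condition: str = "finger") -> Optional[Target]:
    """Snap a touch to the nearest target, but only within the condition's tolerance."""
    nearest = min(targets, key=lambda t: hypot(x - t.x, y - t.y))
    if hypot(x - nearest.x, y - nearest.y) <= TOLERANCE_MM[condition]:
        return nearest
    return None  # ambiguous: better to ask again than to guess wrong

buttons = [Target("start", 20, 20), Target("stop", 60, 20)]
print(resolve_touch(26, 24, buttons, "glove"))   # hits "start": 7.2 mm is within 12 mm
print(resolve_touch(26, 24, buttons, "finger"))  # None: 7.2 mm exceeds the 5 mm tolerance
```

Rejecting ambiguous touches trades a little annoyance for safety, a trade that matters most in the industrial settings discussed below.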
Design Tools
In the design of interactive systems, there are numerous design principles. Four are of particular significance for touch and gesture-based systems: affordances, signifiers, feedback, and conceptual models. Let us now examine each in turn.
Affordances and Signifiers
Affordances are physical structures that enable interactions. Thus glass affords visibility but not passage. Small devices afford many opportunities, including lifting, throwing, and using them to poke and probe, as well as to do the assigned function. Affordances are a relationship between the actor (person or machine, sometimes an animal) and the device. Affordances are more important for physical controls (such as a handle or knob) than for touch devices: with a touch device, the main affordance is that of touchability. For most products of relevance to this article, therefore, although designers love to speak of them, affordances are of little or no interest.
To overcome the limitations of affordances, Norman (2010, 2013) introduced the concept of a "signifier," a perceivable (usually visible) signal of the location and form of the possible input interaction. Signifiers are of critical importance for these new interfaces because without them, the person will seldom be able to operate the device properly or, even if some operations can be done, will not be able to take full advantage of the rich possibilities offered by more advanced, but invisible, modes of interaction.
Today, signifiers are badly done on multi-touch devices. With touchless devices, the signifiers, if shown at all, tend to be diagrams or short animated sequences visible on some associated display. This is an area in deep need of standards.
Feedback
How does a person know that the input has been detected and understood by the system? The answer is feedback. To be effective, feedback must be immediate (less than 100 msec, ideally less than 50), informative, and intelligible. The traditional beeps that signify that a touch has been received are inadequate to signify that a more complex multi-touch or gestural input has been received and understood appropriately. In design, too little attention is paid to the quality and nature of feedback with complex interaction modes. Feedback might also incorporate information about how to reverse the operation if the person believes that the input was incorrectly interpreted or erroneous. Proper, rich feedback is essential to make it possible for people to detect and correct errors, regardless of whether the error is with the system or the person.
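As a minimal sketch of this timing discipline (plain Python rather than any particular UI toolkit), the handler below acknowledges receipt of an input immediately, before any slow interpretation begins, so the acknowledgment always lands within the budget even when recognition takes much longer.

```python
import time

FEEDBACK_BUDGET_MS = 100  # from the text: under 100 msec, ideally under 50

def acknowledge(gesture: str) -> None:
    """Immediate, cheap acknowledgment: a flash, highlight, click, or haptic pulse."""
    print(f"[ack] '{gesture}' received")  # stand-in for a real UI cue

def interpret(gesture: str) -> str:
    """Slow interpretation: may take far longer than the feedback budget."""
    time.sleep(0.5)  # simulated recognition delay
    return f"[result] '{gesture}' understood; showing how to undo it"

def handle_input(gesture: str) -> None:
    start = time.monotonic()
    acknowledge(gesture)  # acknowledge first; never wait on interpretation
    elapsed_ms = (time.monotonic() - start) * 1000
    assert elapsed_ms < FEEDBACK_BUDGET_MS, "acknowledgment blew the latency budget"
    print(interpret(gesture))  # richer feedback follows when the result is known

handle_input("two-finger swipe")
```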
Conceptual Models
People understand how things operate by forming mental models of their principles of operation. The models may be sketchy, approximate, and even erroneous, but they guide the person's actions. It is up to the designer to provide the information needed for the formation of appropriate models.
Conceptual models are not needed during normal operation. But they are essential in two situations:
- For learning
- When things go wrong
People can often use devices quite skillfully without any underlying understanding of how they work. But when they come upon a novel situation, for either of the two reasons above (they never learned this aspect, or something has gone wrong), then unless the device itself provides elaborate assistance to instruct the person about the alternative actions, the only way they can figure out what to do is through a conceptual model of how it operates. Note that appropriate feedback is one of the essential requirements for the development of an accurate conceptual model.
The Need for Standards
Gestures can be complex, drawing on a large vocabulary involving multiple fingers, taps, and movement in all sorts of ways: up, down, left, right, circular, tapping, with one, two, three, or four fingers. Add full-body motion, speech, eye gaze, posture, etc. to the range of things being sensed, and it becomes a daunting challenge for people to learn the required actions, remember them at the appropriate times, and execute them in a manner that the system can interpret properly. To compound the difficulties, competing companies have developed their own languages of gestures and interaction, sometimes for the purpose of product differentiation, sometimes in order to navigate the complex thicket of copyrights and patents. The result is a complex web of competing interaction paradigms produced by competing manufacturers, causing great confusion among people who have to use products from different vendors. Although the primary offenders are the big three, Apple (iOS), Google (Android), and Microsoft (Windows 8 and Windows Phone 8), others are equally guilty.
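A caricature of the problem, with entirely invented mappings (no vendor's actual gesture vocabulary is represented): the same physical gesture resolves to different actions in different vendors' vocabularies, so motor habits learned on one system misfire on another.

```python
# Invented mappings for illustration; no vendor's actual vocabulary is shown.
GESTURE_VOCABULARIES = {
    "vendor_a": {"three_finger_swipe_up": "show_app_switcher",
                 "two_finger_rotate": "rotate_object"},
    "vendor_b": {"three_finger_swipe_up": "go_home",
                 "two_finger_rotate": "adjust_volume"},
}

def action_for(vendor: str, gesture: str) -> str:
    """Look up what a gesture does in one vendor's vocabulary."""
    return GESTURE_VOCABULARIES[vendor].get(gesture, "unrecognized")

for vendor in GESTURE_VOCABULARIES:
    print(vendor, "->", action_for(vendor, "three_finger_swipe_up"))
# The same motor action produces a different outcome on each system.
```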
As touch technology spreads to still more vendors, what will the result be?
The field is in grave need of standards. This is especially true as we move toward industrial applications, where the resulting confusion can lead to expensive errors and possible injury.
Conclusion
The world of sensors and displays is offering new opportunities for human-system interaction by both individuals and groups. New opportunities: New challenges.
References
Lee, W. and Norman, D. A. (2010). "Modern interface design: Intimate interaction." Innovation, pp. 46-50.
Norman, D. A. (2010). Living with Complexity. Cambridge, MA: MIT Press.
Norman, D. A. (2013). The Design of Everyday Things, Revised and Expanded Edition. New York: Basic Books; London: MIT Press (British Isles only).