Why not just use a display screen for the robot's face?

I just finished Emotional Design, and something puzzled me: why do robot creators go to the trouble of building complex motors to simulate a face, when they could just attach an LCD monitor to a swivel and render an animated face onscreen, which would be far easier?

Using a monitor would open up all kinds of additional uses: DVD playback, gaming, input entry (if it is a touch screen), TV, wireless Internet access, and so on. Plus, it would be visible in the dark, which is always nice :)

I'm sure researchers have thought of this, so I was wondering: why pursue the mechanical route, which seems so much more difficult?

You have an advanced case of featuritis: "Hey, if we made this robot's face a display screen, we could also show the day's news and the time of day, play games, and sing songs! Isn't that cool?"

First, let's ask why a robot even needs a face. I believe that robots should only have faces if they truly need them. Entertainment robots will have faces, maybe, but others? Why bother? Now, I have argued that robots will need robot-emotions for the same reason we have human-emotions -- emotions are essential to help us judge events and objects in the world, to prioritize our actions, and to make sensible decisions. Moreover, emotions trigger the body's musculature, so we can see when someone is struggling, tense, or relaxed. So emotions play an important communicative role.

What this means is that if I am to interact with a robot, I need some way of understanding how well it is doing. Did it understand me, or is it confused? Is it doing the assigned tasks easily, without difficulty, or are there problems? Is the robot in good condition, or is it perhaps low on energy (battery power), or perhaps overdue for servicing, or perhaps carrying more than its designated weight limit? Here is where emotional reactions by the robot could be useful -- if they are natural and appropriate, and if they are in a form I can interpret.

For emotional communication to be effective, it must be natural and functional. I don't want some robot designer saying, "hey, let's make the robot smile and frown." Instead, I want to be able to tell when it is straining to do a task, when it is confused, when it is thinking. I want to know when it is afraid that it might fall down the stairs, or run out of power, or when it fears that the item it is carrying might be too heavy, so it might damage its arms or motors.

These will be natural side effects of the robot's functioning. We won't build fake emotions -- we will let the real ones be visible and audible. Notice how useful it is to hear the vacuum cleaner motor change pitch when the suction intake is blocked. Nobody programmed the vacuum cleaner to do that; it is a side effect, but a most useful one. That's what I am talking about.

The goal is natural products that behave in natural, informative ways. Putting a face on a display screen is unnatural. It won't be the same thing. It is forced and artificial. The designer will probably think there are six classes of emotional displays (sad, happy, ...). Bad mistake. There are an infinity of them -- just as there are an infinity of sounds your automobile makes when it is working smoothly or not, on a bumpy road, in the rain, ... Let function dictate display.

This is why the robot, if it is to have a face, should have a real one -- mechanical. The eyes should be real -- where the TV cameras are located (so it might have one, two, or even three eyes). The ears are where the microphones are located, and the mouth where the sounds come out. Sure, arrange these in a human-like configuration. Any facial expressions should result from changes in the activation of underlying motors, levers, and tendons. Cupped ears and raised eyes (irises open wide) indicate attention. Tenseness of the tendons and motors indicates preparation for action -- which in a person signals alertness with some level of anxiety. Eyes darting here and there indicate problem solving, and eyes up (or eyelids closed) might indicate thinking (shutting off visual input to eliminate distraction). And so on: natural responses, with natural interpretations.

If a robot is to use emotions to communicate its underlying state, it should do so naturally, as a byproduct of its operation. Let its motors make noise when strained. Let its eyes search around when confused. Let its body be tense (motors all ready to go) when anxious, and let it be relaxed (motors turned off, tendons relaxed) when no problems exist. It doesn't even need a face, but if it is to have one, let it be natural, not some artificial drawing displayed on a screen with some designer's interpretation of what it might mean to be happy or anxious, confident or perplexed. Such emotions would be fake and, as a result, fail to communicate the true state of the robot and, perhaps worse, communicate the wrong state.
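To make the contrast concrete, here is a minimal sketch of the two approaches. It is only illustrative: the language is Python, and every name in it (RobotState, motor_current_amps, task_confidence, and the rest) is invented for this example, not taken from any real robot platform.

    from dataclasses import dataclass

    # Hypothetical snapshot of the robot's actual operating state.
    # Field names are illustrative, not from any real robot API.
    @dataclass
    class RobotState:
        motor_current_amps: float  # high current means the motors are straining
        battery_fraction: float    # 0.0 (empty) to 1.0 (full)
        task_confidence: float     # 0.0 (lost) to 1.0 (certain)

    # The "designed emotion" approach: collapse everything into a few canned
    # faces. Most of the information is discarded at the moment of classification.
    def canned_face(state: RobotState) -> str:
        if state.battery_fraction < 0.2:
            return "sad"
        if state.task_confidence < 0.5:
            return "confused"
        return "happy"

    # The "byproduct" approach: expose the underlying quantities directly, the
    # way a strained motor simply sounds strained. Each signal varies
    # continuously with the state that produces it, so there are as many
    # distinct displays as there are distinct states.
    def natural_display(state: RobotState) -> dict:
        return {
            "motor_pitch_hz": 200 + 400 * state.motor_current_amps,  # strain is audible
            "body_tension": state.motor_current_amps,                # readiness is visible
            "eye_scan_rate": 1.0 - state.task_confidence,            # confusion shows as searching
            "sluggishness": 1.0 - state.battery_fraction,            # low power slows everything
        }

    if __name__ == "__main__":
        s = RobotState(motor_current_amps=0.9, battery_fraction=0.6, task_confidence=0.3)
        print(canned_face(s))      # "confused" -- one label, detail lost
        print(natural_display(s))  # graded signals an observer can learn to read

The point is not the particular numbers, which are arbitrary, but that the second function never decides what emotion to show; whatever an observer reads off it is a direct consequence of what the robot is actually doing.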

[For Extra Credit: Many of us have encountered changes in the pitch of a motor. The power drill changes its pitch when the drill encounters resistance, and the pitch of the vacuum cleaner motor changes when the air intake is blocked. We all can recognize these sounds, but how conscious are we of their nature? So, answer this: does the pitch of the vacuum cleaner motor go down or up when the airway is blocked? Explain why, in 25 words or less (it takes me 23).]