Originally published in the Financial Times, April 6, 2017.
Tech companies are spending billions to build devices based on a scientific fallacy
Can a robot read your emotions? Apple, Google, Facebook and other technology companies seem to think so. They are collectively spending billions of dollars to build emotion-reading devices that can interact meaningfully (and profitably) with humans using artificial intelligence.
These companies are banking on a belief about emotions that has held sway for more than 100 years: smiles, scowls and other facial movements are worldwide expressions of certain emotions, built in from birth. But is that belief correct? Scientists have tested it across the world. They use photographs of posed faces (pouts, smiles), each accompanied by a list of emotion words (sad, surprised, happy and so on), and ask people to pick the word that best matches the face. Sometimes they tell people a story about an emotion and ask them to choose between posed faces.
Westerners choose the expected word about 85 per cent of the time. The rate is lower in eastern cultures, but overall it is enough to claim that widened eyes, wrinkled noses and other facial movements are universal expressions of emotion. The studies have been so well replicated that universal emotions seem to be bulletproof scientific fact, like the law of gravity, which would be good news for robots and their creators.
But if you tweak these emotion-matching experiments slightly, the evidence for universal expressions dissolves. Simply remove the lists of emotion words, and let subjects label each photo or sound with any emotion word they know. In these experiments, US subjects identify the expected emotion in photos less than 50 per cent of the time. For subjects in remote cultures with little western contact, the results diverge even further from the expected answers.
Overall, we found that these and other sorts of emotion-matching experiments, which have supplied the primary evidence for universal emotions, actually teach the expected answers to participants in a subtle way that escaped notice for decades — like an unintentional cheat sheet. In reality, you’re not “reading” faces and voices. The surrounding situation, which provides subtle cues, and your experiences in similar situations, are what allow you to see faces and voices as emotional.
A knitted brow may mean someone is angry, but in other contexts it means they are thinking, or squinting in bright light. Your brain processes this so quickly that the other person’s face and voice seem to speak for themselves. A hypothetical emotion-reading robot would need tremendous knowledge and context to guess someone’s emotional experiences.
So where did the idea of universal emotions come from? Most scientists point to Charles Darwin’s The Expression of the Emotions in Man and Animals (1872) for proof that facial expressions are universal products of natural selection. In fact, Darwin never made that claim. The myth was started in the 1920s by a psychologist, Floyd Allport, whose evolutionary spin job was attributed to Darwin, thus launching nearly a century of misguided beliefs.
Will robots become sophisticated enough to take away jobs that require knowledge of feelings, such as those of a salesperson or a nurse? I think it's unlikely any time soon. You could probably build a robot that learns a person's facial movements in context over a long period. It is far more difficult to generalise across all people in all cultures, even for simple head movements. People in some cultures shake their heads from side to side to mean "yes" or nod to mean "no". Pity the robot that gets those movements backwards. Pity even more the human who depends on that robot.
Nevertheless, tech companies are pressing ahead with emotion-reading devices despite their dubious scientific basis. There is no universal expression of any emotion for a robot to detect. Instead, variety is the norm.
Copyright © 2017 The Financial Times Ltd.