Update: Computing Emotions


Rosalind W. Picard is the founder and director of the Affective Computing Research Group at the Massachusetts Institute of Technology (MIT) Media Laboratory and co-director of the Things That Think Consortium, the largest industrial sponsorship organization at the lab. She holds a bachelor’s in electrical engineering with highest honors from the Georgia Institute of Technology, and master’s and doctoral degrees, both in electrical engineering and computer science, from MIT. She was honored as a fellow of the IEEE in 2005. She is the author of over 100 peer-reviewed scientific articles in multidimensional signal modeling, computer vision, pattern recognition, machine learning, and human-computer interaction.

◊ ◊ ◊ ◊ ◊

“Affective computing has gone from a crazy fringe activity to one with professional recognition, dozens of workshops and a biannual international conference devoted to the subject, an encyclopedia entry, and an international professional society developing a journal,” according to Rosalind Picard, founder and director of the Affective Computing Research Group at the MIT Media Laboratory. Affective computing is computing that relates to, arises from, or deliberately influences emotion.

In October 2000, the Ethix Conversation featured a discussion with Picard regarding her work. At the time, the application we talked about was having the computer respond to the user in a more “humanlike” way: understanding whether the user was frustrated, bored, or simply wanted a few more clues. Much has happened since.

“This has been a very fruitful period,” she said. “And we are making incredible progress with the research. Early results are starting to leave the lab and be used in business, and the present research has very exciting potential.”

One early application is for call centers. In 2006, U.S. call centers spent $400 million on software to help them understand customers’ emotions in their interactions with the centers. “We have made early progress in distinguishing between someone speaking loudly for projection and loudly because they are angry. And we can go through speech and pick up on apparently small negative clues that together may give a different picture than what might be heard just by listening to the words.”
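As a minimal sketch of the distinction Picard describes, consider the following, with every feature name and threshold invented for illustration (real systems learn such cues from labeled recordings): loudness by itself is read as projection, while loudness combined with other negative prosodic cues is read as anger.

    # Illustrative sketch only: a hand-tuned rule over hypothetical prosodic
    # features, not the call-center software described above.

    def likely_angry(loudness_db, pitch_variability, speech_rate_wps):
        """Guess whether loud speech is projection or anger.
        All thresholds are invented; real systems learn them from data."""
        loud = loudness_db > 70                          # speaking loudly
        agitated = pitch_variability > 0.3 and speech_rate_wps > 3.5
        # Loudness alone reads as projection; loudness plus other
        # negative cues reads as anger.
        return loud and agitated

    print(likely_angry(75, 0.1, 2.5))  # loud but calm -> False (projection)
    print(likely_angry(75, 0.4, 4.0))  # loud and agitated -> True (anger)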

Another potential application is in deciding which new products to bring to market. There have been two significant problems with using focus groups to date: they have been very dependent on the coordinator of the group, and they have not been reliable in predicting product success. Picard’s group can design new kinds of product interactions in which facial expressions such as delight or disgust, physiological indications of anticipation and excitement, and behavioral measures of product use are all captured at the same time, on top of the self-report subjective measures traditionally gathered by focus groups.
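As a rough illustration of what recording those channels side by side might look like, here is a sketch with invented field names and units; it is not the lab’s actual instrumentation.

    # Illustrative record only; field names and units are invented.
    from dataclasses import dataclass

    @dataclass
    class ProductReaction:
        facial_expression: str       # e.g. "delight" or "disgust", from video
        skin_conductance_us: float   # physiological arousal, in microsiemens
        time_on_task_s: float        # behavioral measure of product use
        self_report_score: int       # traditional focus-group rating, 1 to 7

    session = ProductReaction("delight", 4.2, 38.5, 6)
    print(session)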

“There is a brand new, exciting potential application we have been working on. People with autism tend to struggle with reading cues from those they are talking with. As a result, they don’t know if they are being heard, confusing the listener, interesting the listener, etc. Our goal is to provide a support system for these people, using the computer to pick up the clues and then share the results with the autistic person,” Picard explained.

However, she explained, this is a problem more difficult than getting a computer to compute every chess move. In chess, there are 20 possible opening moves, and the explosion of possibilities with each new move makes the analysis and look-ahead problem very complex; but chess does not require an instant response. In reading faces, there are 44 muscles producing about 10,000 combinations of facial expressions that must be read and responded to in milliseconds. This makes facial response reading one of the hardest problems in computer science today, according to Picard. Tito Mukhopadhyay, an autistic boy from India and one of many who have trouble reading facial movements, has written, “Faces are like waves, different every moment. Could you remember a particular wave you saw in the ocean?”
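A back-of-the-envelope sketch of the contrast she draws, using the figures quoted above (the look-ahead depth and the millisecond budget are assumptions added for illustration, not measurements):

    # Rough arithmetic only, using the figures quoted in the text.
    chess_branching = 20       # possible opening moves
    plies = 6                  # assumed look-ahead depth, for illustration
    lines = chess_branching ** plies
    print(f"roughly {lines:,} chess lines after {plies} half-moves, "
          "with seconds or minutes available per move")

    facial_muscles = 44
    expressions = 10_000       # approximate distinguishable combinations
    deadline_ms = 100          # assumed response budget, for illustration
    print(f"about {expressions:,} expressions from {facial_muscles} muscles, "
          f"to be read and answered within ~{deadline_ms} ms")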

To simplify this problem, Picard’s laboratory, building on the doctoral work of Rana el Kaliouby at the University of Cambridge, has identified six different reactions that the computer can read in real time from a person’s facial expressions:

Agreeing • Disagreeing • Confused • Interested • Thinking • Concentrating

The lab is now experimenting with different ways to deliver this information to the person. One approach is an audio connection that whispers one of the above states into the autistic person’s ear; the drawback is that it can be distracting in the middle of a conversation. A second approach is some kind of vibrating signal that can be felt. They are also trying a small LED display in the person’s glasses indicating “green” (keep talking), “yellow” (slow down), or “red” (stop talking and ask for clarification). The best way to do this is still being worked out.
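Purely as an illustration, one could connect the six readable states to the green/yellow/red signal described above; the grouping of states into colors here is an assumption, not the lab’s design.

    # Illustrative mapping only; which state maps to which color is assumed.
    FEEDBACK = {
        "agreeing":      "green",   # keep talking
        "interested":    "green",
        "thinking":      "yellow",  # slow down
        "concentrating": "yellow",
        "confused":      "red",     # stop and ask for clarification
        "disagreeing":   "red",
    }

    def led_signal(detected_state: str) -> str:
        """Return the LED color for a state read from the listener's face."""
        return FEEDBACK.get(detected_state, "yellow")  # unknown state: slow down

    print(led_signal("confused"))  # -> red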

This project is one of many that are part of a new center Picard is starting at MIT, using affective and other technologies to try to help people with disabilities. As the technology develops, it may also aid those managers and executives who seem to be “response challenged” for other reasons.