
Rosalind “Roz” Picard: Computers That “Feel Your Pain”

Rosalind (Roz) Picard is founder and director of the Affective Computing Research Group, and associate professor of media arts and sciences, at the Massachusetts Institute of Technology (MIT) Media Laboratory. After receiving her bachelor’s degree in electrical engineering with highest honors from the Georgia Institute of Technology in 1984, she worked as a member of the technical staff at AT&T Bell Laboratories from 1984 to 1987, designing VLSI chips for digital signal processing and developing new methods of image compression and analysis. She earned her master’s degree (1986) and doctorate (1991) both in electrical engineering and computer science from MIT. She joined the MIT Media Lab faculty in 1991. Picard is the author of over 80 peer-reviewed scientific articles on pattern recognition, multidimensional signal modeling, and computer vision. She is known internationally for pioneering research into digital libraries and content-based video retrieval. Her award-winning book Affective Computing (MIT Press, 1997) lays the groundwork for giving machines the skills of emotional intelligence. She has consulted for such companies as Apple, AT&T, BT, HP, and Interval, and has been keynote or plenary speaker at dozens of scientific and industry gatherings, including AAAI, HCI, ICASSP, Index Vanguard, Illinois CYBERFEST, WETICE, Future of Health Technology, Club of Rome, and IMAGINA. Her work has been featured in such publications as The New York Times, Scientific American Frontiers, NPR’s Tech Nation, ABC’s Nightline, Time, and Vogue. She is married and lives in Newton, Massachusetts with her husband, son, and seven non-affective computers.

◊ ◊ ◊ ◊ ◊

Ethix: What is affective computing and why is it important?

Rosalind W. Picard: Affective computing is computing that relates to, arises from, or deliberately influences emotion. The idea is to bring the skills of emotional intelligence into the technology domain, paying attention and responding appropriately to people’s emotions.

Right now computers ignore us when we get irritated at them. If you want to personalize something today, it’s up to you to find the 80 menus and set all the items every six months when everything changes. If computers were sensitive to when we like or dislike what they are doing, if they could automatically suggest ways that they could be tailored to us and be a bit more proactive in adjusting to us, hopefully our overall stress and aggravation would decrease.

Microsoft Windows has an irritating automatic icon that pops up and says “I see you’re trying to write a letter. Can I help you?” Is this affective computing?

Many people find that assistant very irritating. An important guiding principle is “what would a person do in that situation?” If a stranger burst on the scene and began helping you but was presumptuous and totally blind and deaf to your reactions, you would be irritated. If someone irritated you like that and paid no attention when you scowled, frowned, and yelled, and if they didn’t apologize or go away, you would really dislike them.

This is why a lot of interfaces fail. Reeves and Nass of Stanford University’s communications department have argued that human-computer interaction defaults to a social kind of metaphor. We take with us, into this interface, a bunch of presuppositions and expectations from our human interactions. So we can predict, for example, that if an interface talks but doesn’t listen to you (like the car that tells you the door is still open or the camera that announces in front of the subjects you are photographing that you have not removed the lens cap!), it will fail. These interactions are like a nagging person who doesn’t listen and yet is telling you what to do.

I think that kind of interface is a real turn-off for most people. Of course there are always exceptions. Every time I give a talk and mention how I hate those little computerized office assistants, somebody comes up to me afterwards and says “my sister really likes that thing.” There’s not a one-size-fits-all solution. Just as people like different styles of interaction with one another, so it is with their interactions with computers. Some people like interaction that is submissive, friendly, and helpful. Others can’t stand it.

I like my friends to be sensitive and responsive to my moods — but not my tools.

And you’re using the word “tool” because you want to be in control?

I guess so. But another concern I have is further blurring the distinction between humans and machines — using terms like “plugged in” and “programmed” for humans and “viruses” and “intelligence” for computers. Is it a good thing to further blur this distinction by saying that machines have “feelings” and “affect”?

Well, it’s a very big question. First of all, to be deceptive in any way is clearly wrong. It is not right to pretend that a machine could feel something or have genuinely human empathy for a user when it is merely playing some script.

What we have seen repeatedly is that even people like MIT and Stanford computer science students — who know how the computer is built down to its atoms and then some — seem to treat the computer as though it has feelings. When these students are presented with information by a computer and the computer afterwards asks them to “rate this computer” on a scale of, say, 1 to 7 (no face, no voice, no little animated character, just the minimal script “rate this computer”), they might give it a 6 — the presentation was really great. But, when they go to another computer and it asks them to rate that first computer’s presentation, they give it a 5. They’re slightly nicer to the original one when directly interacting with it; when they’re behind its back (backplane) they’re not quite as nice.

When interviewed later about why they did this, these students first of all denied it: “It doesn’t make sense; why would I be nicer? It sounds like I was trying to be nicer to it as if it has feelings, and I know it doesn’t. Maybe I was projecting onto the designer of the system or something.”

This example is one of two dozen or so experiments that take a classic human-human interaction and replace one of the people with a computer.

People bring various metaphors with them to their interfaces with machines as well as humans. For example, in human interactions, relying on the “specialist” metaphor means that if Al is introduced as a specialist in sparse matrices and David as a specialist in ethics, then when we start talking about sparse matrices I’m going to think that Al’s information is more informative and accurate. In experiments where television sets showed the exact same content, one set labeled a “news” television (the “specialist”) and the other labeled an “entertainment and news” television (the generalist), people evaluated the “news” version as having much higher accuracy, even though the content was identical. Even people who are not blurring anything in their minds are still behaving with a metaphor, as if it behooves them to treat machines this way.

I am concerned about the blurring. Ben Shneiderman, a strong critic of anything that pretends to have human-like qualities but really doesn’t, has pointed out that all technology that has tried to duplicate people and nature over the years has been unsuccessful. Cars don’t have legs and the wings of planes don’t flap. Looking at nature is inspiring, but duplicating it may not be the best way to go. I think it’s very important to look at how people and computers interact. It shows respect to the user to try to facilitate what comes naturally to them. However, to presume that the computer has to duplicate a person to handle that interface well is a needless limitation on how we think about design. We’ve got to think beyond that.

My concern is partly that we might treat machines as persons, but I’m much more concerned about the other direction: that we might be treating people as though they are machines.

It’s already happening. This started at least as far back as when mechanical machines became stronger than people. Our metaphors for the mind were then mechanical — gears, pulleys, cables, and so on.

In Affective Computing you say that wise decision-making and creativity as well as our emotional health are largely dependent on being in an environment that feeds back positively into our emotional life. To interact long-term with computer technology that doesn’t acknowledge human emotion could be damaging to us. One way to respond is to make computers more affective — but why not simply build teams of real, affective human beings in the workplace?

Actually we need both approaches. It depends on the task. There are tasks where you just need a tool to get the job done. Sometimes you just sit down with a person and work like crazy to get a job done and there is not a lot of affect even in this human-human interaction. You both sort of function like automatons at that point.

There’s another entirely different scenario where a new person comes up to a machine and is intimidated or scared. They don’t know what they’re doing, they’re making mistakes. They might need a presence that senses and responds appropriately to their fear and malaise. I don’t think the computer should deceive them that it is equivalent to a caring human being, but it could be much more sensitive and helpful. It is interesting, though, that there have been experiments where through the teletype the computer has pretended to be a physician to the patient as it gathers information. Patients have sometimes preferred that to interacting with a physician.

It can be extremely frustrating to try to carry out transactions on-line if your case is just a little out of the ordinary and you don’t fit into the categories the machine provides. One gets desperate to speak with a real human being.

Our whole work in computer empathy arose when one of my graduate students, Jonathan Klein, was so badly mistreated on an airplane on his honeymoon that he said he could build a computer which would respond better to customers than his flight attendant had. So, he built a computer that, in a very limited way, showed active listening, empathy, and sympathy to frustrated users. It didn’t pretend to really feel anything, it didn’t refer to itself as “I.” It just said things like “Gee, it sounds like that was a crummy experience.”

As people increase the number of hours they interact with machines, or with people through machines, computers could at least show the kind of polite, appropriately distant sensitivity we expect from good customer service. Apologizing when they keep you waiting and simple things like that would go a long way. The computer itself is not going to pretend to really feel upset. But, we found that when these responses are delivered in a computer-appropriate way, the effect on people is measurable. They choose to return and interact with the system longer and show signs of less stress. There are some real tangible benefits.

Is this the end objective, then, that the whole system of people and computers interacts in a much more effective way?

Exactly. People will be more productive, more useful, with a greater willingness to go back and interact with an affective machine. I think if we could measure stress and productivity with machines, in the same way we measure price and performance, we would see advertising touting not just the price-performance curve, but also a new curve — call it price-productivity or price-peace. Many people would pay more, even with slightly less power on the machine, for a more pleasurable work experience at the end of the day.

What do you see happening by 2020 in terms of computers and human interaction?

Our current affect recognition rates are about where speech recognition rates stood fifty years ago. I don’t think it will take us fifty years to get as far as speech has come as we have vastly better tools now. Yet, speech recognition still has a long way to go. With any significant background noise, like if you talk on a noisy street corner with a cell phone, your speech becomes more stressed and speech recognition drops from the 90% rate under the best conditions down to about 50%, which makes it pretty useless. So, I think we have to be careful not to set expectations too high. We should not expect it to be like interacting with your assistant who can read right away that something went wrong that morning.

I hope affective computing will not be as juvenile as some of the things we’ve seen so far — adding an engaging little dancing agent, or generating mechanical, inappropriate apologies because we think this is a formula for pleasing clients. People tend to take the message that affect is important and vastly oversimplify its implementation. If not implemented well it can really backfire.

Do you share at all Bill Joy’s concern that computers could take over our world and render humans irrelevant by 2020?

This concern has been around since even before Marvin Minsky famously predicted that “computers will so far surpass us in ability that we’ll be lucky if they keep us around as household pets.” No, I’m not in the “we’re going to be household pets” camp.

I welcome the Joy article as a kind of wild speculation. I don’t think there’s any harm as long as you make it clear you are speculating and getting people to think about fanciful future scenarios. The public generally doesn’t consider the potential downside of technology and they need to do that. Many people ask me why I included a chapter in my book on potential concerns in affective computing. “Why are you putting this here and shooting yourself in the foot?” Not to consider this side would be irresponsible.

We can see quite far ahead where this stuff could go, but as for all the possible horrible scenarios, the science fiction writers have already beaten us to them.

Some used to assume that if we just had powerful enough computing we could solve our problems. But, the problems always turned out to be deeper. More computation didn’t necessarily mean wiser decisions.

It’s sort of like saying that if you extrapolate on Deep Blue, you will not only have a chess machine that can beat the next several generations of Kasparovs, but a machine that, once it can beat everybody, will instantly be able to feel its success, respond appropriately to critics of that success, and show grace and sensitivity! It is ludicrous to think that simply extrapolating computation is going to solve the really hard problems. Actually, extrapolating computation has gotten us slower word processing lately, and slower e-mail. Things have not gotten any less stressful to use with ten times the computational power.

This kind of thinking doesn’t even begin to explain the cases, for example, where a child grows up with 10% of a brain but it isn’t discovered until adulthood. The child has been functioning perfectly intelligently the whole time. It doesn’t explain cases where people have functioning brains, but are totally incompetent in day-to-day interaction. Lately I’ve been learning a lot about autistics and one of the current hypotheses is that their brains don’t prune away as many neurons early on. Autism is a very complex disorder. You’ve got what are now called autistic savants: people who are fabulous at memorizing patterns (a lot like computers), but lousy at generalizing (a lot like computers). They have fabulous memory capacity for retrieving information but are lousy at making common-sense daily decisions and generalizing information from one situation to the next. So too, getting computers to generalize is very difficult — it’s still a real puzzle in machine learning. We can get them to generalize only within very restricted scopes where we’ve defined their functions.

Does this argue that we ought to be thinking about computers more as aids to people rather than replacements for people?

I don’t think we’ve ever really succeeded in replacing people. Certain jobs have been taken over by machines, but those were perhaps the least human tasks. I do see a problem if society is insensitive to the needs of citizens for jobs. We should be creating ways to employ people whose jobs might be taken over by these machines.

So yes, computers are tools rather than replacements for people, but so many people feel that they are being used and manipulated by these tools today. We engineers think computers are great. We like tools that are challenging, that make us have to work to use them. That’s just something about us. But, most users today are not engineers and resent having to go through so much trouble every few months with upgrades and relearning programs. Users are made to feel ignorant, buying books like Windows for Dummies. That’s a very bad thing to do to people. In fact, it’s the computer that’s the dummy, not the person.

Don’t most people hate calling up businesses and having to wade through long telephone menus when a living person could help us in far less time with far less stress?

We need a “Fed Up” button on the phone for “I’m sick and tired of this menu.” You are left feeling that the business no longer regards its customers’ time as valuable.

Sometimes you do get what you want very quickly without any problem and sometimes you don’t. What you’d like is a system that recognizes the difference.

Again, the computer is not taking that kind of feedback right now. That’s where a lot of our work is beginning to focus. We are making progress in equipping systems so that people can express feedback about how the system is doing. We’ve added pressure and tension sensors to mice and phones so that while you’re actually interacting with the product, if something drives you crazy, you can express it right then. We’re trying to give people lots of natural ways to express to the system that something is or is not working well. The idea is for the system to collect ongoing usability information from users. Many people are happy to give that affective feedback, especially if they receive benefits, discounts, or whatever. They will go to a little extra trouble to communicate not just that they are irritated, but why. This information should help things operate better.
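To make the idea concrete, here is a minimal sketch of how readings from a hypothetical pressure-sensitive mouse might be turned into ongoing usability feedback. This is not the Media Lab’s actual software; the class and field names are invented for illustration.

```python
from dataclasses import dataclass
from statistics import mean
import time

# Hypothetical reading from a pressure-sensitive mouse: grip pressure in
# arbitrary sensor units, plus the UI context the user was in at the time.
@dataclass
class MouseSample:
    timestamp: float
    pressure: float
    ui_context: str  # e.g. "print-dialog", "file-save"

class TensionLogger:
    """Collects pressure samples and flags moments of unusually high tension."""

    def __init__(self, window: int = 50, threshold: float = 2.0):
        self.window = window        # number of recent samples used as a baseline
        self.threshold = threshold  # how far above baseline counts as "tense"
        self.samples = []           # raw readings (kept only to form the baseline)
        self.events = []            # the usability feedback we would keep

    def add(self, sample: MouseSample) -> None:
        self.samples.append(sample)
        recent = self.samples[-self.window:]
        baseline = mean(s.pressure for s in recent)
        # Flag a tension event when grip pressure spikes well above the user's
        # own recent baseline, and remember what they were doing at the time.
        if len(recent) >= 10 and sample.pressure > self.threshold * baseline:
            self.events.append({
                "when": sample.timestamp,
                "context": sample.ui_context,
                "pressure": sample.pressure,
                "baseline": baseline,
            })

# Usage: feed in readings as they arrive; inspect logger.events afterwards.
logger = TensionLogger()
logger.add(MouseSample(time.time(), pressure=0.4, ui_context="editing"))
```

The per-user baseline reflects the “no one-size-fits-all” theme of the interview: what counts as an unusually tense grip is judged relative to that user’s own recent behavior rather than against an absolute scale.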

So you are developing computers that will collect data and create a profile of me.

Actually, the Microsoft system does that already, trying to anticipate what you’re doing, trying to look at patterns of what you’re doing. What it hasn’t done in the past is pay attention to how you are doing these things. It has focused on the what, but not the how. Think of the difference between what you say and how you say it. Both are important but how you say it can be even more important than what you say. Computers have measured what I’m clicking on, what I’m typing, and if I’ve made the same error three times. They haven’t noticed whether I’ve clicked joyfully, with interest and curiosity, or with frustration and great tension.

Just like my grocery store uses my discount card to build a profile of my consumer self, my computer will build a profile of my emotional self. The store then pushes goods toward what they think are my appetites, and the computer will address what it thinks are my emotional needs? I’m not sure I like that.

With a close friend, you have enough rapport that you’ve gradually revealed a good deal of information about your emotional buttons over time. However, you might not be comfortable walking into a strange hotel and being greeted: “Hi Dr. Gill! Here is a copy of your favorite newspaper and I ordered your favorite drink to be waiting for you in your room.” It is not really appropriate for strangers to know these things about you.

I’m aghast right now at how much information about people is being gathered (and sold for quite a good price) without their knowledge or benefit. Most of the time people are oblivious that anything is being sensed and collected. We are trying to address this problem, first, by being up front with people about what we’re sensing. Second, we want to give people a means of choosing different forms of sensing — from “nothing sensed unless I deliberately authorize and communicate it,” on one extreme — to “I now know and trust my system so that I no longer need to be overt with it, and will let it sense some more subtle things.”
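As a rough illustration of what such a choice might look like in software (the names and levels are purely hypothetical, not the group’s actual system), consent could be modeled as an explicit setting that a policy check consults before anything is recorded:

```python
from enum import Enum

# Hypothetical consent levels spanning the spectrum described above, from
# explicit, user-initiated signals only, up to passive sensing of subtler cues.
class SensingConsent(Enum):
    NOTHING = 0            # nothing sensed at all
    EXPLICIT_ONLY = 1      # only signals the user deliberately sends
    DECLARED_SENSORS = 2   # named sensors the user has switched on
    SUBTLE_ALLOWED = 3     # a trusted system may also infer subtler cues

def may_record(signal_kind: str, consent: SensingConsent) -> bool:
    """Illustrative policy check: record a signal only if consent covers it."""
    if consent is SensingConsent.NOTHING:
        return False
    if consent is SensingConsent.EXPLICIT_ONLY:
        return signal_kind == "explicit"
    if consent is SensingConsent.DECLARED_SENSORS:
        return signal_kind in ("explicit", "declared")
    return True  # SUBTLE_ALLOWED covers everything

# A deliberately pressed "I'm upset" button is recorded; an inferred cue is not.
assert may_record("explicit", SensingConsent.EXPLICIT_ONLY)
assert not may_record("inferred", SensingConsent.EXPLICIT_ONLY)
```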

Several companies are sensing subtle things right now without waiting to build any trust in people, and I think that’s very risky. We are trying to be up-front with people from the beginning: “hit this thing if you’re upset; otherwise nothing is sensed.” At the end you have the right to edit out any of this information before it gets transmitted to others. Otherwise, it stays encrypted on your side.

Another issue is the storage of that information. Could all of the important stuff be processed in real time with nothing needing to be stored? We’re trying to develop on-line recognition algorithms so that no personal data has to be recorded in any form that would compromise privacy. If we really have to store a profile, could we store it in such a way that it would be meaningless to all but one or two expert decoders?
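One way to read “processed in real time with nothing needing to be stored” is to fold each raw reading into a running summary and discard it immediately, so only aggregates ever persist. Here is a minimal sketch of that idea (illustrative only, not the group’s actual recognition algorithms):

```python
# Running mean and variance via Welford's algorithm: each raw affective
# reading updates the summary and is then thrown away, so no individual
# sample ever needs to be written to disk.
class RunningAffectSummary:
    def __init__(self):
        self.count = 0
        self.mean = 0.0   # running mean of, say, a frustration score
        self.m2 = 0.0     # running sum of squared deviations from the mean

    def update(self, score: float) -> None:
        self.count += 1
        delta = score - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (score - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.count - 1) if self.count > 1 else 0.0

summary = RunningAffectSummary()
for reading in (0.2, 0.7, 0.9, 0.3):    # raw readings live only in this loop
    summary.update(reading)
print(summary.mean, summary.variance)   # only the aggregate survives
```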

Of course, people react differently. I like it when Amazon.com says “I think you’d like these five books” because they’re usually right and it’s helpful. David feels intruded upon and prefers to browse the bookstore by himself.

One of our big messages is that there is no “one-size-fits-all” in these matters. Now, how does a person figure out the difference between what Al wants and what David wants? A good bookstore customer service person would pay attention to how you respond to their inquiries, suggestions, and so on. Then, they would figure out to give Al suggestions when he walks in, but just smile at David. Maybe among all book customers there are twenty different styles of how people look for and choose books to purchase and maybe there’s a new style added every three months or so. They can’t be brittle and fixed; they have to adjust. This is the way affective interfaces must be. They are based on give and take. We’re trying to facilitate that give and take.

Will a good pastor use affective computing some day to discern the opportune moment to take the offering — when the congregation is emotionally oriented to give generously?

That’s nothing compared to the telemarketers who would call everybody with a cell phone when they’re in a good mood and therefore more likely to buy. Those video surveillance cameras installed in the garage for your safety could potentially sense that a person is walking happily to their car, and that now is a good time to ring them up and make that deal.

How far does all of this go? I often feel better when someone pats me on the shoulder. Will future computers have arms that will give me this occasional pat?

One of my former students has described building a little thing that reaches out and pats you. Touch is incredibly important, especially in the medical field, where it can make a difference in whether people survive or not. We’re looking not at how to replace the person, but at how, in the absence of or in addition to the person, there might be a way to provide some of this support in a computer-appropriate way. Again, we are not trying to give cars legs and airplanes flapping wings but, inspired by the amazing things that legs and wings do, we are trying to think of similarly amazing things that maybe machines could do. We have things in the lab that you touch; we’ve been building a lot of tangible interfaces. We’ve been paying a lot of attention to touch lately.

Could the computer sense that you’re stressed and automatically turn down a speaking and travel opportunity for you?

No, I don’t want it taking control from me. And, stress is not the only criterion these decisions should be based on. However, as I build trust with an assistant I’m willing to delegate a little bit more of the things that are predictable. I trust my filters to do certain tasks for me, but I’m not willing to relinquish control of my schedule. My assistant will keep my calendar, but only with lots of feedback from me. What I do look forward to is when the computer assistant really is pleasant to interact with — not “friendly” so much as savvy about what pleases or displeases me. If I repeatedly frown at its behavior then I want it to consider how to change that behavior, and take steps to improve. We’re getting closer to moving this burden off of the user and onto the system.

The high-tech field you work in provides hundreds of opportunities to go off and work for a dot-com and become a billionaire, and yet you stay here at the university. Why?

I do get a lot of calls for start-ups. Even when I tell people I’m not interested they still call: “Well, how about just this minimal commitment on our board?” We’ve had a lot of start-ups in our family, and we’ve been incredibly blessed with success with the ones we’ve had, so it’s not like I’ve been totally uninvolved. But, I have no desire personally to go and pursue one right now.

That might change, but I’ve said no to a lot of really exciting things, lots of dollar signs and big names. My time and interests are quite fulfilled right now as a wife and mother and a professor at MIT. I frankly would rather be mentoring students and working with people trying to help shape the future of technology than out there chasing venture capital, going to industry shows, and getting products out the door. I do consult for companies, but only when the product or project really interests me for its own sake, intrinsically. I don’t think that doing it for the dollar is satisfying. I’ve never seen money buy satisfaction.

Universities have been criticized for the influence of corporate giving on their research agenda. The university’s supposedly traditional independence and pursuit of knowledge appear to be threatened. Is the relationship between the corporate world and the university a problem?

I think most corporate funding tends to be good. To depend just on the government is not good. Corporate relationships tend to make our research more relevant, more in touch with real needs out there. The MIT Media Lab has been 80% industry funded and 20% government funded; just the opposite of most of academia. That said, I have personally urged our director to turn down certain sources of money. We also have to be careful what we promise up front. We work mostly on “charm money” — money that doesn’t come with a laundry list of promises; rather we work with the donor to try to develop some things of mutual benefit and interest.

Another challenge in higher education is specialization and separation of departments and areas. Do MIT Media Lab techies have any interaction with sociologists, historians, philosophers, poets and others who might provide alternate critical perspectives on your projects?

Most technology people don’t seek out such interaction. In the Media Lab we mix the sciences and the arts, but the arts we mix in are more design arts rather than the social critics and others you mention. Sherry Turkle, with her sociology and psychology background, has sought us out, but more as somebody who wants to understand technology than critique it. She is interested in our wearable computers and in the effects on people when they interact with computers that manifest emotional abilities. Aaron Sloman, a philosopher in the U.K., has interacted with me about giving machines emotional abilities. Norm Weinstein, a poet, wrote a fabulous review of my book, with some great insights.

I’ve interacted with theologians at meetings on identity/human dignity. When I contributed a chapter for Hal’s Legacy I got invited to a lot of Hal’s birthday events and I was even speaking at literature departments. I have met people who called themselves cultural critics, who were talking about technology. But they were largely talking about outdated technology. It showed me they really weren’t up with the times. Engineers actually talk informally among themselves quite a bit about social and philosophical issues of technology but before I came to the Media Lab I never realized there were people who focused their careers on such areas.

Even if technology critics are a bit outdated in their illustrations, we should not necessarily dismiss their whole argument. Their main concern is probably not to critique the latest thing but to raise the deepest, long-term, perennial human issues. What is knowledge? What makes life worth living? How shall we understand our tools? How shall we relate to nature?

The Media Lab has a wonderful qualifying exam process for our students. In addition to drilling them on all kinds of technical stuff and on things in their main field, our students prepare and are examined on a paper that often brings larger social, philosophical, and ethical issues into play, and other people from around MIT are brought in to examine it. This is a time when we faculty get to interact a little more with colleagues in other fields. There’s definitely a sort of “nothing new under the sun” sense to these discussions. Others have already thought about these issues and we benefit from looking at their ideas.

Your bio indicates that you have an interesting life outside the lab — doing risky and exciting things like riding camels, swimming with sharks, and swinging from airplane wings. Do such personal experiences outside the lab have any relation to your work here?

I was a fairly shy and introverted child who lived in my own little world. But one day, when I was going to France to work for the summer, I realized I would be living with people who had no idea who I was. For all they knew I was this gregarious, risk-taking, crazy person or whatever. So I made a list of 100 things that might be fun and exciting to do — like jump out of an airplane, learn how satellites work, and so on. I was just a high school kid and I set out to learn about and do everything I could. I became a real risk taker. Now I’m not going to take foolish risks — like jumping out of an airplane without making sure everything is in good shape. And actually, now that I have children I’m not jumping out of planes because I feel like that’s an unnecessary risk to impose on a child. Risk taking has shifted in my life from physical risks (although I still bicycle around Boston) to more intellectual ones — being willing to buck my colleagues’ assumptions and explore other ways.

Jerry Wiesner, former science advisor to Kennedy and president of MIT, told me shortly before he passed away that the most important thing researchers and faculty can do is take risks and really venture out beyond the incremental refining of what already exists. Let others refine the wagon wheel and make it smoother and stronger. At the MIT Media Lab we should look way beyond and leap ahead and invent the airplane, the jet, or whatever the future could use. Of course it is important to keep refining those wagon wheels. We wouldn’t want to ride in any airplanes right now if people were just constantly inventing the next one and not refining the last one. But at MIT we are challenged to be the ones who take real risks.

How about your reading habits? Do they go far afield or are they pretty focused in technology?

I read all over the place. For example I read religious works by scholars who know more than I do. It helps me get insight into much larger issues than my research requires. I’ve recently been enjoying a religious book by a technical colleague, Donald Knuth’s 3:16. I’ve always been a fan of his technical books and had read a number of those. I had the privilege of getting to know him a little when he visited here last fall.

Does this kind of reading relate in any way to your research — as you think about people for example?

Absolutely. I’m surprised how many in my field don’t think people are that important. Throw out a term like human dignity and they don’t know where to take it — they see similarities among people and machines, but not differences. I’ve been invited to speak at conferences with theologians and people in other fields that I would ordinarily never interact with. They have challenged me to think about human dignity and its value and whether we are conferring something like human dignity and identity on our machines. These are things that don’t arise in my day-to-day research in matrix equations or whatever.

These issues do inform the way that we prioritize what we do. For example, when we were going to study people’s feelings in using computers, we asked all of our potential human subjects if they would mind us putting a camera, microphone, and various new sensors we developed in their environment. A bunch of them objected. Instead of trying hard to convince them why they should do it our way, we listened and then decided to go back and build some different sensors that the subjects would be in charge of. It wasn’t our initial agenda but we realized if we were to really practice what we preached in terms of respecting their feelings we had to go back and build what they wanted. We still operate that way. With each thing we build we ask how it is affirming the users, who they are and what their goals are — as opposed to our traditional engineering approach of “it’s cool! let’s build it and they’ll adjust, and if not they’ll read the manual and we’ll give them courses on it. We’ll sort of force them to figure it out.”

On your interest in religious and spiritual matters, is there any conflict for a person of faith in a scientific community that tends to reject that?

All scientists have faith of various sorts. I used to think I had no such thing, and I claimed to be an atheist, but I was challenged to realize that actually we all have things we believe in, even if we only believe in science. Even science rests on certain materialistic presuppositions: that things are ordered and that everything can be explained by the scientific method. So there’s a lot of faith among my scientific colleagues, though they bristle at the idea. Science wants things to be rigorously defended, but I feel that a rigorous defense of the faith I have can be given, so that doesn’t pose a problem for me. I feel that people of faith, just like people of science, should always be open and questioning, and willing to admit “gee, maybe I didn’t get that quite right.” In my experience, when I’ve had questions and gone back and checked the sources and found really solid scholarship, I’ve come back even stronger in my faith.

You obviously have a very exciting and successful career and you also have a family. How do you balance these commitments?

I think, like every working mom who really puts her family first, I’m always looking for flexibility in my job situation. I say no to a lot of things. I sometimes have days where I’ve just said no to things all day, which feels kind of crummy at the end of the day. All I accomplished was saying no to everything. One day I decided to start writing on my calendar the big, time-consuming trips that I had said no to. Then when I come to that week on the calendar I see that I could have been in Korea this week, and Illinois and Japan next week, if I’d accepted all these invitations. Then I feel really good about saying no, because the time is filled with plenty of other good things. And the invitations keep coming, so I guess it hasn’t killed my career.

I have wonderful flexibility with my job. It is rare that I can’t cancel or move things to be home with my child when he’s sick or to be there for something he needs. MIT is wonderful about letting me work any hour of the day (as long as you work all hours of the day). I feel like one of those people who stands up in one of those AA meetings and says “I am an alcoholic even though I haven’t had a drink for ten years.” I am a workaholic even though I think I keep it under control. But it’s an unstable equilibrium; everything is constantly trying to tip it out of balance and I have to pull together every resource I have to keep my values and priorities on top.

Could more affective computers help you in the future with this challenge?

Our machines currently cause us a lot of aggravation and waste a lot of time. The designers are constantly thinking of sixteen cool new features they can add — instead of going back and looking at the sixteen thousand features already there to find which ones are the most aggravating to people and which ones could really save people time. I think there’s a basic disregard for how the customer feels. It’s much more exciting to add some new bells and whistles and print that on the box than it is to really fix what’s there.

The old saying “if you can’t measure it, you can’t manage it” is germane. We’re not saying that emotion and affect can be reduced to mere measurement, but by trying to measure certain aspects of them and putting an engineering spin on it, I’m hoping that we will have an impact that leaves people less stressed and more productive at the end of the day. Less stress translates to greater creativity. Greater positive affect has even been shown to correlate with better decision making: not just greater creativity, but also reaching the correct decision faster and in a more humane way.
