
Dan Ling: Researching Tomorrow’s Technology at Microsoft

Dan Ling is vice president of research at the Microsoft Corporation in Redmond, Washington. Microsoft Research is dedicated to basic and applied research in computer science. Its goal is to develop new technologies that will be key elements in the future of computing, including the creation of Microsoft’s .NET platform.

After receiving his bachelor’s, master’s, and doctoral degrees in electrical engineering from Stanford University, Ling went to the IBM Thomas J. Watson Research Center, eventually becoming senior manager. He was awarded an IBM Outstanding Innovation Award in 1986 for his co-invention of the video-RAM. Ling managed IBM research on advanced microsystems based on 370 and RISC architectures, and the associated systems and VLSI design tools. One of his departments initiated work on a novel machine architecture, organization, and design that led to the IBM RS/6000 workstations. He subsequently managed the veridical user environments department that conducted research in virtual worlds technology, user interfaces, and data visualization.

In 1992, Ling joined Microsoft Research as senior researcher in the area of user interfaces and computer graphics and was one of the founders of the laboratory. He served as director of the Redmond laboratory from 1995 until his promotion to vice president in April 2000. During this time, the Redmond laboratory grew over threefold to include research in new areas such as networking, data mining, computer mediated collaboration, streaming media, devices, and new development tools.

Ling holds seven patents and is the author of numerous publications. He is a member of the Institute of Electrical and Electronics Engineers, the American Physical Society, and the Association for Computing Machinery. He also serves on advisory committees for the University of Washington and the University of California at Berkeley.

◊ ◊ ◊ ◊ ◊

Ethix: How did you get into technology?

Dan Ling: My mother and my grandfather were physicians and there was a lot of pressure when I was growing up to become a physician. But every time I would get a shot when I was little I would hide under the table and refuse to come out. This did not bode well for my becoming a physician.

As an undergraduate at Stanford I took the introductory programming class and fell in love with computers. Writing software was magical. Normally, to turn a thought into something real you must hammer and saw and cut and screw. In computing you have a thought and it becomes something real and there’s nothing physical in between. It’s just amazing.

During my second year at Stanford, the only time we could program the computers was at three o’clock in the morning. I got tired and said “this is for the birds — I’m not becoming a computer scientist.” I changed my major to physics. But when I did my Ph.D. thesis in solid state physics, the lab bought a computer and I volunteered to program it. With another student I developed the software for all of the lab experiments. That’s how I got back into computing. When I graduated and went to IBM I decided to move away from physics.

Software is so delightful; why is it also so difficult? People have tried to find the silver bullet in perfecting software but it seems to defy management somehow.

Software is extraordinarily complicated. If you think about it in terms of moving parts and statements and branches and different conditions where different things happen, that is one measure of complexity. A lot of research is yet to be done to understand how to build systems with greater degrees of reliability, predictable performance, and better security.
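
One way to make that notion of complexity concrete is to count decision points in a piece of code, roughly in the spirit of cyclomatic complexity. The sketch below is a toy illustration in Python, not a tool discussed in the interview; which node types count as "branches" is an assumption made for the example.

```python
# Toy illustration: count statements and branch points in a piece of Python
# source, roughly in the spirit of cyclomatic complexity. This is a made-up
# example, not a tool described in the interview.
import ast

SOURCE = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x % 2 == 0:
            x += 1
    while x > 10:
        x -= 5
    return "done"
"""

# Node types treated as decision points for this example (an assumption).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)

def crude_complexity(source):
    tree = ast.parse(source)
    statements = sum(isinstance(node, ast.stmt) for node in ast.walk(tree))
    branches = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
    # A common convention: complexity grows with the number of decision points.
    return {"statements": statements, "branches": branches, "complexity": branches + 1}

if __name__ == "__main__":
    print(crude_complexity(SOURCE))
```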

It’s frustrating that the research doesn’t seem to have had much impact. Where is the breakthrough going to come from?

I have no idea, to be perfectly honest. In the research community one of the disadvantages has been access only to relatively small artifacts where these problems are much more tractable. The really difficult problems arise in very large, complex systems. Researchers need access to some of these larger systems to have a more realistic base on which to work.

Could you describe for us what your research lab does?

We have about 300 people working on a wide range of projects — technologies we think will be fundamental to computing and change the way people use their computers. For example, Michael Freedman, a well-known topologist, has found an interesting analogy between a topological model and a quantum computational model. He looks for physical systems that can do quantum computing and be much more robust to errors. We also have people thinking about how computers can improve speech recognition, not just for simple dictation, but so you could, for example, telephone your machine and ask it to do something.

Bill Gates once commented that speech recognition will be where he wants it when “wreck a nice beach” and “recognize speech” are distinguishable. How are we doing on that one?

It is almost impossible to distinguish between those two sounds without context. We need to build a system that has some understanding of the context and changes the probabilities of co-occurrences of words depending on the context. Within a limited domain one can do quite well with things like that.
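
As a toy illustration of that idea (with made-up numbers, not Microsoft’s recognizer), a scorer can prefer one transcription over the other depending on which domain words have recently been heard:

```python
# Toy illustration of context-dependent disambiguation: the same two candidate
# transcriptions are scored differently depending on recent context words.
# The co-occurrence weights are invented for the example; this is not a real
# speech recognizer.
CANDIDATES = ["recognize speech", "wreck a nice beach"]

# How strongly each candidate co-occurs with words from two hypothetical domains.
CO_OCCURRENCE = {
    "recognize speech":   {"software": 0.9, "dictation": 0.8, "vacation": 0.1, "sand": 0.1},
    "wreck a nice beach": {"software": 0.1, "dictation": 0.1, "vacation": 0.9, "sand": 0.8},
}

def score(candidate, context):
    """Average co-occurrence weight between the candidate and the context words."""
    weights = [CO_OCCURRENCE[candidate].get(word, 0.5) for word in context]
    return sum(weights) / len(weights) if weights else 0.5

def best_candidate(context):
    return max(CANDIDATES, key=lambda c: score(c, context))

if __name__ == "__main__":
    print(best_candidate(["software", "dictation"]))  # -> "recognize speech"
    print(best_candidate(["vacation", "sand"]))       # -> "wreck a nice beach"
```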

Give us a few other examples of things that you’re working on that may change the future of the way people work and live.

Another of our projects is looking at how technology can help people communicate and collaborate. At its foundation, this is what education is about. Our educational system has not changed much for a long time and there are good reasons to believe that technology can help.

The idea of getting all of our education up front in the first eighteen or twenty-two years seems rather outdated. We need to provide people with real life-long learning over the entire course of their career as they change jobs or career goals, and as technology changes the business environment. We also need more flexibility about where one provides education. People can’t necessarily travel at a fixed time to a fixed location so education must be available at work or home or while looking after the kids, and so on.

Is this kind of research internally funded by Microsoft or are you partnering with others who might want eventually to sell these educational products?

Our educational research is funded internally. Microsoft Research is building prototype systems which we then deploy in our internal training classes to see how people use the systems and what they like and don’t like. We also have a collaborative “I-campus” project with MIT exploring how technology might change the way that MIT teaches various courses.

What might such change look like?

Distance education immediately brings lots of challenges to mind. For example, how does the lecturer know what’s going on with the students? Are they paying attention and interested? Are they falling behind or asleep? Students also learn a lot by being part of a community. They learn from each other as well as from the lecturer. How might technology be used to enhance those elements or even improve on the traditional lecture format?

We’ve been experimenting with a system where students will be able to easily pose questions to the lecturer — who will then see all the questions that various students have asked and choose to answer them at convenient points. The lecturer could also easily post requests for feedback — like “Am I going too fast?” “Too slow?” “Is this point clear?” — and rapidly see the tally of the votes coming back from the students.
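
A minimal sketch of how such a question queue and quick feedback poll might be modeled is shown below; the class and method names are invented for illustration and do not describe the actual prototype.

```python
# Minimal sketch of a lecture question queue plus a quick feedback poll.
# Names and structure are invented for illustration, not taken from the
# Microsoft Research prototype described in the interview.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class LiveLecture:
    questions: list = field(default_factory=list)  # (student, question) pairs
    polls: dict = field(default_factory=dict)      # prompt -> Counter of votes

    def ask(self, student, question):
        """A student poses a question; the lecturer answers at a convenient point."""
        self.questions.append((student, question))

    def open_poll(self, prompt):
        """The lecturer posts a request for feedback, e.g. 'Am I going too fast?'"""
        self.polls[prompt] = Counter()

    def vote(self, prompt, answer):
        self.polls[prompt][answer] += 1

    def tally(self, prompt):
        return dict(self.polls[prompt])

if __name__ == "__main__":
    lecture = LiveLecture()
    lecture.ask("alice", "Can you repeat the last definition?")
    lecture.open_poll("Am I going too fast?")
    for answer in ("no", "no", "yes"):
        lecture.vote("Am I going too fast?", answer)
    print(lecture.questions)
    print(lecture.tally("Am I going too fast?"))
```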

We’re also trying to get some “presence” feedback so that the lecturer can see roughly how many people are out there, who is participating most actively, and who is hanging in the background. We’re doing this with a constellation of little pictures and as students participate more and more actively their pictures get bigger. If they don’t participate their pictures recede into the distance. This provides a sense of how many people are out there and how many are actively participating.
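
One way to picture the “constellation” is as a simple mapping from recent participation to picture size, with inactive students slowly receding. The constants below are assumptions made up for the sketch, not values from the actual system.

```python
# Hypothetical sketch of the presence "constellation": each student's picture
# grows with recent participation and slowly shrinks (recedes) otherwise.
# The scaling constants are invented; this is not the actual prototype.
MIN_SIZE, MAX_SIZE = 24, 96   # picture size in pixels
DECAY = 0.9                   # fraction of the activity score kept per time step

def update_constellation(activity, events):
    """Decay everyone's activity score, credit new participation events,
    and map the scores to picture sizes."""
    activity = {name: score * DECAY for name, score in activity.items()}
    for name in events:
        activity[name] = activity.get(name, 0.0) + 1.0
    sizes = {
        name: int(MIN_SIZE + (MAX_SIZE - MIN_SIZE) * min(score, 5.0) / 5.0)
        for name, score in activity.items()
    }
    return activity, sizes

if __name__ == "__main__":
    activity = {}
    activity, sizes = update_constellation(activity, ["alice", "alice", "bob"])
    activity, sizes = update_constellation(activity, ["alice"])  # bob starts to recede
    print(sizes)
```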

This must be in real time then?

Yes. One of the things that we’re doing post-lecture that might even be an improvement over a traditional lecture is the idea of annotations. Normally, when a lecture is given and recorded, it’s over and done, but with our annotation concept, students can post comments and questions as annotations to different points in the lecture. Either the lecturer or other students can respond further to those questions. In this way an electronic or digital stored version of a lecture becomes a living, growing, organic document that gets richer over time — rather than a fleeting thing that happens once.
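
A minimal data model for such time-anchored, threaded annotations might look like the sketch below; the structure and names are invented, not taken from the actual system.

```python
# Minimal sketch of annotations anchored to points in a recorded lecture.
# Students or the lecturer attach comments to a timestamp and others reply,
# so the stored lecture accumulates discussion over time. Names are invented.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    author: str
    timestamp_sec: float            # where in the recording the note is anchored
    text: str
    replies: list = field(default_factory=list)

    def reply(self, author, text):
        self.replies.append((author, text))

class RecordedLecture:
    def __init__(self, title):
        self.title = title
        self.annotations = []

    def annotate(self, author, timestamp_sec, text):
        note = Annotation(author, timestamp_sec, text)
        self.annotations.append(note)
        return note

    def segment(self, start, end):
        """Annotations anchored within one segment of the lecture."""
        return [a for a in self.annotations if start <= a.timestamp_sec < end]

if __name__ == "__main__":
    lecture = RecordedLecture("Intro lecture")
    note = lecture.annotate("alice", 754.0, "Why does this step need the earlier lemma?")
    note.reply("lecturer", "Without it the argument at minute 12 breaks down.")
    print([a.text for a in lecture.segment(700, 800)])
```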

How do you allow for conversation among students as well as with the teacher?

We can easily implement a chat or an instant messaging system among groups of people. We deployed part of this at Bill Gates’s annual CEO conference. The CEOs could actually chat among themselves or post questions. It was really fun and useful, even during a live lecture where everybody was physically present.

In this educational research do you read a lot of cognitive science and educational theory as well as computer science?

In addition to our computer scientists and programmers, we have a psychologist who is interested in collaborative issues such as how people use technology in their work context. There is also a sociologist who is interested in the social dynamics — what it takes to have a good electronic community where good behavior is rewarded and bad behavior is discouraged, where people are rewarded for leadership and for contributing to the community.

Do you ever consider studies by those who are critical or negative about technologizing education? For example, are you familiar with Neil Postman’s books such as The End of Education and Technopoly: The Surrender of Culture to Technology?

I’m only slightly familiar with Postman’s work, but we do try to bring in a variety of viewpoints. Institutions often have an insular view of the world and desire only to hear the good news and not the criticism. We try to break that pattern. For example, we’ve brought in people who have advocated the open source model for software development. This is not our model, but we want to hear why they think it’s a positive thing.

What are some other examples of technology that you’re working on in the lab that point to the future of work?

One thing we are trying to address is the fact that software consumes a lot of human attention — it’s there on the screen and every piece of software, and even the most trivial error message, commands your attention. But human attention, unfortunately, does not follow the trajectory of Moore’s Law! It is a precious and limited resource. Could we find ways to require it much more sparingly?

Another issue is that all of us are now inundated and overwhelmed with incredible amounts of information such as e-mail, spam, voice mail, stock alerts, and news alerts. Our project is looking at ways of assigning importance or priority to what comes at you. For example, I use my calendar fairly extensively so my machine actually knows that I’m in this meeting right now and not staring at my monitor. If something really important came in, like a phone call from the family, I would want to be buzzed on my cell phone, but I don’t want random messages coming through bothering me. Some messages are urgent only for a very short period of time. If I’m about to run off and catch my plane and my flight is delayed — that information has high value but only for a fairly short period of time. By the time I’m at the airport, it’s too late for it to be helpful. So, there are useful things that could result from this notion of importance and urgency filtering the information flooding at you.
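
A toy version of that importance-and-urgency filter might score each incoming message against the user’s current context and against how quickly the information loses value, roughly as in the sketch below. All categories, weights, and thresholds are invented for illustration; this is not the prioritization system Ling describes.

```python
# Toy sketch of importance/urgency filtering for incoming notifications.
# Categories, weights, half-lives, and thresholds are invented; this is not
# the Microsoft Research prioritization work described in the interview.
import time

IMPORTANCE = {"family_call": 1.0, "flight_delay": 0.9, "stock_alert": 0.4, "spam": 0.0}

def urgency(kind, age_sec):
    """Some information is valuable only briefly (a delayed flight); other
    information barely decays at all. Modeled here as exponential decay."""
    half_life_sec = {"flight_delay": 15 * 60, "stock_alert": 60 * 60}.get(kind, 8 * 3600)
    return 0.5 ** (age_sec / half_life_sec)

def should_interrupt(kind, sent_at, in_meeting, now=None):
    """Buzz the cell phone only when importance times urgency clears the bar,
    and raise the bar while the calendar says the user is in a meeting."""
    now = time.time() if now is None else now
    score = IMPORTANCE.get(kind, 0.2) * urgency(kind, now - sent_at)
    threshold = 0.7 if in_meeting else 0.3
    return score >= threshold

if __name__ == "__main__":
    now = time.time()
    print(should_interrupt("family_call", now, in_meeting=True, now=now))              # True
    print(should_interrupt("flight_delay", now - 3 * 3600, in_meeting=True, now=now))  # too late: False
    print(should_interrupt("spam", now, in_meeting=False, now=now))                    # False
```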

Will this research result ultimately in new kinds of products?

One could think of it as a new set of services on top of today’s products — services that a good personal assistant might have provided.

Do you see any danger that these new services might control your affairs to the extent of making you their servant?

In the history of technology there are lots of unintended and unpredictable consequences for every invention. Obviously, in the system I just described, there are sensitive privacy issues. If the computer knows where I am and what I’m doing and I would rather other people didn’t know, then the system must ensure that such information is kept private. In this case I don’t think we’re endowing the computer with so much power that it could take over our lives. This is not like some of the scenarios that Bill Joy has mentioned.

How do you respond to Bill Joy’s pessimistic view of the technological future?

There are certainly lots of dangers. Genetics is a particularly dangerous area that the industry has entered. Cloning, research with fetal stem cells, and growing replacement organs — these are very, very tricky moral issues. On the information technology side as well, there will be issues emerging over time. What I don’t quite buy is how quickly Bill Joy thinks a crisis is going to arrive. I don’t see anything very dramatic developing on the information technology side, no sort of “end of the species” thing, simply because our abilities to build even fairly rudimentary pieces of software are really quite limited right now. Think about how many MIPS we have today compared to even a simple earthworm. We still can’t do the most fundamental things. The things that nature does easiest, we do the worst. The things that nature doesn’t do too well, like adding lists of numbers, we do very well. Pattern recognition and learning are still at a very awkward stage in software development.

Does a technology-creating company have any responsibility for possible threats to privacy in its products, or other challenges like that? Is that simply the user’s problem to deal with?

Companies definitely have a responsibility. Government is also insisting on this. The European Union has fairly strict laws governing privacy and related issues. We are approaching it in terms of knowledge and consent. The consumer must know when information is being collected, where it’s going to be stored, and how it’s going to be used. The consumer must consent to each step. That’s the sort of framework we’re thinking about as we try to come up with an appropriate user interface.

These are issues that society needs to debate. Exactly what levels of privacy are important? Quite independently of the internet and the web, we all use credit cards and these records are captured, sold, and resold. One of the problems is that our legal and political systems react on a time scale much slower than the pace at which technology is moving forward. Sometimes, in a rush to regulate and pass laws, mistakes are made that have their own long-lasting consequences.

David Brin has argued that historically people didn’t have all that much privacy and that, practically speaking, worse things happen with too much privacy than with too much transparency.

In traditional villages everybody knew everything.

Consent may be illusory anyway if you really can’t know what is going to happen with your information. You can say we’re going to sell the mailing list or not, but what then happens with that information is beyond control.

We were having a meeting just yesterday to talk about privacy. Microsoft now has a director of corporate privacy, a central person to coordinate initiatives around the company and make sure they are consistent. It was pointed out that with medical records, for example, encryption only protects them as long as they are unused. As soon as you give somebody permission to look at your record, it can be printed, copied, and distributed and you don’t really know what happens to it. Encryption is only one step toward solving the problem. Some researchers are thinking of ways of fingerprinting information so that after it moves you can figure out where it came from — but that’s very difficult to do with text. So, even with a very strong technology like encryption there are still privacy concerns.

How does the research agenda get set in a place like this? Is it driven by the curiosity of the scientist, the quest to find new knowledge? By the business and marketing side?

Somewhere in the middle. Since we are an industrial research lab it is important for us to work on things that have a positive impact on Microsoft products. On the other hand, we are not tied to or funded by any particular product strategy or group in the company. Recently Microsoft embarked on a new strategy which we call .NET [“dot net”]. Microsoft Research was there to help define .NET because we bring a lot of information about what’s possible, what new technologies are on the horizon, and so on. So, we help set strategy for the company.

We’re also working with the product groups to take research ideas and incorporate them into Microsoft’s products. Where do those ideas come from? Many come from the bottom up, from the minds, expertise, and interests of our researchers. The external research and product communities, and the activities among start-ups and the venture capital world also influence us. The amount of stuff going on in technology outside of Microsoft is tremendous. All of this, together with Microsoft’s business strategies, influences the particular directions of our research.

Does Microsoft have a corporate mission to improve people’s lives that determines or affects your agenda in any way?

Before the PC revolution, computers were controlled by information technology organizations; mainframes and applications grew at a fairly slow rate. The software that sparked the PC revolution — the word processor, the spreadsheet, tools for drawing and diagramming or for personal finance — all grew out of trying to make people’s lives more productive and to eliminate drudgery. It was very democratizing in that it aimed to give individuals control of their own environment and their own machines, to set up and install new applications when they wanted them. Individuals could now do what they wanted. That spirit has very much been behind a company like Microsoft.

Bill Gates is such a powerful personality and such a genius. To what extent does he get personally involved in Microsoft Research?

We have periodic meetings with Bill where he reviews projects, makes suggestions, and gives us new ideas for various projects. Also, twice a year, Bill does “think weeks” where he actually solicits documents, prototype software, and so on, and then goes off to read, look at, play with, and think about this stuff. Afterward he provides a lot of commentary back and that turns out to be a very interesting interaction.

Microsoft is reputed to be a very competitive, critical, sometimes even brutal place to work. But doesn’t your research require and even thrive on teamwork and cooperation? How do competition and cooperation work here?

Both research and product development require a lot of teamwork. Things are so complicated that a group of people not working in sync will have a lot of trouble. The technology world has had a very intense culture of working hard and being very competitive. In research groups like mine, we remind people that it’s important to collaborate, especially in interdisciplinary research, which is so important and yet so difficult. It’s amazing how each sub-discipline has evolved its own terminology and way of thinking and working. My role is often to help these different sub-cultures communicate.

What are the major ethical concerns that arise in running a research lab?

I’m not so sure the research lab has different ethical requirements or standards than normal everyday life. Intellectual honesty is something that’s a very important part of doing good research. How you deal with your colleagues is fundamental.

When people are always connected so they can work from home, on vacation, all the time, how do we set boundaries and maintain a space for private, non-work life?

Those boundaries are important. Companies must recognize that people need some balance between their work life and their “life-life.” Even in the relatively short amount of time I’ve been at Microsoft, we have developed lots of programs to encourage people to set some boundaries. For example, the senior technical staff gets eight weeks of sabbatical every seven years to go off and recharge, try something new, and not worry about work for a while. We’ve also tried to increase the amount of time that people are able to take for parental or child care and that sort of thing.

I think the company reflects the average age of its people in some ways. When a lot of these new technology companies get started the employees are often fresh out of college, just moved to the area, and don’t know anybody in town. Your colleagues become your friends. You play basketball in the hallway. It really isn’t that clear where work ends and play begins.

When people work 110 hours a week, that may include shooting some hoops?

Exactly. Obviously 110 hours a week is quite an exaggeration, but yes, a work week includes pizza nights, movies, video games, and basketball in the hallways. They work very hard, for sure, but it’s not so clear exactly how many minutes were spent doing “work-work” or other things. As the average employee age has gone up, people have families and children and outside commitments and the company has tried to recognize that fact.

Still, it is interesting that back in the 50s and 60s our vision of the year 2000 was always that technology would ease our lives. A four-day work week was something that everybody was sure would happen by now. But here we are with everybody working far more hours than I remember my father ever working and I don’t think anybody expected that.

At a Microsoft forum a couple years ago someone asked about the benefit of some new technology. When the response was “more leisure time” everyone burst out laughing.

Once upon a time, we really believed that, but I wonder how much of our intensity is a result of technology. Globalization and other forces must play into this as well.
