Ed Lazowska: The Real Information Revolution Is Yet to Come

Ed Lazowska holds the Bill & Melinda Gates chair in computer science in the department of computer science and engineering at the University of Washington. Lazowska received his A.B. from Brown University in 1972 and his Ph.D. from the University of Toronto in 1977. He has been at the University of Washington since that time. He chaired the UW department of computer science and engineering from 1993 to 2001. Under his leadership, the department enhanced its reputation as one of the top ten computer science research programs in the nation, and received the inaugural University of Washington Brotman Award for Instructional Excellence.

Lazowska’s research and teaching concern the design, implementation, and analysis of high-performance computing and communication systems. He is a member of the National Academy of Engineering, and a fellow of the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and the American Association for the Advancement of Science. He was selected to deliver the 1996 University of Washington Annual Faculty Lecture, and to receive the 1998 University of Washington Outstanding Public Service Award.

Lazowska is a member of the board of directors of the Computing Research Association, and recently served on the National Research Council’s Computer Science and Telecommunications Board, and on the NRC study committees on Improving Learning with Information Technology and on Science and Technology for Countering Terrorism. He is a member of the Microsoft Research Technical Advisory Board, serves as a board member or technical advisor to a number of high-tech companies and venture firms, and is a Trustee of Lakeside School, a co-educational independent school in Seattle.

◊ ◊ ◊ ◊ ◊

Ethix: You are a true technologist and you have watched this incredible information technology revolution over the last 25 years and yet you have said that the real information revolution is yet to come. What do you see coming up?

Edward W. Lazowska: This is difficult. There is a long litany of failed prognoses in information technology — Tom Watson saying the world would need at most five computers, Ken Olsen saying no one would want a computer in their home, and Bill Gates reputedly saying that 640K should be enough for anyone. Just to state the obvious, over the next five or ten years we are going to see digital devices that we don’t think of as computers everywhere in our lives. Intel calls that pervasive computing. Xerox’s Mark Weiser called it ubiquitous computing. The important thing is that this computing cannot be a pain in the neck. It has to be something that makes our lives better, rather than making our lives more annoying. Today your compact disc player and your cell phone are examples of computer devices that actually make your life better; we don’t think of them as computers.

Well, I would disagree. Cell phones often make life worse. It is a mixed bag at best.

Other people’s cell phones make my life worse. Mine makes my life better.

I agree with that.

But I think that today …

(At this very moment, as if on cue, Al Erisman’s cell phone started ringing and disrupted our conversation until he found it and silenced it).

… as I was saying … today 98 percent of microprocessors go into things other than what you think of as computers and that trend will increase. Automobiles already have dozens of computers in them and they run a lot better than they did when I was learning to drive in the 1960s.

So, where do the processor cycles go? There is no end in sight! There is so much progress yet to be made in things like understanding speech, image processing that understands the state of the user and his or her frustration, and real-world capture. The fact is that we have enough storage these days to represent extraordinarily detailed models of the world, so how do we capture, manipulate, and display those models? You can now come very close, physically and financially, to affording enough disk space to save a digital record of your entire life, though we have no way to index, access, and search it. All of these things will consume CPU cycles ad infinitum. We are still in the baby stages of these devices and what they can do for us.
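To put that storage claim in rough numbers, here is a minimal back-of-the-envelope sketch in Java; the bitrate and duration figures below are illustrative assumptions, not numbers from the interview.

    // Back-of-the-envelope estimate of storing an audio record of a lifetime.
    // All figures below are illustrative assumptions, not data from the interview.
    public class LifelogStorageEstimate {
        public static void main(String[] args) {
            double audioKbps = 16.0;    // heavily compressed speech audio
            double hoursPerDay = 16.0;  // waking hours captured each day
            double years = 80.0;        // a full lifetime

            double bytesPerSecond = audioKbps * 1000.0 / 8.0;
            double secondsCaptured = hoursPerDay * 3600.0 * 365.0 * years;
            double terabytes = bytesPerSecond * secondsCaptured / 1e12;

            // Prints roughly 3.4 TB: a handful of commodity disks, which is why
            // the hard problem is indexing and searching the record, not storing it.
            System.out.printf("Lifetime audio record: about %.1f TB%n", terabytes);
        }
    }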

Work is being done on a number of technologies to assist individuals with Alzheimer’s. We put a lot of effort into physical accommodation of seniors but much less into mental accommodation. Suppose that you could determine when someone is wandering around aimlessly — perhaps even determine what they were trying to do based on their patterns and movements.

Or imagine that your own health was monitored in a meaningful, private way and that if something went wrong with your systems, your doctor would actually find out without your doing anything about it.

Imagine that you had home security that actually increased your peace of mind rather than being a pain in the neck as it often is right now. Imagine a non-intrusive security system that actually made people of all ages, particularly seniors, feel comfortable in their homes. Imagine that there was technology that allowed you to communicate with your loved ones better. None of this is rocket science in itself. The challenge is making it a seamless part of the fabric of our lives rather than a pain in the neck.

What is happening with reliability and self-healing systems? When technology doesn’t work all the time, it creates a high level of frustration.

Mike Schroeder made a famous statement that you know you are working in a distributed computing environment when you can’t get anything done because of the failure of a computer that you didn’t know existed. We are building these enormously large, enormously complex systems, and the management overhead of those systems increases relentlessly. I see this at home. Think about how much time all of us spend in our houses as computer administrators. The question is how we move toward systems that are more reliable and require virtually zero administration.

I think in a non-obvious way Moore’s Law helps us here. As transistor density increases, we can use some of those additional transistors — some of that additional computing power — to increase reliability. A very mundane example: ten or fifteen years ago you never ran a program with array bounds checking enabled — that was only for debugging. You couldn’t afford to have bounds checking on during execution. Now you don’t even think about it. Similarly, for decades, automatic garbage collection of memory was a joke. Real men did dynamic storage allocation. Of course, real men got it wrong, and as a result you were constantly having memory leaks and thus system crashes. These days if you look at Java and C# and .NET and Microsoft’s Common Language Runtime, garbage collection is an integral part of that, and memory management errors and crashes are a thing of the past. So there are many ways in which we are going to be able to spend those additional transistors to help create more reliable single systems and collections of systems.
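As a small concrete illustration of those two safety nets, here is a minimal Java sketch; the class and variable names are invented for this example. The runtime’s bounds check turns a silent memory corruption into a catchable error, and the garbage collector reclaims allocations that would otherwise have to be freed by hand.

    // Minimal illustration of runtime bounds checking and garbage collection.
    // The names here are invented for this sketch.
    import java.util.ArrayList;
    import java.util.List;

    public class SafetyNets {
        public static void main(String[] args) {
            int[] buffer = new int[4];
            try {
                // In an unchecked language this out-of-bounds write could silently
                // corrupt adjacent memory; here the runtime checks every access.
                buffer[4] = 42;
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Bounds check caught: " + e);
            }

            // With manual storage management, forgetting to free these blocks
            // would be a memory leak. The garbage collector reclaims them once
            // no references remain, with no explicit free() call.
            for (int i = 0; i < 1000; i++) {
                List<byte[]> scratch = new ArrayList<>();
                scratch.add(new byte[64 * 1024]);
            }
            System.out.println("Done; no manual deallocation was required.");
        }
    }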

But doesn’t this add to the complexity of the overall system and therefore produce other unreliability? With more going on in the system there is more that could break down.

An analogy, probably too simplistic, is that your body is more complex because it can heal itself. Sometimes those feedback systems go awry. You get scabs or scars where you didn’t want them, or tumors, but fundamentally our systems heal themselves in remarkable ways. Nobody has any idea how to actually do this in computer systems but it is an example of a system that is both more complex but also more reliable.

Researcher Responsibilities for Negative Uses of Technology?

Are researchers responsible for possible misuses or unintended negative impacts of things they create? Edward Tenner’s book, Why Things Bite Back: Technology and the Revenge of Unintended Consequences, catalogs at length the unintended negative consequences of even our best technologies.

We think about that a lot. I don’t think that we are blind to the ethical dilemmas. Every technology has both positive and negative consequences. But blasting us all back to horses and buggies is not the solution to technology’s ills. One important example today is data mining. Data mining has a number of counter-terrorism applications, some of which involve domestic surveillance. There are lots of important non-malevolent applications of data mining, too. Astronomy these days is in many ways data mining. Think about the Sloan Digital Sky Survey; all of the data is there or will be there in repositories, and your competitive advantage is whether you can extract something interesting from that data. Data mining can also detect buying patterns. You are probably happy to see it used to detect credit card fraud. Do we also want it used to detect patterns of motion or behavior around the country?

A number of us have been looking recently at privacy technologies to complement security technologies. There are technological approaches to improve and protect privacy as well as to detect patterns of behavior. I think we need to have both. The world faces a terrorist threat these days that we must tackle for our own survival but at the same time we can’t sacrifice basic individual liberties, which are the foundation of our democracy.

Computer Technology in Business

How would you assess the impact of computer technology on the business world?

A recent article in The New York Times Magazine reflected on the dot-com bubble of the late 90s. It argued that many of these folks really believed in what they were doing, with an almost religious fervor. Their dreams and projects did not always pan out as they hoped, but they nonetheless drove enormous change in the country. Jeff Bezos recalled recently that in the mid-90s Amazon.com was nothing more than a bunch of servers in a garage in Bellevue. The best future anyone could imagine was that Barnes and Noble would buy it out some day and fly it into the ground so they would not have to compete with it. But last year $2.5 billion worth of books were bought online — an unbelievable change in the country.

Amazon is going through some very important changes even now in developing businesses where they no longer own warehouses full of inventory but act as an intermediary, almost like eBay. This business model takes advantage of the Internet, doesn’t invest in inventory, and seems like it will work.

Amazon.com has put an enormous amount of effort into data interchange and data integration between their front end and these merchants’ back ends. A shopper on an Amazon-like site expects to know whether an item is in stock and whether an order has shipped. A huge amount of their work has gone into defining specific interfaces and helping merchants improve their inventory management and other computer systems so that they could deal with on-line customers.
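To make the integration problem concrete, here is a minimal, hypothetical Java sketch of the kind of inventory and shipping interface a storefront could ask each merchant’s back end to implement. The interface, class names, and data below are invented for illustration; they are not Amazon’s actual interfaces.

    // Hypothetical sketch of a storefront-to-merchant integration interface.
    // Names and data are invented; this is not Amazon's actual API.
    import java.util.HashMap;
    import java.util.Map;

    public class MerchantIntegrationSketch {

        interface MerchantInventoryService {
            /** Units the merchant can ship immediately for a given SKU. */
            int unitsInStock(String sku);

            /** Carrier tracking number once an order has shipped, or null if not yet shipped. */
            String trackingNumber(String orderId);
        }

        // Toy in-memory stand-in for a merchant's back-end inventory system.
        static class ToyMerchant implements MerchantInventoryService {
            private final Map<String, Integer> stock = new HashMap<>();
            private final Map<String, String> shipments = new HashMap<>();

            ToyMerchant() {
                stock.put("CAMERA-123", 7);
                shipments.put("ORDER-42", "TRACK-0001");
            }

            public int unitsInStock(String sku) {
                return stock.getOrDefault(sku, 0);
            }

            public String trackingNumber(String orderId) {
                return shipments.get(orderId); // null until the order ships
            }
        }

        public static void main(String[] args) {
            MerchantInventoryService merchant = new ToyMerchant();
            System.out.println("In stock: " + merchant.unitsInStock("CAMERA-123"));
            System.out.println("Tracking: " + merchant.trackingNumber("ORDER-42"));
        }
    }

The point of such a contract is that the storefront can answer “is it in stock?” and “has it shipped?” uniformly across many different merchants’ back-end systems.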

I recently bought a Kodak digital camera through Dell that was defective from day one and it has been incredibly frustrating and time-consuming to try to beat my way through the phone menus to get any help. It still isn’t being repaired and at this point I wish I had just bought it at a local store. And I should note that my other purchases at Dell have been great experiences.

Dell has been highly innovative and successful in putting personal computer configuration, manufacture, and delivery online. But there were some missteps when Dell started to make it possible for you to purchase goods from other merchants through their website. You have just described an example. Internally, Dell has unbelievable back-end automation. You always know exactly where your Dell computer is on their assembly line. Then Dell started allowing you to order HP printers, Kodak cameras, and other stuff like that. When I ordered an HP color laser printer from Dell a few years ago, they had no clue if it was in stock at the HP warehouse or not, or when it was put on a truck from the HP warehouse, or where the truck was. Customers have a set of expectations, particularly from a Dell or an Amazon, and Dell was not able to fulfill those expectations when they started dealing with third-party merchants. That’s the problem that Amazon has worked so hard to avoid. It is not just a business shift; it is a really significant technology shift.

I also wonder what the statistics are on banking customer loyalty as electronic banking replaces people. After thirty years, I moved all my banking from Bank of America to Washington Mutual because they have branches with real people you can actually talk to — as well as all the conveniences of ATMs — and in addition they are cheaper.

To each his own. I probably haven’t been inside a bank in three years. We have two kids in college, and it’s a real convenience to be able to transfer money to their accounts securely over the web. (Of course, we’re looking forward to the day when we can stop doing this!)

Countering Terrorism With Technology?

You worked recently on a government-sponsored project on how science and technology can counter terrorism. What is the role of computer technology here?

It’s my view that information technology is more essential to terrorism and counter-terrorism than any other technology. First, computer systems are a point of vulnerability. Second, computer systems are a potential means of detecting terrorist activities. Third, as communication systems, computer systems and networks are very important. Terrorists thrive on fear, uncertainty, doubt, and misinformation.

Preventing reliable information from being disseminated exacerbates the impact of a terrorist act; facilitating good information sharing undermines these negative impacts.

Fourth, and perhaps most importantly, computer systems now control and monitor every element of our nation’s critical infrastructure: the electric power grid, the air traffic control grid, the telecommunications grid, the financial grid. It would be very hard to do truly catastrophic damage to the Internet because the individual components are not that expensive and are easily replaced. But one way to do catastrophic damage to the power grid is by attacking the control and monitoring computers. Our project focused on the vulnerability of real-time control systems and the need to make those systems more secure. The pervasive and positive role that computers play in every aspect of our lives and every aspect of our economy creates vulnerabilities as well.

The Researcher’s Social and Cultural Context

What do you do to add texture and background to your understanding of the place of computers and information technology in human life, history, culture, and so on?

Not enough! I read the paper. I try to be an active citizen of the city and country in which I live. I try to have active interchange with the business community and the political leadership in this region. Certainly I read but I don’t have any silver bullet.

I am especially concerned with calling attention to the fact that the nation is not investing sufficiently in computing research. The importance of this field has grown enormously over the past few decades, but the level of investment has not grown proportionately. The federal investment portfolio is becoming tremendously unbalanced. That is not an argument for shifting resources but rather for adding resources. So much of the future progress in the biomedical sciences, for example, depends on progress in engineering and the physical sciences.

Here in the State of Washington, we are failing to make the choices that will leave our kids the kind of region that they need. Our transportation system is a mess. Our higher education system is a mess as well. We rank 48th in the nation in public bachelor’s-degree capacity per capita. Only two states are behind us. But we rank 5th in the nation in employment of people with recent bachelor’s degrees in science and engineering and 6th in the nation in employment of people with recent master’s degrees in science and engineering. Our economy is creating jobs for which our education system is not preparing our kids. It’s an enormous issue for this state and we don’t have any discernible plan. So that’s where I have been trying to put my efforts: waking people up to the realization that we are sacrificing the next generation in order to make our own lives easier in the short term.

Why the Delays in Deploying New Technologies?

On a visit to a doctor recently I noticed shelves of paper folders containing medical records. Even 1980s technology could help that office a great deal. What is the reason for the long delays between the availability of technology and its use in such obvious situations?

There has been enormous progress in the use of medical technology for diagnosis and treatment. I think the folders are becoming much less common in major research medical centers than they used to be.

When I broke a shoulder skiing and came to the UW Medical Center I had to fill out seven sheets of paper which included my Social Security number five times.

I don’t know why the basic business is still so forms-based. You would think there would be cost advantages to using computers. Maybe the up-front investment gets killed by cost-containment mandates. Or maybe it is the classic “too busy building the house to build any tools.”

The company doesn’t want to change its ways and ignores the issue until a competitor forces their hand.

There are also enormous privacy issues related to medical records. One thing that protected our privacy in the past was simply that it was so laborious to assemble a complete picture of us when it resided in lots of little file folders all over the place. In some sense it is a blessing that we have not gone completely to on-line record-keeping, since we have not yet developed and deployed privacy technologies and policies that give us appropriate control over our personal information.

Medical records are just an illustration of the lag issue I raise. A small construction firm could also use even a ten-year-old technology to manage a project and save enormous amounts of time, but they prefer to do it the old way. Boeing had departments that could have been helped enormously but some kind of inertia and resistance to change decides things. Good new technology is rejected by the “immune system” of the company.

What is on the critical path of a particular company? A basic tenet of the health systems in America is that the patient’s time is worth zero. Having you fill out the same forms seven times with your Social Security number doesn’t bother them at all. It bothers you like crazy because you value your time.

Twenty years ago a graphics guy here, Tony DeRose, spent a half-year at Boeing, and it opened his eyes totally. He was a world-class graphics and computer aided geometric design guy, but he realized that while the research problems he was working on were really interesting and intellectually important they were not on the critical path to computer-aided geometric design as practiced by the Boeings and GMs of the world. The technology they were using was a decade behind but he discovered, after a week or two on the job, that in terms of the bottleneck tasks of designing an airplane, he could offer a factor of 10 improvement on something that represented to them one percent of the problem.

So one aspect of the lag time in adopting technology is certainly some form of intransigence, the immune system you describe. But another factor is that they have a notion of what their critical path items are and what their costs are. Almost every business is competitive, and if people can actually find a way to cut their costs, they will. But something that looms large to you and me may not loom large to them.

Telecommunications Lags: Why?

On the telecommunications side, an incredible growth in bandwidth has actually caused a glut in the telecommunications industry. Yet, on the computing side, the tremendous growth of computing power seems to have been absorbed by the users. Why is there this difference?

We still have a “last-mile” problem in this country and around the world and that, in some sense, keeps us from getting to the glut. We have a growth in backbone bandwidth and somehow you have to get to that backbone.

Wasn’t a company like Terabeam going to address the last-mile issue with a wireless connection for the last mile?

Yes, but Terabeam’s free-space optics solution, in its present incarnation, is point-to-point. That means it’s a tremendously cost-effective way to hook up a business that’s off the fiber right-of-way, or whose fiber has been destroyed by a disaster such as 9/11, but not for hooking up a neighborhood of homes. Terabeam also has a new RF technology which is one-to-many rather than point-to-point. There might be an opportunity for new business models here as well as new technology. A number of cities, including Seattle and Portland, have community wireless networks, as an example. An interesting question is whether you could have networks that grow organically and provide at least say 10 megabit or 50 megabit connectivity to a big pipe for a cluster of people. There are apartment buildings in New York with essentially their own ISPs. There is lots of room for both technological and business innovation here. I cannot imagine the technological solution being anything other than wireless.

We do have a serious chicken-and-egg problem. Very few consumers, relatively, have broadband access. Thus, there isn’t much broadband content. Web sites and media services are stuck with the least common denominator. Today the majority of users have 56-kilobit modems that actually deliver 30 kilobits or something like that, so this is what web sites are geared for.

Internet II

Internet II, the next generation of Internet, will be a great leap forward over the standard Internet that we now know. When will this reach the public? And will it have a comparable impact to the arrival of the first Internet in the early and mid-90s?

I believe we are at the stage now in the Internet where we are going to see progressive enhancements rather than another great leap. The Internet began in the 1960s and doubled and doubled and doubled every few months, below everybody’s radar screen. Then suddenly: Boom! The Internet II technologies — vastly greater bandwidth, the ability to control quality of service, sets of new services — will be much more progressive and will make their way into the commodity Internet over time, not in one great step. Part of this is due to the last-mile problem once again. Businesses and universities have pretty good connectivity these days — and individuals have pretty lousy connectivity. Since people are the market for cool services, that means cool services don’t really exist, because you have to have a way to get them to people.

Educational Technology

Your praise of people-to-people teaching and mentoring seems kind of ironic in view of the ways information technologies try to replace human with virtual class experiences, immediate with distance education, human operators with phone mail menus, and so on. Is information technology a friend or foe to the kind of quality education you have described?

What the technology does is allow both faculty and students to spend their time where it matters most. Standing up in front of a class of 250 freshmen lecturing about introductory programming is not a very good use of their time or mine. I like the quip that “a lecture is a way to get material from the instructor’s notes into the students’ notes without it passing through either brain.” The question is how to spend more time in a mentoring relationship that actually pays off. Can information technology provide a more efficient and effective way to teach and learn the routine material — the syntax of a programming language, the mechanics of a development environment? Could that free up a lot of time for the sort of mentoring that I enjoyed as an undergraduate? This would be a home run for everyone.

No doubt there are appropriate uses of various learning technologies but aren’t there studies that suggest a significant difference in the experience of sitting in a room full of people with a live lecturer — and sitting before a screen? Do we know what the trade-off is?

The truth is we don’t know. For a hundred years people have claimed that this or that technology would revolutionize education, particularly in K-12. Thomas Edison said it for movies, and it was said later for radio and for television. Each of those predictions was a tremendous flop. And today, I don’t see much positive benefit to student learning from all the money we’ve spent putting computers in the K-12 classroom — mastering math and science and English and history is the issue, not mastering PowerPoint!

So that’s the question. Can information technology help teachers teach and students learn history and English and math and science? There are clear areas where it can. There is no reason that simulation shouldn’t play as big a role in education as it does in the practice of science. A lot of work in solving flow equations for building aircraft that used to be done in wind tunnels is now done with computer simulations. A lot of chemistry is done by simulation. There is no reason that simulation and visualization can’t play a constructive role in education, to take just one example.

Another possible advantage is that software can be adapted to particular learning styles and background knowledge. We know that one-on-one adaptive tutoring really works in education, but it’s not scalable — not enough tutors, not enough dollars. Supplementing great teachers with adaptive computer tutoring systems could be a win. It would not be as good as having your own private tutor but unfortunately we cannot provide a highly skilled private tutor for every kid today.

There has also been a lot of progress in the past decade or two in understanding how people learn. If we could couple educational technology advances with advances in the learning sciences, we would really be on the way. An analogy might be what happened about fifteen or twenty years ago when people like Lee Hood brought biological science together with the technology of the gene sequencing machine. That brought a huge transformation to the biological sciences. There is an opportunity for that kind of coupling in education. The fact is that K-12 education across this country is in deep trouble. I don’t think anyone has a silver bullet but technology could play a role in getting us out of this pickle.

Who Sets the Research Agenda?

What drives the research agenda for you, your colleagues, and your graduate students? How do you decide what to work on?

I think of us as doing fundamental research that is strategically motivated. “Fundamental” means we are trying to look ten years out. We are not trying to do something that is going to pay off in a couple of years. “Strategically motivated” means there are well-motivated problems that we are trying to solve. That doesn’t mean we necessarily understand where the innovations are going to pay off, or that we are right. The history of computing shows that the unanticipated benefits of an advance often exceed the anticipated benefits. But we’re driving towards some goal.

That’s true in other areas too. Teflon, for example, came out of the space program.

Yeah, and Tang — what would morning be without it? In computer science, networking was created for sharing large-scale machines back in the 1970s. Nobody anticipated e-mail. That wasn’t part of what folks were trying to do. There are tons of examples like that. So we try to think about problems that it would be great to solve, understanding that, firstly, sometimes we won’t succeed — that’s the nature of research — and other times, we’ll succeed but the solution may have unanticipated benefits that are even greater than the ones we anticipated.

What are some particularly exciting projects going on in your department today?

Let me give you two examples — but there are many more. Chris Diorio is a young UW faculty member who studies “neurally-inspired computing.” Think about your brain. It consumes only about 50 watts of power. It cycles at roughly 100 Hz. It’s great at recognizing images and understanding language, and lousy at multiplying big numbers together. It can sustain significant damage and keep on working. Very different from a digital microprocessor!

Chris’s research, begun when he was a graduate student at Caltech working with the legendary Carver Mead, involves trying to understand how Mother Nature computes, and how to build similar computing structures in silicon. At UW, Chris’s work has taken two directions. First, he’s working with world-class zoologists and marine biologists to instrument moths and sea slugs in order to understand how they “compute” to control their motion — a critical question for the biological scientists as well as for Chris. Second, he and Carver have founded a Seattle startup company, Impinj, that is using the early results of Chris’s research to build self-stabilizing analog circuitry in a standard silicon CMOS process, with the potential for dramatic power savings and performance improvements.

Second, a number of people here and around the world are working on mobile robotics. Several years ago the robotics research community got together and asked what characteristics next-generation robots would need to have. They decided that there would be teams of autonomous robots with a variety of specialized functions and capabilities, communicating and cooperating in real-world environments that included teams of adversaries. Then they discussed what the sexiest imaginable application might be that had these characteristics. They came up with “robot soccer,” and every August for four years now there has been an international tournament called RoboCup. It was held in Seattle two summers ago and in Japan just this last summer.

There are about eight different leagues. The league in which UW competes involves students teaching those cute little Sony AIBO dog robots how to play soccer. There are teams of five dogs on a regulation field. There are rules. The participants — the dogs — are completely autonomous, programmed to be goalies or wingmen or whatever. The optics and the processor are on board. The program is downloaded into them. The objective here is not really to teach AIBO dogs to play soccer — but to invent the science that will underpin a new generation of robots that might be useful in your home, or in 9/11-type disaster situations where you couldn’t use people.

To what extent does business or government control your research agenda?

Computer science research costs money so by and large one does research where the money is. However, multiple government agencies, with a wide variety of “agendas,” support computing research, so while the overall level of investment is still far too low, the diversity of interests is quite high. If you have a track record of success and a promising idea, you are likely to be able to obtain at least modest support.

What’s the role of companies?

Companies, in general, have products to design and ship, and the short-term competitive pressures on them become more severe all the time. The extent to which companies can invest in anything more than one product cycle out is decreasing. Further, in the electronics and computer business, the length of a product cycle is shrinking.

Thirty years ago in information technology, three companies, IBM, AT&T, and Xerox, generated a large part of the information technology sector’s contribution to the gross domestic product, and each of them had a great research lab that was focused more than one product cycle out. Over the past thirty years, the size of the “information technology GDP pie” has gotten far larger. But of the companies that have blossomed to grow that pie, such as Dell, Oracle, and Cisco, the majority invest little in looking more than one product cycle out. Cisco’s R&D has become M&A: mostly they buy stuff, rather than inventing it.

Microsoft has been one really notable exception to this trend. Since they created Microsoft Research in 1991, Microsoft has been doing very important research that’s looking out 5, 10, 15 years, and they deserve enormous credit for that. Microsoft’s fundamental research consumes only a few percent of their total $5.2 billion R&D budget, but that still amounts to several hundred million dollars a year just on Microsoft Research, which is real money. Cisco, Oracle and Dell are not doing that. Intel is starting to do it now in a serious way.

So the fundamental role of companies is to take ideas out of the “idea storehouse” and integrate them into great products that will succeed — an exceedingly difficult and risky undertaking. Now, how are ideas getting into that storehouse? The record is pretty clear that, by and large, universities and, to a far lesser extent, companies place ideas there, and companies integrate these ideas and bring new products to market. Truly innovative products — those that represent entirely new categories — are more often brought to market by venture-funded startups than by mature companies. Those start-ups either fail or succeed; if they succeed they either remain independent or they get acquired. That’s the way it works.

We have an ecology that works really well and consists of existing companies, universities, the federal research enterprise, and the VCs. The federal government has agencies like the Defense Advanced Research Projects Agency (DARPA), whose job is to make sure that our military has the technology required to be competitive, the National Science Foundation (NSF), whose job is to support the broad base of science and engineering, and the National Institutes of Health (NIH), whose job is to do biomedical research. Having NSF and DARPA together in support of our computer science field has made an enormous difference. DARPA has invested with tremendous vision in those areas that are important to military preparedness. NSF has covered the rest of the waterfront. So it is the government that takes the longer-range view with a broad portfolio of things.

What about research projects that would address some of the social or third world issues?

A combination of NSF and private foundation funds would usually support work like that. The NSF has an enormous investment in education, for example. In terms of third world activities, NSF’s digital library initiative has obvious relevance — making the world’s knowledge universally available. We have a student who spent the last two years in India working on information technologies for the third world. Here in Seattle, PATH, the Program for Appropriate Technology in Health, is an organization through which the Gates Foundation outsources its vaccination program, but in addition, for 25 years PATH has been about re-engineering first world health technologies for third world economies. Their funding comes from the World Health Organization and the Agency for International Development. They have re-engineered a pregnancy test that costs about $12 here into a 12-cent test for sub-Saharan Africa. They re-engineered the old air-blaster vaccination gun used during the Korean War to create a practical, safe, inexpensive way to vaccinate whole villages full of people without propagating AIDS downstream. They create financial incentives for drug companies to do the R&D on vaccines for the third world. Anyway, it’s a great example.

So, I think that money does exist for all sorts of things. The best technology for addressing the digital divide is the technology that works for everybody. That is, good technology works for people who can afford to pay for it, and it works for those who cannot afford to pay for it. So, if you can find a sponsor motivated to fund a project for one reason but the results of the project can be re-targeted to answer another need, that is fantastic. All of this smaller, lighter, cheaper and more robust computer technology has equal value in the first world and the third world.

Postscript: Why the University Environment?

Choosing a career in the university instead of industry is all about how you feel that you personally best achieve impact. If you do something exciting in a computing company like Microsoft, it can wind up on millions of desktops. If you create something great at Boeing, millions of people can be flying on it.

At a university, it’s all about producing new generations of people. My undergraduate mentor at Brown University, Andy van Dam, plucked me out of an introductory computer science course, got me working 80 hours a week on research projects, and totally changed my life, and many other lives. There was a time, for example, when the computer science department chairs at Washington, Maryland, Princeton, MIT, and Waterloo were all his former undergrads, as were Brad Silverberg, who headed Windows 95 and Internet Explorer at Microsoft, Andy Hertzfeld, who did about 1/3 of the original Mac operating system, and John Crawford, who oversaw the whole X86 architecture family at Intel. This shows the impact of a university professor investing in people. That’s why I got into the job.

The other reason has to do with innovation. Industry, academia, and government are essential partners in driving high-tech innovation. All three legs of the stool are needed. Almost all of the information technologies on which we rely can trace a significant part of their lineage back to federally-funded, university-based research programs. Universities may not have directly created many e-commerce companies, but all of those companies rely essentially on the Internet, web browsers, public key cryptography, and back-end parallel and relational database systems. The university lineage of all of those technologies is absolutely clear.