Vinton G. Cerf: 'A Father of the Internet' Looks Ahead to New Technology

Vinton G. Cerf is senior vice president of Technology Strategy for MCI. In this role, Cerf is responsible for helping to guide corporate strategy development from the technical perspective. From 1994 to 2003, he served as senior vice president of architecture and technology, moving to a strategic role in mid-2003.

Widely known as a “Father of the Internet,” Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. In December 1997, President Bill Clinton presented the U.S. National Medal of Technology to Cerf and his colleague, Robert E. Kahn, for founding and developing the Internet.

Prior to rejoining MCI in 1994, Cerf was vice president of the Corporation for National Research Initiatives (CNRI). As vice president of MCI Digital Information Services from 1982 to 1986, he led the engineering of MCI Mail, the first commercial email service to be connected to the Internet.

During his tenure from 1976 to 1982 with the U.S. Department of Defense’s Advanced Research Projects Agency (DARPA), Cerf played a key role leading the development of Internet and Internet-related data packet and security technologies.

He serves as chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN). Cerf served as founding president of the Internet Society from 1992 to 1995 and in 1999 served a term as chairman of the board.

Cerf is a recipient of numerous awards and commendations in connection with his work on the Internet. In December 1994, People magazine identified Cerf as one of that year’s “25 Most Intriguing People.” He also holds an appointment as distinguished visiting scientist at the Jet Propulsion Laboratory where he is working on the design of an interplanetary Internet. Cerf holds a bachelor of science degree in mathematics from Stanford University and master of science and Ph.D. degrees in computer science from UCLA. He also holds 11 honorary doctoral degrees from universities in six different countries.

◊ ◊ ◊ ◊ ◊

Ethix: You are often referred to as the “Father of the Internet,” so …

Vinton G. Cerf: You and I both know that is a misnomer. We need to be very conscious of the fact that the Internet is a huge collaborative enterprise.

Would it be better to say that you were involved in key foundational technologies that gave rise to the Internet as we now know it?

That is true. My colleague Robert Kahn and I developed the early protocols for TCP/IP (Transmission Control Protocol/Internet Protocol) that became the foundation for the Internet. The original motivation was to address a military requirement. The predecessor network, the ARPANET, had demonstrated the feasibility of hooking together diverse computers with different operating systems, using a standard set of protocols for communicating across a packet-switched network. Beginning in 1969, this was an early demonstration that you could take really different operating systems and computers of varying word sizes and get them working together without some form of pairwise translation between each distinct pair of computers. That was an important milestone, because the Defense Department wanted to figure out whether it was useful and feasible to do that.

At this time all of the computer vendors had their own proprietary networking capability for linking their own computers: IBM had SNA, Digital Equipment had DECnet, Hewlett-Packard had DS, and so on. They did this to lock customers into their own computers, but that was not satisfactory to the Defense Department, which did not want to be backed into a corner, having to buy everything from one vendor. So when Bob and I started working on the basic Internet protocols in 1973, we also wanted to create a solution that could credibly be proposed as an international standard. We also wanted to be able to interconnect an unlimited number of packet networks using radio, satellite, and wireline transmission, and packet switching. The ARPANET host protocols were closely dependent on the underlying network design. We wanted to break away from that dependence and produce network-independent, host-level protocols that would work across an unspecified number of packet networks.

The original motivation [for the Internet] was to address a military requirement.

In May 1974, we published the first paper describing the TCP protocol. I was surprised to see that a copy of this paper recently sold for thousands of dollars at an auction by a well-known auction house. My wife and I are looking for other copies in our back files! After several iterations of design and multiple implementation tests, we ultimately ended up, around 1978 or so, with the version you are running today, in which TCP and IP are split apart into two different layers. It was not until 1983 that this was rolled out in a serious way for everyone to use.
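
To see what that split means in practice: an application hands its data to TCP, which provides a reliable, ordered byte stream, while IP underneath simply moves individual packets across whatever networks sit in between. Here is a minimal sketch using the modern sockets API that grew out of that design; the host and port are illustrative placeholders, not anything from the interview.

```python
import socket

# The application asks TCP for a reliable byte stream; IP underneath just
# moves packets across whatever networks lie in between. The host and port
# are illustrative placeholders.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = sock.recv(4096)
    print(reply.decode("latin-1", errors="replace").splitlines()[0])
```

Nothing in this code names a network technology; that independence from the underlying networks is exactly what the layering was designed to provide.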

Was this immediately embraced by the user community?

No. People were comfortable with the limited capability they had. But DARPA basically told the prospective users to switch to the new capability or they would not get research funding for the next year—a very strong motivation! So, we basically pushed it down everybody’s throat in January 1983. But then there ensued a 10-year period of fighting with the Open Systems Interconnection (OSI) working group. This was a very difficult battle because the Defense Department bought the argument that OSI would ultimately be the international standard since it was based on the collaborative work undertaken by the International Organization for Standardization (ISO). Even the Defense Department signed up for that, in spite of the fact that it had just rolled out TCP/IP.

Academia: First TCP/IP User

So, we hunkered down to make it work. Our philosophy was to implement and test first, and then standardize what worked. The other group generally tried to standardize first, before there was evidence that the ideas would work. Very quickly TCP/IP penetrated the academic world, partly because the computer science departments were doing the work in the first place. Interestingly, the telecommunications community basically rejected the whole idea as being silly. By about 1990, the National Science Foundation and the academic community were behind the effort, though we were still a few years away from the World Wide Web. Sir Tim Berners-Lee began work in earnest on the Web in 1989, while at CERN (the European center for particle physics).

What was the tipping point that caused the Internet to take off?

Interestingly enough, the explosion did not start with the World Wide Web, even though it is a popular theory that it did. The statistics tell us that the explosion started in 1988. That was when the community working on the Internet got permission from the U.S. government to allow commercial traffic. We hooked up MCI Mail to the Internet in 1989, and very quickly thereafter two things happened.

Once you start charging [for sending email] you create a variety of constraints you might not want.

First, several commercial Internet Service Providers (ISPs) emerged almost immediately. UUNET was one of them, ultimately acquired by MCI. Another was called PSINet, a for-profit spinoff from the New York State Education and Research Network (NYSERNET). The third was called CERFnet, built by General Atomics in San Diego to serve the academic community in the Southern California area. It was named after me, although I had no connection with it. After those three commercial services began in 1989, the number of hosts on the Internet started to double every year. Tim Berners-Lee also released his first World Wide Web implementation, though it was only text-based at first.

Graphics, More Content, Major Tipping Points

In 1992, Marc Andreessen and Eric Bina at the National Center for Supercomputing Applications (NCSA) at the University of Illinois released the graphical Mosaic implementation for the World Wide Web. Suddenly you are not looking at screens full of text anymore; you are seeing imagery and pointing and clicking on hyperlinks to navigate the complex universe of documents. Streaming audio and video came later, but the graphics clearly galvanized the community. Moreover, it was a medium in which you had group interaction taking place, as opposed to classic mass media (one-to-many) or point-to-point communication, as in the telephone. So, this is instantly understandable as a rich environment, and usage started to explode very quickly. This caused an avalanche of content to come onto the Net, and now we are talking about a tipping point, because the more content there is, the more interest there is in using the Internet.

Another thing that stimulated growth was a function called View Source. If you saw an interesting Web page, and you liked the way it was laid out, View Source allowed you to see the Hypertext Markup Language (HTML) code that was used to construct the page. So, people learned by example how to program HTML and create their own Web pages.
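
For readers who never saw it, the pages View Source exposed were short enough to absorb at a glance. Here is a minimal sketch of such a page, generated by a few lines of Python in the spirit of the HTML-producing tools mentioned below; the title, text, and file name are invented placeholders.

```python
# Writes the kind of minimal page that View Source exposed, the way early
# HTML-generating tools did. Title, text, and file name are placeholders.
page = """<html>
  <head><title>My First Page</title></head>
  <body>
    <h1>Hello, Web</h1>
    <p>Learned by example: <a href="http://example.com/">a hyperlink</a>.</p>
  </body>
</html>
"""

with open("index.html", "w") as f:
    f.write(page)
print("wrote index.html; open it in a browser and View Source shows the markup")
```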

Later, we got tools that could generate HTML from popular text editors, allowing anyone to build a Web page. Content flowed quickly into the network, since anybody could do it and no permission was required. The next key moment came in 1994, when Jim Clark got Marc Andreessen and Eric Bina together and started Netscape Communications (it was originally called Mosaic Communications but ran into trademark problems). Its initial public offering (IPO) in 1995 was of legendary scale and started the Dot-Boom, which became the Dot-Bust in April 2000. There was a lot of hype building up to this point, with statements that the use of the Net was doubling every 100 days. Our research shows it never grew at that pace, but that capacity was being built at that pace. On the other hand, a lot of people thought that growth ended in 2000. In fact, traffic and endpoints on the Net have continued to grow at a pace as high as doubling each year, as they did throughout the buildup period.

Al Gore’s Role in Web Legislation

What role did Al Gore have in this?

Al Gore actually had a very legitimate role. My first awareness of his interest came in 1986. He held a hearing in September or October, sometime late in the year. Bob Kahn, my partner in all this, was one of those who testified at the hearing and introduced the term “Information Infrastructure” into the discussion. At the end of the hearing, Al Gore asked the panelists, “Would it make sense to connect the NSF-sponsored supercomputers to each other with an optical fiber network?” He was already thinking about the information superhighway.

So, Al Gore invited us to go away for a few days the following year, with about 100 others, led by Gordon Bell at the National Science Foundation. We put together expansion plans for NSFNET with very significant growth in communication speeds (from T1 speed, about 1.5 Mbps, to DS3 at 45 Mbps, to OC3 at 155 Mbps, over a period of something like three to five years). There was a huge infusion of energy to grow the network, both physically and in the rate at which it could carry traffic.

Then Gore came back into the picture in 1992. The 1988 permission that I had gotten from the Federal Networking Council to put the MCI Mail system on the Internet was only for a limited period. In 1992, legislation that Gore had helped to fashion and shepherd through the process was passed, making commercial traffic legal on the government-sponsored backbone network. So from 1992 to ’95 we were seeing a demonstration of the marketplace using the government backbone. By 1995, you did not need the government network anymore, because enough commercial capability had been developed by that time, and the NSFNET was shut down. The government did still need an experimental network, and MCI won the bid to build the vBNS for continued research support.

I was selected to represent Boeing in a meeting with Al Gore in 1989. He invited a group of 20 or so companies to meet with him and discuss ways to make the Internet more widely accessible. So I have defended his role in the Internet, though of course he didn’t invent it.

Actually, he never said that he did. The reporter did him in. I think he was having a conversation about the Internet in the context of legislative activity, because it was in the legislative arena that he really did take the initiative. Al Gore really got it, and he was certainly the leader in the legislature on networking.

Early Commercial Use Forbidden

In the early days, did you foresee any of the downside of the Internet, such as the hackers, viruses, and spam?

No. In fact, we reacted very strongly to the first piece of advertising that showed up. Somebody from Digital Equipment Corporation sent a job advertisement out on one of our academic distribution lists. There was hell to pay. Somebody called Digital saying, “If you ever do that again, we will cut you off from the network!” The community was sufficiently homogeneous that there really was a strong set of controls.

Initially, email was free in the academic world, but we were selling commercial email service on MCI Mail, for example, as were others. So most people paid to do early email. We did not have any spam, because spammers would have had to pay. It was only as the Internet grew sufficiently large, by about 1992, and email became free to all, that large numbers of people started using email and the spammers showed up. In fact, it was at this time we started to see people apologizing for not having an email address on their business cards.

Though I had not anticipated the junk on the Net, I realize that the content is as varied as the population of users. Today, I just sort of sit back and say, “I have to accept this.” The price of being available to the general population is that the interests of the general population are going to show up on the network. There is just no way around that.

Would you advocate charging for it?

That would change the very nature of email, and I don’t think it would work. There is this freedom to communicate, which I would be hesitant to give up, because once you start charging for it, you create a variety of constraints that you might not want. A lot of work has gone into creating filters for email. I get a lot of mail every day, and I do not see a whole lot of spam until I go and look at the filter or junk box. Most of it ends up there, which means the filters are working. Phishing and pharming (see sidebar for definitions) and things like that are even more pernicious, as are the various denial-of-service attacks, the zombie systems, and everything else; frankly, that is scarier than spam.
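
As a rough illustration of the scoring idea behind such filters (a toy example, not any particular product; the training words and threshold are invented), a filter can ask how strongly each word of a message is associated with known spam rather than legitimate mail:

```python
from collections import Counter

# Toy word-frequency filter: score a message by how often its words appeared
# in known spam versus known legitimate mail. Training words and the 0.4
# threshold are invented for illustration only.
spam_words = Counter("free winner prize click free offer free".split())
ham_words = Counter("meeting budget lunch draft schedule review".split())

def spam_score(message: str) -> float:
    words = message.lower().split()
    score = 0.0
    for w in words:
        s, h = spam_words[w], ham_words[w]
        if s + h:
            score += s / (s + h)  # fraction of this word's training uses that were spam
    return score / max(len(words), 1)

msg = "click here for your free prize"
print("junk" if spam_score(msg) > 0.4 else "inbox")
```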

What do you think can be done about security?

This is not something easily dealt with. It is not just a question of putting technology in place, although technology can help. Some people think they can just encrypt everything. Of course, that does not work, because encrypting at one layer in the network does not necessarily solve a problem that might show up at another layer. Operating systems have too many holes in them. Those who build operating systems (Microsoft, Apple, Linux, and all the Unix derivatives) all have products with flaws. We need to challenge the academic computer science departments to start focusing on how to build secure operating systems. We have to do something about this, and it is not simple, because the hackers move as rapidly as the holes are filled.

More Business Applications in Store

Do you think the transformation that has happened in business through the Internet is over and that we have figured it all out by now?

No. I believe maybe 99 percent of all the applications on the Net have not yet been invented. We are very early in this picture. The Internet does disrupt the economics in some parts of business, and businesses need to react. Our business models at MCI have to change in order to accommodate the realities of the Internet. Voice over IP is something we are investing in, and we have started offering these services. It is not as lucrative as the old model, but the old model is going away.

Even at Boeing we saw the transformation from the Internet—the way we deliver parts, the way people travel, what they do on the airplane—and these are still emerging.

I am hoping that Boeing will find a way to make broadband Internet available to everybody who rides in one of their airplanes.

What are you working on now?

The job of the Technology Strategy Group at MCI is to make sure that the company is not surprised by some disruptive new technology. We are also responsible for identifying opportunities for new businesses that might rest on top of emerging technologies. So, our primary role within the strategy organization is to help the company foresee products and services that could be developed to move us from the commodity world, which is where we have evolved, to a much more value-added environment. Managed services, and hosting vertical applications in segments such as healthcare, financial services, and digital media production and distribution, are all areas where we are very interested in moving, because there we can add enough value to generate some margin. Everything else is commodity.

Neuro-Electronics, Nanotech Developments

What is the next big thing in technology?

I am very partial right now to neuro-electronics, where ocular implants are just beginning to appear and cochlear implants are already working very well. There is potential for new things interfacing with our neurosensory or sensorimotor systems. Another area is the link between nanotechnology and sensory systems, creating systems that may be able to mediate the delivery of medications in your body, for example. Nanotechnology also offers the ability to capture information about any kind of biological hazard in very, very low quantities, so that even an early release can be detected quickly.

Another important area is the Internet enabling of many appliances. You can see this starting to happen as mobile devices are becoming Internet enabled. They are becoming sensitive to location, so geolocation databases that bind products, services, and places are going to become increasingly valuable. My guess right now is that some of these trends will be seen most visibly in new forms of entertainment.

The heavy digitization of almost everything means that we are accumulating more and more valuable information that must be protected, so security is a key area. Then, finally, authentication is of emerging importance for many reasons. We need to assure the authenticity of this valuable information and assure that it has not been modified. We must be able to authenticate the person we are dealing with over the Internet, whether for business or personal interaction. So there is a great deal to do, from new applications to new supporting infrastructure.
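
One standard building block for exactly that pair of assurances, authenticity and integrity, is a keyed message authentication code. Here is a minimal sketch, assuming a shared secret already provisioned out of band; the key and message below are placeholders.

```python
import hashlib
import hmac

# A keyed message authentication code ties integrity and authenticity
# together: only holders of the shared key can produce a valid tag, and any
# change to the message changes the tag. Key and message are placeholders;
# real deployments hinge on key management, which is omitted here.
key = b"shared-secret-provisioned-out-of-band"
message = b"transfer $100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag over what it received and compares in
# constant time to avoid timing leaks.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print("authentic and unmodified" if ok else "reject")
```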

Do you think the telecommunications infrastructure is overbuilt? Some telecommunications watchers have said we have enough fiber in the ground to last us for 25 years at least.

There is a lot of fiber in the ground, and whether or not it will be useful to us even five years from now is still an open question. The older fiber requires a lot of repeaters for long-haul movement of data, which affects user performance. At MCI we are now replacing a lot of our core backbone with ultra-long-haul fiber, which eliminates the repeaters and thereby improves performance. In the meantime, the large amount of fiber has reduced telecommunications prices for customers dramatically. This, by the way, is true of underwater cables as well. So, the basic network has been commoditized.

Tom Friedman argues that all the excess capacity, including the underwater cable, has created the global business world we have today, leading to worldwide collaboration. He calls this the flattening of the earth [see InReview, p. 17].

Yes, that is a fair observation. The commoditization does enable business models that otherwise would not work. There is still not enough capacity at the edges of the network, however, and this is our real bottleneck. Two things are being introduced to address this: radio-based connectivity and increasing amounts of fiber going all the way, or very near, to residences. Fiber already goes to many businesses. With a highly populated building, you can always put in a point-to-point broadband circuit, but on the residential side it is a slow process because of the economics. As we get more edge capacity, we will use up the core faster. I anticipate that is going to happen.

The World Goes Digital

If you look at the amount of digital information that you produce today, in a purely residential setting, it is pretty astonishing: digital cameras, digital video, and all the things people produce on their computers. A lot of information is being generated in digital form, which people want to share with their friends and families. So, that means they are going to need some symmetric capability to transmit data. It is not just a question of people getting information from the Internet anymore. There is a growing use of the Internet for people working (or playing) together. Peer-to-peer applications, like the highly symmetric BitTorrent, are capable of helping distribute very, very large files, such as music and digital movies, all over the Internet. Some of these files are probably copyrighted and should not be reproduced or distributed.
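
The mechanism that makes this kind of distribution work can be sketched briefly: content is cut into fixed-size pieces and a hash of each piece is published, so a downloader can fetch pieces from many peers at once and verify each one independently. The sketch below is illustrative only (piece size and content are placeholders; real BitTorrent adds trackers, a peer wire protocol, and piece-selection strategy).

```python
import hashlib

# Piece-hashing sketch: cut content into fixed-size pieces and publish a hash
# per piece, so pieces fetched from any peer can be verified independently.
PIECE_SIZE = 256 * 1024  # 256 KiB

def piece_table(data: bytes) -> list:
    """Hash of every fixed-size piece of the content."""
    return [hashlib.sha1(data[i:i + PIECE_SIZE]).hexdigest()
            for i in range(0, len(data), PIECE_SIZE)]

def verify_piece(index: int, piece: bytes, table: list) -> bool:
    """Check a piece received from any peer against the published table."""
    return hashlib.sha1(piece).hexdigest() == table[index]

data = b"x" * (3 * PIECE_SIZE + 100)  # placeholder content: four pieces
table = piece_table(data)
print(len(table), "pieces; first piece valid:",
      verify_piece(0, data[:PIECE_SIZE], table))
```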

We need to challenge the academic computer science departments to start focusing on how to build secure operating systems.

But I am on record saying that the peer-to-peer technologies are of interest for digital content distribution. It is a legitimate technology, and that is distinct from the fact that it can be misused. It is no different from the VCR, which also can be misused to reproduce copyrighted content. We should be careful not to kill a useful technology out of fear that it will be abused, and instead go after ways of detecting the abuse.

During my time in technology, I have observed that computer manufacturers like IBM, even chip manufacturers such as Intel, have been very aggressive in working with end-user companies including Boeing, Ford, banks, and insurance companies. They wanted to anticipate the next application with these end users, to help them absorb the increasing capability coming from the computing technology. I have not seen this approach from telecommunications companies, and have wondered why.

Telecommunications companies have been stuck in an old business model from the world before deregulation. The local exchange carriers were reluctant to move into fiber until the FCC agreed that competitors would not be given access to another carrier’s new fiber. Only then did they start to put fiber to the home. At MCI, we have started to be more like companies such as Intel, in the sense that we want to stimulate people’s ideas for the use of telecommunications and computing with a variety of applications. We bought NetSec in order to bring more security capability to our customers. We bought a company called ICF from the UK, which does digital media production and some distribution. We are moving along the value-added line wherever we can, for precisely the reason that you point out. It is not that we want people to use more bits per second; that is a commodity. We want them to use more high-margin, high-value applications that we can offer.

Now that mobile devices are programmable, we have an almost unlimited range of things that you can potentially try to do. That is what is stimulating the telecommunications market right now: the programmable devices at the edge of the Net.

Are you excited about the technological future?

Oh! Absolutely, always! I even have a slightly optimistic view about improving security, which is the key to making many things on the network more useful.

What lessons can we learn from past technology transitions that might make the movement to these new technologies quicker or less painful?

Backward compatibility sometimes helps, although the Internet was anything but backward compatible with circuit switching! I doubt there are magic formulae. Look how long it has taken for IPv6 (the sixth version of the Internet Protocol) to gain any traction: a decade or more. Where possible, retaining user interfaces can help a great deal.
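
One of the few practical aids to that transition is writing applications to be address-family agnostic, so the same code works over IPv4 today and IPv6 as it spreads. Here is a minimal sketch, assuming a host that may publish either kind of address; the host and port are placeholders.

```python
import socket

# Address-family-agnostic client: let getaddrinfo return whatever address
# families the host publishes (IPv4, IPv6, or both) and try each in turn.
def connect(host, port):
    last_err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        try:
            s.connect(addr)  # the same client code handles IPv4 and IPv6
            return s
        except OSError as err:
            s.close()
            last_err = err
    raise last_err or OSError("no usable address")

sock = connect("example.com", 80)
print("connected via", "IPv6" if sock.family == socket.AF_INET6 else "IPv4")
sock.close()
```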

Bill Joy, chief technology officer of Sun Microsystems, raised a concern about the potential downsides of increasingly capable technology. How do you align with the fears he expressed?

I think Bill was over the top in his concerns, although it made for a stimulating debate and lots of discussion. I do not think that we are likely to be taken over as a species by artificially intelligent nanobots. I do think, however, that we have some risk in our technology from another vantage point, because the systems that we are building and relying on are just too fragile. We are coming to rely on them more quickly and more heavily than their resilience, resistance, and stability merit. I do not think that we can stop that, because when people see applications, they see convenience, and they really want it. But we have a responsibility in the technical community to do everything we can to make sure that that trust is not misplaced.