Artificial Intelligence

This time it looks real.

One of the few advantages of being older is having a perspective on the advances in technology. I was a student in the 1960s when the first promises of artificial intelligence were made with great fanfare. But the excitement and hype died rather suddenly, in part because of a book titled “Perceptrons” by Marvin Minsky and Seymour Papert. They identified fundamental limits of the AI approaches of the day, which led to a great decrease in research funding and ushered in the AI winter of the 1970s.

A second round started in the 1980s. Technology had become more powerful, and the PC had come on the scene, distributing computing power to the masses. Expert systems (ES) were being touted as the answer to many human decision-making challenges, including replacing much of what doctors (or pilots) did. Surely this was the time for AI systems to make a difference. In reality, complex decision making was much more challenging than AI enthusiasts had believed. Much of the work moved from expert systems to expert assistants: parts of the problem could be handled by the ES, but final judgment rested with the person. Useful (sometimes), but a long way from the promise.

In the 1990s, virtual reality became a focus: the creation of a virtual world where humans could experience a reality different from what they could in real life. Much of this was reserved for games. I well remember racing down a slalom course on a VR system, competing for time on ski slopes I would never attempt in real life. The vibrations in the skis and the visual cues were amazing and fun. The lack of pain from a crash was even better. In reality, this was far from reality.

At Boeing, we began to look at this technology for business use in a laboratory I led. Bob Abarbanel led a team in developing FlyThru, a VR system that allowed engineers, managers, and potential customers to “fly through” a digital assembly of parts as if it were a real airplane. This became a key tool in the design of the 777 airplane. David Mizell headed the team that allowed those in the factory to try assembly procedures in the virtual world. Tom Caudell had the idea of merging the virtual and real worlds, projecting instructions for a repair procedure onto the physical part of the airplane and giving the mechanic hands-free access to information. He coined a new term for this, “augmented reality,” in 1990. I admit I smile when I see the current hype about virtual and augmented reality as if they were something new. It is an example of a phrase popular among technologists: “The future is already here; it is just unevenly distributed.”

When IBM’s Deep Blue defeated the reigning world chess champion, Garry Kasparov, in 1997, new hype began: these systems are going to rule the world. Chess had been seen as the ultimate challenge; if a computer could master chess, then any activity of the human brain was fair game, according to the promises of the 1960s. This thinking simply demonstrated that these researchers did not understand the human brain. In May 2017, DeepMind’s AlphaGo defeated Go champion Ke Jie. Since Go is considered among the most complicated of board games, the promise of AI seemed even more real.

Now, technology seems to have arrived at a point where these systems will have a greater and greater impact on all of us. They will invade our lives, our workplaces, and society in ways that will produce much more substantial change than all that has happened in the past 50 years. These systems will make our lives safer and better; they will make products cheaper and better; and their promise is real enough to lead to substantial investment in the companies that build them. Yet at the same time there are significant questions about such systems that should engage us, not with emotional resistance or fear, but with careful thought at the levels of design, personal use, organizational use, and societal policy and impact. To engage thoughtfully requires that we understand enough about these systems to inform our responses to them.

Here I will focus attention on two questions I believe we should be considering. The first is: What can go wrong with such systems? Albert Einstein once said, “We cannot solve problems by the same kind of thinking we used when we created them.” The second question is: How might such systems impact society? But first I want to briefly describe how AI systems differ from traditional computer programs, because this understanding informs our response to both questions.

AI systems features

Modern AI systems differ from standard computer programs in an important way. A typical computer program follows an algorithm, a step-by-step procedure that starts with certain data and instructions and ends with a result in a repeatable, reliable way. A recipe for a cake follows this pattern: given a set of ingredients, combine them in this way, cook them at this temperature for this period of time, and at the end we have our cake. This is also precisely what an accounting program does: given this data, produce a cash flow or profit-and-loss statement. The human did the thinking and laid out the steps, and the computer carried out the calculations, producing the results.
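
To make the contrast concrete, here is a minimal sketch in Python of the kind of step-by-step program described above. The figures and the function are invented for illustration; the point is that the programmer spells out every step, and the same data always produces the same result.

    # A step-by-step "recipe": the programmer specifies every instruction.
    income = [12000, 9500, 14300]    # monthly revenue (invented figures)
    expenses = [8000, 8700, 9100]    # monthly costs (invented figures)

    def profit_and_loss(income, expenses):
        # Fixed instructions: sum the revenue, sum the costs, subtract.
        return sum(income) - sum(expenses)

    print(profit_and_loss(income, expenses))  # the same answer every time: 10000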

AI systems work differently.1 The human may not understand the process, but the person feeds the system some rules of the game and some examples of good output derived from input2, and the computer system (or learning system, in this case) figures out how to produce good output from a given input. To emphasize: the person behind the system did not provide the instructions, and indeed may not know how to provide them, but the learning system figures out a way to produce a result from the input. In a sense, this is how a child learns: lots of trial and error, some false starts, some correction, and then she or he learns.
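
Here is a minimal sketch of that idea in Python, with invented example data. The programmer never writes the rule for converting miles to kilometers; the learning system is given only example pairs of input and good output, and it adjusts a single number by trial and error until its own outputs match the examples.

    # Learning from examples rather than from instructions (a toy illustration).
    # Pairs of input (miles) and good output (kilometers); the rule itself is never coded.
    examples = [(1, 1.609), (5, 8.047), (10, 16.093), (20, 32.187)]

    weight = 0.0                  # the system's current guess at the rule
    for _ in range(2000):         # repeated trial and error
        for miles, km in examples:
            error = weight * miles - km
            weight -= 0.001 * error * miles   # nudge the guess to shrink the error

    print(round(weight, 3))       # about 1.609, discovered from the examples alone
    print(round(weight * 3, 2))   # a prediction for an input the system never saw: about 4.83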

Here are three examples. It used to be that computer-based language translation was based on the programmer providing the instructions for translating a document, based on vocabulary, rules of grammar, and so forth. The results were poor. Computer-based translations were barely readable, and at best were aids to a human translator. More recently, work on computer-based translation has followed a different course: provide the learning system with documents in one language and examples of good human translation, and let the system determine how to get from one to the other. Such systems have made a significant improvement in language translation. An excellent summary can be found in Gideon Lewis-Kraus, “The Great AI Awakening,” The New York Times Magazine, December 2016.

A second, simpler example is about teaching a learning system to do what many children can do1: tell the difference between a wolf and a dog. The distinctions are challenging to describe in any sort of step-by-step procedure, though many children can tell the difference. The AI programmers provided the learning system with a series of pictures, each properly labeled as a dog or a wolf. They then fed a variety of new pictures to the system, and it appeared to have learned to distinguish dogs from wolves.
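
A toy version of this workflow, in Python, may help. Real systems learn from the raw pixels of the pictures; here each “picture” is reduced to two invented numbers purely to illustrate the pattern: train on labeled examples, then test on new pictures the system has never seen. All of the data below is made up.

    # Hypothetical "pictures," each summarized by two made-up features and a label.
    labeled_pictures = [
        ((0.20, 0.30), "dog"), ((0.30, 0.40), "dog"), ((0.25, 0.35), "dog"),
        ((0.80, 0.90), "wolf"), ((0.90, 0.85), "wolf"), ((0.85, 0.80), "wolf"),
    ]

    def centroid(points):
        # Average the feature values, one coordinate at a time.
        return tuple(sum(vals) / len(vals) for vals in zip(*points))

    # "Training": summarize each label by the average of its examples.
    dog_center = centroid([p for p, label in labeled_pictures if label == "dog"])
    wolf_center = centroid([p for p, label in labeled_pictures if label == "wolf"])

    def classify(picture):
        # Label a new picture by whichever summary it sits closer to.
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return "dog" if dist(picture, dog_center) < dist(picture, wolf_center) else "wolf"

    # "Testing": new pictures that were not in the training set.
    print(classify((0.28, 0.32)))   # dog
    print(classify((0.88, 0.90)))   # wolf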

A third, harder example is self-driving cars. It would be impossible to lay out a step-by-step procedure for all of the decisions a person must make driving across town. But it would be enough to feed a variety of rules to the car’s AI system and let it learn how to drive, much like a teenager learns to drive. Speeding is bad; going too slow is bad. Crashing into cars or pedestrians is bad. Anticipating and avoiding accidents caused by other vehicles is good. Knowing the shortest way to the destination and following it is good. And so forth. With experience and testing, the car learns to drive, to navigate through traffic, to avoid accidents, to take the best route, and so on. The advantage of a car learning to drive is that the result can then be downloaded to other cars. As experience grows, cars can share what they continue to learn with other cars. The result is safer, more reliable driving.
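
A rough sketch of what “feeding rules” to such a system might look like, in Python. This is not how any real self-driving system is built; it only illustrates the idea of scoring behavior (penalize crashes, speeding, and dawdling; reward progress) so that a learning system can be tuned to choose actions that earn the highest score over many simulated trips. All names and numbers are invented.

    # Hypothetical scoring of one moment of driving.
    def score_step(speed, speed_limit, crashed, near_miss, progress):
        score = 0.0
        if crashed:
            score -= 1000.0                        # crashing into cars or pedestrians is very bad
        if near_miss:
            score -= 50.0                          # failing to anticipate others is bad
        if speed > speed_limit:
            score -= 5.0 * (speed - speed_limit)   # speeding is bad
        elif speed < 0.5 * speed_limit:
            score -= 2.0                           # going too slow is also bad
        score += 10.0 * progress                   # getting closer to the destination is good
        return score

    # Slightly over the limit, no incident, a little progress toward the destination: -14.0
    print(score_step(speed=38, speed_limit=35, crashed=False, near_miss=False, progress=0.1))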

Auto accidents killed about 40,000 people on the highways of the U.S. in 2016. Distracted or drowsy drivers account for many of these deaths, and computers don’t get distracted or drowsy. Self-driving cars, while new and frightening to many, are already better drivers than humans by most measures. And they will continue to get better. Because they are new, however, humans do a poor job of comparing the risks. That is why a single accident involving a self-driving car in California rates headlines around the nation, while thousands of more serious accidents involving cars with drivers happen every day.

What can go wrong?

So what can go wrong? There are many possibilities, but here are three.

First, the systems learn by matching patterns based on a set of goals and constraints. The goals may not be as complete as they should be, and the patterns, while often reliable, may not lead to the best conclusions.

Let’s return to the example of telling the difference between wolves and dogs.

The system learned well and was accurate on the pictures provided. But one day the researchers noticed that the system was getting a number of wrong answers. Why was it mixing up dogs and wolves? The decision criteria had not been prescribed; they had to be inferred from patterns in the training examples. It turned out that a key distinguishing feature the system had learned was that wolves were the animals standing on snow and dogs did not stand on snow! Yet another demonstration that correlation (apparently the training pictures featured wolves on snow) is very different from causation.

There will be times when the system, say the self-driving car, learns the wrong thing. Or it may lack maturity in its learning. In the case of human drivers, we have made peace with such judgments. We still license teenage drivers even knowing that the risk of accidents is higher and the maturity of judgment may be less. How do we make this judgment about self-driving cars?

Second, any connected electronic system is vulnerable to tampering or hacking. Such “interference” may not appear to have a parallel in cars with drivers, but perhaps it does: we know something about distracted drivers, particularly those reaching for a phone.

The third is simply reliability. This week my Comcast email failed for 12 hours, and no messages could come in or out. “Rebooting” an electronic system is an all too familiar task. It is one thing, however, to lose email access for a while, and quite another for a car to suddenly stop functioning.

Mitigating these three (and other) potential failures will be a challenge. But this challenge should be addressed in the context of the clear benefits: Self-driving cars must be safer and more reliable than cars with drivers for this to be of value, and they already are.

How will such systems impact society?

There are two quite different ways that AI systems will impact society (and, I am sure, many others as well). The first, which I mention only briefly, relates to jobs.

Today, three million people earn their living driving buses, cars, trucks, and the like. If the switch to driverless vehicles happens as quickly as many are predicting, this will be a huge disruption in the labor force. Add to this medical work such as a radiologist’s reading of X-rays, supporting diagnosis, and testing pharmaceutical drugs, and we see a significant number of jobs that are also vulnerable. Sorting legal documents, working through accounting categories, and many other diverse jobs are at risk as well.

It is easy to argue that we have been through this before. The Industrial Revolution is one example, as is the larger migration from farm to city jobs. Many of those transitions, however, unfolded over longer periods, enabling the retraining of workers for other semi-skilled positions. This time things can happen much more quickly, and the retraining may involve much more complex new skills that both take longer to learn and don’t fit everyone. On the other hand, there are many jobs that need to be filled but do not pay very well; service jobs for an aging population are but one example.

I believe society will need to wrestle with separating work from pay, and perhaps at some point consider a universal basic income, with all of its potential downsides.

The second societal impact is the need to develop laws that deal with a fundamentally different set of regulatory issues. To take but one illustration, traffic laws are set up to regulate speeding, control at intersections, drunk driving, distracted driving, and the like. Drivers get tickets for these infractions. These are not the issues for driverless cars. These cars will obey the speed limits, stop at stop signs, avoid collisions, and so on. But how do these vehicles need to be regulated for safety?

Admittedly, we stumbled toward our current system of regulation through gradual changes as automobiles and other technologies developed. My grandfather, for example, started driving without a license. I never had driver’s education, except from my father. But we have recognized the need for more regulation as more cars hit the road. We will have less time to adapt to these new changes.

What constitutes “passing a driver test” for a driverless car? If one gets in an accident, who is accountable? As AI moves into many areas of society, we must think about what a civil society looks like with robots and AI systems.

Conclusions

There are many other issues as well. Some people just love to drive, for example; will they give up their cars easily?

But love it or hate it, these changes are coming with the clear promise of safer, cheaper, better. We won’t fight this any more successfully than the Luddites fought the factories. Will we have good dialogues and explore the issues for mutual benefit, or will we polarize around them as we have around so many other issues?

 

Al Erisman is executive editor of Ethix, which he co-founded in 1998.
He spent 32 years at The Boeing Company, the last 11 as director of technology.
He was selected as a senior technical fellow of The Boeing Company in 1990,
and received his PhD in applied mathematics from Iowa State University.

aerisman@spu.edu

1 Tripp Parker, recently from Microsoft, shared with me both the description of how AI systems work and the example of learning to tell the difference between pictures of dogs and pictures of wolves. Tripp has degrees in computer science, computer engineering, and philosophy and is a smart young man. It encourages me to see thinkers like him in the upcoming generation.

2 Parker suggested this clarifying note on AI systems: Supervised learning is where you give the system labeled information (e.g., “this is a wolf”), the model gets trained based on that, and then (hopefully) it’s able to correctly handle new inputs that it hasn’t seen before. There’s also “unsupervised” learning, where you don’t tell the system anything about a given input. An example of unsupervised learning would be: you have a bunch of photos of six people, and you want your system to divide this dataset into six piles, each with the photos of one individual. You don’t tell it who is who, and indeed you don’t even tell it that you want each pile to be the photos of one individual. It just figures out the similarities between the photos (assuming you have enough of them, and they’re sufficiently differentiated in other ways).
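
A minimal sketch of that photo-sorting example, assuming Python with NumPy and scikit-learn installed. A real system would first turn each photo into a vector of numeric features; here random vectors scattered around six invented “people” stand in for those features, and k-means clustering sorts them into six piles without ever being told who is who.

    # Unsupervised learning: group unlabeled "photos" into six piles.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    people = rng.normal(size=(6, 8)) * 10     # six hypothetical individuals, 8 features each
    photos = np.vstack([p + rng.normal(size=(20, 8)) for p in people])  # 20 photos per person

    piles = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(photos)
    print(piles[:20])   # the first person's photos should mostly land in the same pile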