If you have an 8:30 a.m. appointment downtown, and it takes an hour to get there, what time do you have to wake up? Almost everyone will think about this problem backwards. That is, start at 8:30 a.m., allow an hour for travel, figure time for getting dressed, eating breakfast, and so on, and you may conclude you need to set the alarm for 6:00 a.m.
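This backwards calculation is simple enough to write down directly. The sketch below works back from the appointment time by subtracting each duration in reverse order; the 90 minutes allowed for getting ready is an illustrative assumption, chosen so the arithmetic matches the example above.

```python
from datetime import datetime, timedelta

# Work backwards from the fixed end point, subtracting each
# duration in reverse order. The durations are illustrative.
appointment = datetime(2024, 1, 15, 8, 30)   # 8:30 a.m. downtown
travel = timedelta(hours=1)                  # drive downtown
getting_ready = timedelta(minutes=90)        # dress, breakfast, etc.

leave_home = appointment - travel            # 7:30 a.m.
alarm = leave_home - getting_ready           # 6:00 a.m.

print(alarm.strftime("%H:%M"))               # 06:00
```

Thinking forwards would require guessing a wake-up time and checking whether it works; thinking backwards computes it in one pass.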
It turns out this classic tool of thinking backwards is broadly used in planning problems, from simple ones like this to the challenge of building a new skyscraper. In their book The Minding Organization (reviewed elsewhere in this issue) Moshe Rubinstein and Iris Firstenberg refer to this as kniht ("think" backwards). Their conclusion, and mine, is that thinking backwards goes far beyond project planning. At the end of this discussion, I would like to apply it to the problems of implementing large-scale technology in a business.
Beyond the early, almost unconscious application of the principle to planning a trip, planning my day, etc., I encountered this way of thinking in graduate school in mathematics. We all know that computers use inexact arithmetic in carrying out computations. Errors are referred to as “rounding errors” when approximations are made in the computation.
In the 1960s people worried a great deal about the accumulation of these rounding errors through long computations such as weather forecasting, structural analysis, and the like. A whole science started to build around something called "interval arithmetic." That is, could you carry the range represented by each number through the computation, so that in the end you would have the range of the solution? First, this is incredibly difficult. Perhaps more importantly, it tends to produce very pessimistic results.
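The pessimism is easy to demonstrate with a toy interval type. A minimal sketch (the numbers here are invented for illustration): each value is carried as a (low, high) range, and because every operation must assume the worst case, ranges balloon even when the errors would in fact cancel.

```python
# Toy interval arithmetic: every value is a (low, high) range.
def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    # Worst case: subtract the largest possible b from the smallest a.
    return (a[0] - b[1], a[1] - b[0])

x = (0.99, 1.01)            # "1" known to within +/- 0.01

# x - x is exactly 0, but interval arithmetic cannot see that the
# two operands are the same quantity:
print(sub(x, x))            # roughly (-0.02, 0.02) instead of (0, 0)

# Chaining operations keeps widening the range:
acc = x
for _ in range(100):
    acc = add(acc, sub(x, x))
print(acc)                  # roughly (-1.01, 3.01): hopelessly pessimistic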
Then James Wilkinson at the National Physical Laboratory in England made an important observation. He suggested looking at the problem backwards, in what he called backward error analysis. Instead of trying to account for the accumulation of each error at each step in the computation, wait until the end and get a computed solution. Then ask the question, "Is this computed solution the exact answer to a problem close to the one I started with?" What followed was the demonstration that useful mathematical algorithms could reliably produce such results. This work may have done more to change the nature of scientific computer-based calculation than any other.
It is not much of a stretch from this to the world of manufacturing, though there is no evidence that the manufacturing community was ever aware of Wilkinson's work. In the early days of manufacturing, production systems were built on the premise that work could be broken down into small, simple tasks, each requiring a predictable amount of time. A particularly expensive piece of machinery might be able to produce, at maximum capacity, 500 parts in an hour. On the auto assembly line the installation of a door might be done in one minute. In a more complicated assembly like an airplane, a particular task may be planned for four hours.
Then the whole planning process for assembly could be built (perhaps even using backwards thinking principles) to produce a car or airplane in a predictable amount of time. The scheduling was very complicated, and improvements to such schedules were achieved using complicated optimization software in Manufacturing Resource Planning (MRP) models.
While it may be true that an individual can install a door in one minute, or a machine can produce 500 parts per hour, there are many sources of variation. For example, while most doors may be installed in one minute, a missing part or a distracted worker or a broken tool may change this to two minutes on rare occasions. How does the assembly line respond to this variation? What happens if things stop upstream, but a particular machine continues to produce 500 parts per hour? Where do you store all of the parts when everything else is stopped?
To avoid the problem, you could put more slack in the system. For example, you could allow two minutes for installing the door instead of one. The difficulty is that this would make the entire system very inefficient, because every step would be padded for the "worst case."
You could picture this as a huge pile of snow building up on the front end of a shovel as it is moved up the path. Ultimately, things grind to a halt. In this sense it is much like trying to track the accumulation of errors in complex computation.
The solution again is to think about things backwards. At each stage of the process there are resources needed from the previous stage. Create a system in which each stage reaches back and calls for what it needs from the stage before it. This is called a "pull" manufacturing system. Its remarkable simplicity can overcome many of the complexity issues of the "push" systems. What you are really doing at each stage of the process is thinking about what is needed to produce the proper end state. An entertaining and popular discussion of the principles of such a system can be found in The Goal by Eli Goldratt (1984).
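The pull principle can be sketched in a few lines. In this toy model (the station names and three-stage line are invented for illustration), each station produces a part only when the stage downstream asks for one, so final demand drives everything and nothing piles up when demand stops:

```python
class Station:
    """A workstation that produces one part only when pulled."""
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream   # the stage this station pulls inputs from
        self.produced = 0

    def pull(self):
        # Reach back: get what we need from the previous stage first.
        if self.upstream is not None:
            self.upstream.pull()
        self.produced += 1

# A three-stage line: stamping -> painting -> assembly
stamping = Station("stamping")
painting = Station("painting", upstream=stamping)
assembly = Station("assembly", upstream=painting)

# Final demand drives everything: five finished units are pulled...
for _ in range(5):
    assembly.pull()

# ...and every upstream stage made exactly five parts: no excess inventory.
print([(s.name, s.produced) for s in (stamping, painting, assembly)])
# [('stamping', 5), ('painting', 5), ('assembly', 5)]
```

In a push system, by contrast, each station would produce at its own rated capacity and the excess would accumulate between stages, which is precisely the snow-shovel problem described below.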
Many people carry similar black suitcases that can be pulled along on wheels. Recognizing one's suitcase at the baggage claim area can be tricky, particularly among many similar bags. Backwards thinking would suggest that we arrange for the bag to call to us at the end. This could be done with an embedded computer chip that signals our smart card as the bag comes down the line. Or, more simply, we could tie a yellow ribbon to the handle so it calls to us visually.
Going beyond these very specific examples, we encounter the principle in other areas of thought.
For example, the second habit Stephen Covey discusses in his book The Seven Habits of Highly Effective People is "Begin with the End in Mind." In the Bible, Jesus asks the question, "What good will it be for a man if he gains the whole world but forfeits his soul?"
Backwards thinking is thus a fundamental part of project planning, scientific computing, manufacturing, tracking your suitcase, and planning your life. It would probably not be difficult for most readers to fill in other areas where the principle applies.
How does this apply to technology in business? When I was managing R&D in information technology for Boeing, I would usually ask the question in a project review: “What happens if you are successful?” In other words, completing the project with all of its technical objectives wouldn’t bring value to the company unless it could be implemented in the business and produce business value. Often the barriers are not technical but involve thinking ahead to how these project results would work with others and would affect the work people do every day.
However, because of the complexity of many of today's systems, unintended consequences often result. The systems do what was intended, but they do some other things as well. Edward Tenner discusses many fascinating cases of unintended consequences in his excellent book Why Things Bite Back. In one case he asks why protective equipment was introduced in American football. The obvious answer is to protect the players and make the game safer. Interestingly, the statistics suggest otherwise. The protective equipment changed the nature of the game, resulting in harder hits and more injuries.
We must try to understand what unintended consequences might result from implementing technology in the business. Of course this is not easy. But by thinking backwards, beginning with the end in mind, we might together try to picture what life will be like with the new system operating.
How will jobs change? What happens when the system fails? Are there things we now are doing that will no longer be possible? Can we get by without these things? I have observed that often systems are designed to solve known problems, but consideration is not given to assure that things now working will continue to work. This is because many of the things now working are adaptations that have been made to work with the existing system. These benefits are often not documented or even widely known.
Beginning with the end in mind, thinking backwards, will not solve all of these issues. We will never anticipate all of the unintended consequences. But thinking backwards is a powerful tool that needs to be much more widely applied in the continued application of technology to business.
Al Erisman is executive editor of Ethix, which he co-founded in 1998. He spent 32 years at The Boeing Company, the last 11 as director of technology. He was selected as a senior technical fellow of The Boeing Company in 1990, and received his Ph.D. in applied mathematics from Iowa State University.