In the past several years, our planet and its people have taken a terrible beating from two major events: the worldwide economic crisis of 2008–09 and the Gulf of Mexico oil disaster of 2010, at once environmental and economic. Both produced challenges of unprecedented proportions that will be felt for decades to come. Either would have been significant in its own right; together they landed a gigantic one-two punch on far too many people. How could two seemingly unrelated disasters come so close together? Or are they really unrelated?
The more I have read about the twin disasters, the more I have come to believe they are connected. I will include several quotes from The End of Wall Street by Roger Lowenstein, reviewed elsewhere in this issue, since he specifically discusses the role of economic models in the financial meltdown. I also have some personal insight into the role of mathematical models in oil exploration and extraction.
Here are three interrelated areas, rooted in technology, that these two disasters have in common.
Short-Term Thinking
1. Technology is at the heart of our short-term world, where business leaders push the limits on quarterly (or shorter) returns. There is evidence of this from both the banking leaders and BP’s leadership. Short-term goals lead to “cutting corners”: decisions get made with little regard for long-term consequences. The bankers and the drillers both demonstrated this, ignoring extraordinary risk in the pursuit of near-term profits.
What does technology have to do with short-term thinking? Maximizing financial return over the long term is technically a very difficult problem. What most people do when faced with it is simplify, reducing the problem to maximizing return in the short term. I wrote previously about this in Issue 66: ethix.org/2009/10/01/the-new-capitalism/.
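To make that simplification concrete, here is a minimal sketch with invented numbers: a greedy rule that maximizes only the next quarter picks a different strategy than a rule that maximizes over the full horizon. The strategy names and returns are hypothetical.

```python
# Why quarter-by-quarter optimization diverges from long-term optimization.
# Hypothetical strategies with invented per-quarter returns (in percent).
strategies = {
    "cut_corners":      [9.0, 4.0, 1.0, -6.0],  # front-loaded, then losses
    "invest_for_later": [3.0, 3.5, 4.0,  4.5],  # modest but steady
}

def short_term_choice(strategies):
    """Greedy rule: pick whatever maximizes the very next quarter."""
    return max(strategies, key=lambda s: strategies[s][0])

def long_term_choice(strategies):
    """Pick whatever maximizes cumulative return over the full horizon."""
    return max(strategies, key=lambda s: sum(strategies[s]))

print(short_term_choice(strategies))  # cut_corners
print(long_term_choice(strategies))   # invest_for_later (15% total vs. 8%)
```

The greedy rule is far easier to compute, and far easier to reward, which is exactly why it tends to win in practice.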
Lowenstein comments on the role of short-term thinking in the economic crisis. For example, he says of Merrill Lynch: “… their incentives were biased toward maximizing short-term profits,” p. 75.
Regarding the Gulf of Mexico oil disaster, a June 15, 2010, story from the Associated Press reported,
“BP made a series of money-saving shortcuts that increased danger in a well described by an engineer as a ‘nightmare’ just six days before the April 20 explosion that caused the worst oil spill in U.S. history, according to documents released Monday by a congressional panel.”
Risk in Complex Situations
2. The risks associated with the creation, packaging, and sale of subprime-loan derivatives known as CDOs (collateralized debt obligations), and the risks associated with deep-water drilling, were either not understood by the business decision makers, their boards, and the regulators, or were understood and ignored. Recognizing the true complexity of these situations, let’s assume the former: that the risks were not understood.
In the economic arena, here are two quotes from Lowenstein regarding executive understanding of the complex economic issues:
At AIG, “Joseph Cassano, the aggressive and volatile maestro of AIG’s swaps unit … assured the brass that … AIG had insured only the highest-ranking ‘super senior’ level CDOs…. Cassano frequently attended committee meetings of the directors, in whose company he was naturally charming and genteel. He emphasized to the board that, regardless of what was happening to the market value of CDOs, the instruments were safe, and would remain so barring a catastrophic recession. The directors, understanding little about CDOs except what Cassano told them, were reassured,” pp. 112–113.
Robert Rubin, chairman of the executive group of Citigroup, and former Treasury secretary, “… leaned with the odds. Unlike some senior execs, he understood what a CDO was, but not at the level of detail that might have aroused his concern. This half-knowledge was potentially lethal. He was enamored with the brainpower and mathematical elegance of academically trained financiers.”
To gain insight into this issue in the case of deep-water drilling, I asked the former CEO of a large oil company for his perspective on the cause and cure of the oil spill in the Gulf of Mexico. Here was his response:
“I’m afraid this incident is way beyond my experience or understanding. The deep water and complex subsurface technology must make for huge challenges. The whole industry will be willing BP to find a speedy solution, and all of us will be praying that the slick dissipates without causing any lasting damage to marine environments or indeed the shoreline. After the volcano in Iceland, we are reminded again of the forces of nature and the limits of our response.”
As technology enables us to create more and more complex entities, the gap between decision makers’ understanding and reality is bound to grow. I identified this problem and offered some suggestions in a previous column, in Issue 51: ethix.org/2007/02/01/executive-decisions-about-technology/.
Mathematical Models
3. In both cases, the businesses depended on complex mathematical models to represent reality: in the banking industry, the value and risk associated with complex derivatives of subprime mortgages; in the petroleum industry, guidance on how to drill for and extract oil in deep water. Neither activity would have been possible without the insight from mathematical models made feasible by technology. There is good indication not only that these models were not understood by the ultimate decision makers (point 2), but also that they did not capture all of the complexity of the situations they were meant to model.
We probably don’t recognize how much our modern lives depend on such models. Pharmaceutical drug design, the design of bridges and tunnels, the staging of parts in the production of automobiles, the methods to carry out Google and Bing searches, and even the decision as to which elevator will respond to your call all depend on mathematical models. The good news is that these models can very accurately represent reality in a large variety of circumstances, enabling all sorts of good things. The bad news is that it takes great care and insight to understand the limits of these models and the assumptions rooted deeply in them.
Models and Reality
In the case of the economic models, there is evidence that this was not understood as well as it needed to be. Here is another quote from Lowenstein regarding the role of Ben Bernanke, chairman of the Federal Reserve:
“What is notable was Bernanke’s assumption that the academy now understood perfectly the dynamics of one of the most complex economic eras in American history. Real life is messy and admits to doubt. Bernanke’s research was steeped in econometrics, which offers the certainty of computer models,” p. 85.
All models contain assumptions that must be understood by the users of the models. Here is another example from Lowenstein. In a meeting between ratings agency Fitch Inc. and Thomas Atteberry at the FPA New Income Fund,
“Fitch reiterated that its models forecast smooth sailing for mortgage securities … Fitch’s model assumed that housing prices would rise, as they had during the boom, by an annual percentage in the low-to-mid single digits. During the question-and-answer period, Atteberry asked what would happen to the model if the housing prices were, instead, flat. Fitch admitted that the model would start to break down. ‘What would happen if housing prices fell by one percent or two percent?’ Atteberry wondered. In that case, Fitch replied, the model would break down completely,” p. 88.
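To make the danger concrete, here is a hypothetical toy model in the spirit of that exchange. It is not Fitch’s model; every number in it is invented. Its output looks reassuring precisely as long as its buried assumption, rising home prices, holds.

```python
# A hypothetical toy model of expected losses on a pool of mortgage
# securities. NOT Fitch's model; all numbers are invented to show how a
# single buried assumption (rising home prices) can dominate the output.

def expected_loss(annual_price_change):
    """Expected loss on the pool, as a fraction of principal.

    Buried assumption: while prices rise, troubled borrowers can sell
    or refinance, so the lender is made whole and losses stay near zero.
    """
    base_default_rate = 0.03          # fraction of borrowers in trouble
    if annual_price_change > 0:
        severity = 0.0                # house sells for more than the loan
    else:
        # once prices stop rising, each point of decline deepens losses
        severity = 0.20 + 5.0 * abs(annual_price_change)
    return base_default_rate * severity

for change in (0.04, 0.00, -0.01, -0.02):
    print(f"price change {change:+.0%} -> expected loss {expected_loss(change):.2%}")
# +4% -> 0.00%, flat -> 0.60%, -1% -> 0.75%, -2% -> 0.90%
```

Nothing in the rosy output hints that the zero-loss regime depends entirely on one optimistic input; that is exactly the question Atteberry had to ask.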
Similarly, the models that guide underwater drilling and extraction can do so very well, but they too contain hidden assumptions. It takes deliberate discussion and thoughtful questioning even to identify what those assumptions are, let alone whether they still represent reality in unfamiliar situations.
One of my all-time favorite executives at The Boeing Company was Bert Welliver, the company’s chief technology officer in the early 1990s. At one of our meetings he told me he was worried about Boeing’s model development area and cautioned us not to make the models too easy to use. “I worry about unskilled people getting their hands on these models and using them in a way they were never intended,” he said. “We must make sure that the assumptions contained in these models are readily understood and recognized whenever they are used. The worst scenario is where someone uses the results of a model to make decisions when the results are completely wrong.”
Even the scientists who develop these models often don’t fully realize how dependent the models are on the assumptions that go into building them, since some assumptions are “too obvious to state.” The modeler may never have anticipated someone using his or her model in the way it is later used. Another colleague at Boeing recommended allocating a great deal of time to the “stress testing” of models, giving users a “feel” for what the models can and cannot do. This testing should include people who did not build the models, in order to surface assumptions that were never explicitly stated. And the executives who make decisions based on model results will have to become far more conversant with the strengths and limitations of those models. A minimal sketch of the stress-testing idea follows.
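Here is one way that stress testing might look in miniature: sweep a model’s inputs well beyond the range its builders had in mind, and flag any place where the output jumps, a hint that a hidden assumption just failed. The model, threshold, and function names here are all hypothetical.

```python
# A minimal stress-testing sketch: sweep a model across inputs and flag
# abrupt changes in output. The toy model and threshold are hypothetical.

def toy_model(price_change):
    """Stand-in for a complex model; loss stays ~0 while prices rise."""
    if price_change > 0:
        return 0.0
    return 0.02 + 0.75 * abs(price_change)

def stress_test(model, inputs, jump_threshold):
    """Print the model's output across the sweep, flagging any step
    where the output moves abruptly between neighboring inputs."""
    previous = None
    for x in inputs:
        y = model(x)
        flag = ""
        if previous is not None and abs(y - previous) > jump_threshold:
            flag = "  <-- regime change: question the assumptions here"
        print(f"input {x:+.0%} -> output {y:.2%}{flag}")
        previous = y

# Sweep home-price changes from +5% down to -5% in one-point steps.
stress_test(toy_model, [x / 100 for x in range(5, -6, -1)], jump_threshold=0.01)
```

The flagged step is where a user who did not build the model would start asking the kind of questions Welliver had in mind.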
It is difficult in this fast-paced, bottom-line-driven world to allocate the time and expense required to stress the models and to give executives and decision makers deeper insight into their use. Yet the failure to do so is part of both the economic crisis and the oil disaster.
Conclusion
The frightening thing is that technology will take us further in the direction we have been heading: an even stronger push for short-term results, more new ways to make money from very complex situations, and an even greater gap between the capability of the tools and the insight of those who use them. The good news is that these issues can be addressed. Perhaps these twin disasters will provide the impetus to act more wisely going forward.
We are left with a choice: slow down a bit and better understand the powerful tools we are building and the implications of their predictions, or plunge ahead at very high risk. Which path will we take?
Al Erisman is executive editor of Ethix, which he co-founded in 1998. He spent 32 years at The Boeing Company, the last 11 as director of technology. He was selected as a senior technical fellow of The Boeing Company in 1990, and received his Ph.D. in applied mathematics from Iowa State University.
Comments

Wasn’t this also some of the issue at Enron, where highly complex, hidden off-balance-sheet financing created a similar scenario, with a few players pushing for higher short-term numbers? Granted, it was not as technologically sophisticated, but it was still understood by only a few. I heard from, and was impressed by, Sherron Watkins this past week at Salem College here in Winston-Salem, on the tenth anniversary of Enron, which is what makes me think of this tie-in!
This also happens in consultative selling, where highly sophisticated models are applied for the benefit of the seller rather than the buyer, who is often unsophisticated. The result is often a highly complex technological solution sold and applied when a simpler, less costly fix was in order.
I just enjoy reading these articles!!
Evidently Toyota has come to a conclusion similar to the one I reached in this article. An article posted July 7, 2010 (http://finance.yahoo.com/news/Toyota-adding-more-time-to-apf-3145545797.html?x=0&sec=topStories&pos=8&asset=&ccode=) began:
TOYOTA, Japan (AP) — Toyota Motor Corp. is extending the time it takes to develop new vehicles by about four weeks for more quality checks in the wake of its massive safety-related recalls, a top executive said Wednesday.
Executive Vice President Takeshi Uchiyamada said the company has learned a lot from its recalls of more than 8.5 million vehicles worldwide, including the need to slow the pace at which it develops new cars.