
TechWatch: SuperComputing 2005 — Hype Meets Geek

Seattle, Washington • November 2005

The international convention was a cross between the academy and an auto show. It had more than 9,000 registered attendees and 200,000 square feet of exhibit space featuring 265 exhibitors from industry and research. Exhibits included hyped presentations introduced with the words, “Step right up for a chance to win a cruise in our drawing at the end of the presentation.” Free food and drinks were in abundance. So were giveaways, including the little ball that lights up when it’s bounced, with the help of an Intel chip inside — my grandson loved it.

At the Sun booth, a magician performed tricks for a large crowd. “The company spares no expense in reaching these attendees,” said Rich Brueckner, Sun’s marketing manager for high-performance computing, as reported in The Seattle Times, November 16, 2005.

“For a lot of them, this is the only show they go to and they are shopping for multimillion-dollar computer systems.”

In parallel, 62 papers, selected from four times that many submissions, were presented in traditional style with PowerPoint slides and serious tones. An overflow crowd heard Bill Gates present the keynote lecture, “The Future of Computing in the Sciences.” Awards for best papers were given in the names of supercomputing luminaries such as Seymour Cray and Sid Fernbach. Six papers at the conference were nominated for the Gordon Bell Prize, recognizing outstanding achievement in high-performance computing. SC Global streamed the conference to 44 sites in 10 countries on six continents.

I attended the conference for two reasons: The first was pure nostalgia. I used to do research in this field and speak at the conferences. I remember having lunch in Colorado with Seymour Cray, truly the father of supercomputing. I used to be on a review committee at Los Alamos with Sid Fernbach. I first met Gordon Bell at a conference at Cornell University in the 1980s. Like any other field of study, supercomputing forms an interest group where lifelong friends are made.

Direction for Computing

The second, more important reason for attending the conference is that supercomputing is the wedge to the future. It is here that the envelope is pushed for more speed, more memory, more bandwidth, and more things done in parallel. One booth even flew a banner with the statement generally attributed to William Gibson: “The future is already here — it’s just unevenly distributed.” Many ideas from this conference will find their way into common use in the future.

It’s in supercomputing that you hear the term “teraflops,” meaning trillions of arithmetic calculations per second. Even “petaflops” — 1,000 teraflops. Many of the researchers discussed problems with terabytes of data (trillions of bytes of information). And in the name of speed and pushing boundaries, more and more work is being done at the same time by dividing it into pieces and spreading it across many computers (strung together with networks to form clusters) or many processors on a chip (micro-parallelism).
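Purely as an illustration of that divide-and-conquer idea (not anything shown at the conference), here is a minimal Python sketch in which a large sum is split into pieces and handed to several worker processes. A cluster does essentially the same thing across separate machines, with a network carrying the pieces and the partial answers.

```python
# Minimal sketch of parallel decomposition: split a big job into pieces
# and let several workers (cores on a chip, or machines in a cluster)
# compute them at the same time, then combine the partial results.
from multiprocessing import Pool

def partial_sum(chunk):
    """The work assigned to one worker: here, summing squares."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    # Divide the work into roughly equal pieces, one per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(partial_sum, chunks)  # pieces run in parallel
    print("total:", sum(results))                # combine the partial answers
```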

IBM’s exhibit featured the Blue Gene/L — a computer with 32,768 processors, alleged to be the fastest computer in the world. The Riken Super Combined Cluster System from Tokyo was there, promising a peak performance of one petaflops by 2013. Aspen Systems, from Colorado, was promising affordable high-performance clusters (and giving out those bouncing balls with the chip inside). Boeing, government research labs, and university research labs were on the floor showing promising applications of supercomputer systems. And, of course, exhibitors included Cray, HP, Microsoft, Sun, and a myriad of others.

Why does anyone need this much power when most desktops are as powerful as supercomputers from the 1980s and more powerful than most applications require? Computer scientists are tempted to answer: Just because you can. But there are lots of other reasons to push this envelope. The reasons range from games (a big and growing business) to serious engineering and scientific computations.

Applications for This Power

The predecessor to IBM’s Blue Gene, the supercomputer Deep Blue, defeated Garry Kasparov, the reigning world chess champion, in 1997. This led to a series of hand-wringing articles on the future of the human race when a mere computer could defeat the best of the best in a “thinking game.” The Riken group had a display where participants could put on goggles, pick up a bat, and face a virtual “pitcher” in 3D baseball. (I blamed my poor hitting on the system alignment rather than my skill!)

More serious computational feats, though not necessarily more difficult, were also demonstrated. The Boeing booth showed very complex calculations associated with airplane performance. Today’s passenger and military airplanes could not be designed without the aid of supercomputers, which have now largely replaced costly and time-consuming testing and physical mockups. The National Center for Atmospheric Research demonstrated models for improved weather forecasting and climate studies. There were also studies of pharmaceutical drug design and petroleum exploration.

Two intriguing talks in the High Performance Analytics section focused on earthquakes and traffic. Moustafa Ghanem, from Imperial College London, showed progress in understanding the earth’s movement relative to earthquakes. We are still far away from being able to predict earthquakes, he said, but the more detailed models made possible by supercomputing hold out promise for the future. Robert Grossman, from the University of Illinois at Chicago, looked at traffic patterns from sensor data — lots of it. He had access to data from 830 traffic sensors around Chicago, some 170,000 readings per day, collected over a 10-month period. The goal of his team was to better understand traffic flow, even to the point of predicting accidents.

There are some common factors in these diverse applications. All of them continue to push the limits on computer performance. Many still require days of computing power from the most powerful computers in the world. Brute-force computation is not adequate — scientists are required to create very efficient algorithms in order to do these calculations at all.

Understanding the Challenge Ahead

This work affects computing in general — not just scientific computing. In his talk, Gates said, “Parallelism is the key to all continued advancement in computing.” Clock speeds are approaching their limit, so keeping up with Moore’s law for computer performance will require using clusters of computers and exploiting more and more processors on a single chip.

Jack Dongarra, distinguished professor at University of Tennessee, got more specific on this point. Heat on the chips is becoming an unmanageable problem, so more speed won’t be achieved with faster clock speeds, he said. With many processors on a chip, speed for calculations will continue to increase at 60 percent per year, but rates for moving data from storage to these processors will increase at only 23 percent per year. This means that unless the software in common use today is redesigned to take advantage of the new architecture, performance advances will slow dramatically. The power will be there, but the ability to use it will not be — just like a sports car following the traffic rules on city streets.
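To see why that imbalance matters, here is a back-of-the-envelope Python sketch that simply compounds Dongarra’s two growth rates; the ten-year horizon is my own assumption for illustration. After a decade, raw arithmetic speed has pulled ahead of data movement by more than a factor of ten, which is why software that keeps feeding processors the old way will leave most of the new power idle.

```python
# Compounding Dongarra's figures: arithmetic speed grows ~60% per year,
# data movement only ~23% per year, so the gap between them widens fast.
# The ten-year horizon is an assumption chosen purely for illustration.
compute_growth = 1.60   # 60% faster each year
memory_growth = 1.23    # 23% faster each year

gap = 1.0
for year in range(1, 11):
    gap *= compute_growth / memory_growth
    print(f"year {year:2d}: compute outpaces data movement by {gap:4.1f}x")
```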

More than number crunching is required for these models, Gates said. “The goal is to reduce time to insight, not just the time to calculate some complex model,” he explained. This requires careful organization of the entire workflow, whether in state-of-the-art scientific calculations or traditional business systems. Scientists have long used small computational kernels as benchmarks of supercomputing performance. But following Gates’ argument, we need to go beyond these benchmarks to measure progress.

A User Perspective

All of this is quite exciting for the technology community. Many new technical challenges are on the horizon and there is no concern that the frontier has been tamed. Users may be less excited.

First, moving forward with applications will be a challenge if users need to keep up with improved performance potential in the hardware. Redesigning applications will be costly, and while new opportunities will open up, the path forward will not be smooth. Of course, users can “opt out,” saying they are satisfied with what they are able to do with today’s computing capability. Unfortunately, in a competitive world the strategy of opting out of new technology only works when your competitors also choose to opt out. These advances will create new winners and losers among both applications developers and users.

Second, understanding the results of these massive computations will be a growing challenge. In the recent past, there was a view espoused by several of my former bosses at Boeing that one should fully understand the way a system works before one automates it. But today’s high-performance computing makes it possible to do things that cannot be verified in the old way. New strides in verification will be a challenge for all users of these more complex systems.

Third, the complexity and interconnectedness of these new systems will raise the stakes regarding what happens when systems fail. As we become much more dependent on such systems, systems reliability becomes an even bigger issue. So does security to protect the integrity of the computing. A hacker’s capacity to do damage grows with the complexity of the systems, and detecting intrusions will be more difficult than ever.

Conclusion

Supercomputing offers a window on an exciting new world that will impact all computing and, hence, all users of computers. The changes ahead offer significant performance potential, opening up new opportunities. But finding value there will not be a smooth transition from the computing of today. Along with the opportunities, there will also be many downsides, expected and unexpected. This would be a good time to make sure your company has committed sufficient resources to looking to the future without sacrificing the effort needed to keep its operations running today.


Al Erisman is executive editor of Ethix, which he co-founded in 1998. He spent 32 years at The Boeing Company, the last 11 as director of technology. He was selected as a senior technical fellow of The Boeing Company in 1990, and received his Ph.D. in applied mathematics from Iowa State University.
