In 1966, with little fanfare, Douglas F. Parkhill published a book outlining “[a]lmost all of the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply)”.1 This book, The Challenge of the Computer Utility, published nearly a half-century ago, provides a vision for the future of computing, wherein it is treated just as any other utility (e.g., electricity, gas and phone): always available in an on-demand manner.
As generally envisaged, a computer public utility would be a general-purpose public system, simultaneously making available to a multitude of diverse geographically distributed users a wide range of different information-processing services and capabilities on an on-line basis. As in any utility, the overhead would be shared among all users, with each user’s charges varying with the actual time and facilities used in the solution of his problems. Ideally, such a utility would provide each user, whenever he needed it, with a private computer capability as powerful as the current technology permitted but at a small fraction of the cost of an individually owned system.2
This really sounds like Cloud Computing, doesn’t it?
However, I would put forward the idea that it’s really about something larger. It’s about making the ability to do computing ubiquitous. It’s about putting the tools into people’s hands to solve problems. It’s about the evolution of computing in the modern age. HPC is a big part of this—a foundational piece, in fact. HPC will serve as one of the main drivers in our evolution towards Douglas Parkhill’s vision.
In a general sense, defining “computing” can be rather difficult—not because it is unknown, but because the definition is so vast. Consider the following:
In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; processing, structuring, and managing various kinds of information; doing scientific studies using computers; making computer systems behave intelligently; creating and using communications and entertainment media; finding and gathering information relevant to any particular purpose, and so on. The list is virtually endless, and the possibilities are vast.3
I particularly like that last sentence: “The list is virtually endless, and the possibilities are vast.” The ways we use computers to solve the myriad problems facing humanity are mind-boggling. In a slightly tongue-in-cheek manner, a recent post of mine stated an ideal HPC administrator would need a working knowledge of “material sciences, life sciences, physics, astrophysics, nuclear physics, geology, manufacturing and design”.
Well… it was mostly tongue-in-cheek.
The uses for HPC centers are growing as more and more industries and companies find ways in which they can benefit from the power of supercomputing. At the same time, the non-HPC parts of scientific, technical and research computing are beginning to rely more and more on output from HPC. Some concepts in information computing and Big Data look eerily like some (please don’t take offense here) bastardized form of HPC. From a purely technical point of view, they aren’t, but the similarities are spooky.
However, what it all comes down to is tools. The purpose of an HPC center is to provide tools to the users to accomplish their tasks. Let’s talk about how I see this evolving in the future.
The TC Center
I believe in the future that many, though by no means all, of our HPC centers will actually evolve to be technical computing (TC) centers. By this I mean the hardware and software within the center will be dynamically re-purposed to suit the needs of the workload being run. The hardware that today is running a Hadoop MapReduce job will be running an MPI job tomorrow. Next week, it could be running a hypervisor with VMs for doing modeling and visualization. We’ll be adapting the center to meet the needs of the workload, not the workload to meet the needs of the center.
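To make the re-purposing idea concrete, here is a minimal toy sketch of a scheduler that re-images a node when its current “personality” doesn’t match what the next job needs. Everything here—the Node class, the reprovision call, the personality names—is a hypothetical illustration of the concept, not the API of any real resource manager.

```python
# Toy sketch of the TC-center idea: nodes are dynamically re-provisioned
# to match the "personality" (MapReduce, MPI, hypervisor, ...) that the
# current workload requires. All names here are illustrative only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    personality: str = "unprovisioned"  # what the node currently serves

    def reprovision(self, personality: str) -> None:
        # In a real center this would trigger a re-image/reboot cycle;
        # here we simply record the node's new role.
        self.personality = personality

def schedule(nodes, job_queue):
    """Assign each queued (job, needed_personality) pair to a node,
    re-imaging the node if its current personality doesn't match."""
    assignments = []
    free = list(nodes)
    for job_name, needed in job_queue:
        if not free:
            break  # no capacity left; remaining jobs stay queued
        node = free.pop(0)
        if node.personality != needed:
            node.reprovision(needed)  # adapt the center to the workload
        assignments.append((job_name, node.name, node.personality))
    return assignments

nodes = [Node("n01"), Node("n02", "mpi")]
queue = [("genome-assembly", "mapreduce"), ("cfd-run", "mpi")]
print(schedule(nodes, queue))
# → [('genome-assembly', 'n01', 'mapreduce'), ('cfd-run', 'n02', 'mpi')]
```

Note that n02 already matches its job’s needs and is handed over untouched, while n01 is re-imaged first—the point being that the decision is driven by the workload, not by a fixed silo the node was born into.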
As they compete with the ever-growing number of online offerings, many HPC centers will be driven to continue to provide new offerings to their end users. Some of these offerings may have little technically in common with HPC, but they are compute workloads nonetheless. The pressure will be there to grow and expand.
Do I think this will happen soon? No—at least not fully.
There are many, many issues that will need to be solved before a full transformation occurs—some technical, some practical.
Depending on usage patterns, there can currently be a significant cost differential between an HPC node, with its storage and networking, and a hypervisor for serving VMs. This difference in hardware needs tends to create artificial silos in our mental models of how computing centers need to be architected.
It’s going to happen sooner than we think.
There will always be a need for bespoke HPC clusters; the TC center isn’t a replacement for them. Rather, the idea is to turn the HPC center into a workshop comprising many different tools for solving technical and scientific questions.
HPC evolves to TC.
I’m going to be discussing this topic in more detail at SC’13 later this month. If you are joining us in Denver for the show, please stop by the session Thursday afternoon (4:00pm) and share your thoughts on the subject.
Alternatively, you can find me at our booth (#3113). I’d love to discuss this with you further.
1. http://en.wikipedia.org/wiki/Cloud_computing#The_1960s.E2.80.931990s. Accessed 4 Nov 2013.
2. Parkhill, Douglas F. The Challenge of the Computer Utility. Massachusetts: Addison-Wesley Publishing Company, 1966. p. 3. Print.
3. ACM, The Joint Task Force for Computing Curricula 2005. Computing Curricula 2005: The Overview Report. http://www.acm.org/education/curric_vols/CC2005-March06Final.pdf. Accessed 4 Nov 2013.