
July 25, 2013 by Trev Harmon (REPOST)
July 25, 2013 at Adaptive Computing (ORIGINAL)

Why Individual Metrics Like Linpack Aren’t the Future

[Image: Numbers 7/52]

Before we get going here, I need to say I’m not implying metrics like Linpack aren’t useful. They are.

The Coming of HPCG

Obviously, this post is inspired by some of the recent statements by Jack Dongarra, the original creator of Linpack back in the 1970s, and the resulting chatter.

Linpack rankings of computer systems are no longer so strongly correlated to real application performance.
~ Jack Dongarra

He is essentially saying that, as a single number, Linpack is not a true measure of performance and that relying on it as such is a bad idea. Nearly the same sentiment is expressed in the Top500 list's own explanation of Linpack, which is interesting, since they use this metric to create the list.

For the TOP500, we used that version of the benchmark that allows the user to scale the size of the problem and to optimize the software in order to achieve the best performance for a given machine. This performance does not reflect the overall performance of a given system, as no single number ever can. It does, however, reflect the performance of a dedicated system for solving a dense system of linear equations.

That would be great if all of our HPC workloads were dedicated to “solving a dense system of linear equations,” which they aren't. Instead, the workload mix varies widely from site to site.

In its place, Dongarra and Michael Heroux of Sandia National Laboratories are suggesting the use of a new benchmark named High Performance Conjugate Gradient (HPCG). It appears that come November, HPCG will appear as an additional column in the Top500 list. So, we aren’t getting rid of Linpack just yet.
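
For those who haven't looked at it, the character of the new benchmark is easy to see in code. Below is a minimal, illustrative sketch in Python/NumPy of the kind of kernel HPCG exercises; this is not the official HPCG source, and the problem size and tolerance are arbitrary. The point is that an iterative conjugate gradient solve on a sparse system stresses memory bandwidth and communication patterns, rather than the dense floating-point throughput Linpack rewards.

```python
# Illustrative sketch only -- not the official HPCG benchmark code.
import numpy as np
import scipy.sparse as sp

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a sparse, symmetric positive-definite A using plain CG."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                        # sparse matrix-vector product (SpMV)
        alpha = rs_old / (p @ Ap)         # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p     # new search direction
        rs_old = rs_new
    return x

# Toy 1-D Poisson problem: tridiagonal, sparse, symmetric positive definite.
n = 100
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```

Linpack, by contrast, times the factorization and solution of a large dense matrix, which is dominated by cache-friendly floating-point work. The two benchmarks stress very different parts of a system.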

I view this as a good thing. The more information one has, not only about the systems on the Top500 but also about one's own, the better (as I assume everyone at least tries these benchmarks once).

When I was at university finishing up my master's program, one of the potential thesis topics I explored was how different hardware choices affected HPC benchmark results on Beowulf clusters. I was a bit naïve at that point, as my understanding was still sophomoric at best. What I've come to understand since is the degree to which applications, and the application mix, affect a cluster's behavior. Thinking it was all about the hardware was a bit myopic.

The more our community gravitates toward this world view, the better off we will be, in my opinion. Now, to be fair, not everyone agrees with this, nor is everyone particularly enamored with the idea of a new metric. I think this gets us into a discussion of why we even have the Top500 list.

The Purpose of the Top500 Ranking

One thing everyone always looks forward to at Supercomputing (SC) and ISC is the announcement of the new rankings. Leading up to this, there is much speculation and discussion about how the list is going to change. Who is going to be #1? It's one of those cultural symbols that has become part of our community. But why do we have the list? What is its purpose?

I think there are several valid answers:

  • It shows us our progress as a community.
  • We learn from each other's systems.
  • We like the competition.

This is where having a new metric gets interesting. For the first two, there’s really little effect (especially if we keep Linpack around for the foreseeable future). The third item is a little trickier.

The Top500 list bestows bragging rights not only on the organizations that own the top supercomputers, but also on the countries and the hardware and software vendors who built those systems.

I should know… we are one of those vendors. Adaptive Computing's Moab software manages not only many current and past Top500 systems, but also many of the Top10 systems. Do we use it for bragging rights? Of course we do. I'm doing it now in a not-so-subtle way. It's human nature.

The problem is that adding another metric throws into doubt who really has the fastest system. Arguments are sure to ensue. But from my point of view, those arguments are probably healthy, and we really should be focusing mainly on the first two purposes on the list, with a much smaller emphasis on the last.

We have reached a point where designing a system for good Linpack performance can actually lead to design choices that are wrong for the real application mix, or add unnecessary components or complexity to the system.
~ Jack Dongarra

With us now spending millions upon millions of US dollars on these systems, let’s make sure we are moving in the right direction. We need to be matching the system to its workload.

In reality, this debate is by no means new. When GPGPU-based systems were introduced into the Top500 ranking, the age-old debate over Capacity vs. Capability was once more reignited. Today's debate is just another reincarnation of those same arguments.

Other Suspect Metrics

Now, I brought up the Linpack vs. HPCG debate because it's front-and-center in the HPC community. But we are faced with many of these individual metrics daily. Marketing departments have no issue trotting out the metrics that show their hardware or software in the best light. Again, I do not fault them for doing so. That's their job.

[Image: Data Center]

We just need to pay attention.

One metric that gets tossed around that I have an issue with is PUE (Power Usage Effectiveness), the ratio of a facility's total energy use to the energy delivered to its IT equipment. In some ways, like Linpack, it is useful for doing “Then and Now” comparisons of a single facility, but using it to compare two different data centers or clusters is just plain silly.

Because it lacks any notion of the amount or importance of work being completed, it's comparing apples to oranges. I could have a PUE rating that is 0.2 points better than my neighbor's. But if they are doing five times the work, I'm the one that's likely being wasteful from an environmental point of view. I'm probably getting far less done for the amount of resources I'm using.

The metric doesn’t tell me that.
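
To make that concrete, here is a toy comparison in Python; the energy and job numbers are invented purely for illustration.

```python
# Hypothetical numbers for illustration only.
def pue(total_facility_kwh, it_equipment_kwh):
    # PUE = total facility energy / energy delivered to IT equipment
    return total_facility_kwh / it_equipment_kwh

mine = {"total_kwh": 1300.0, "it_kwh": 1000.0, "jobs_done": 100}      # PUE 1.3
neighbor = {"total_kwh": 1500.0, "it_kwh": 1000.0, "jobs_done": 500}  # PUE 1.5, five times the work

for name, dc in [("mine", mine), ("neighbor", neighbor)]:
    p = pue(dc["total_kwh"], dc["it_kwh"])
    kwh_per_job = dc["total_kwh"] / dc["jobs_done"]
    print(f"{name}: PUE = {p:.2f}, energy per unit of work = {kwh_per_job:.1f} kWh")

# mine:     PUE = 1.30, energy per unit of work = 13.0 kWh
# neighbor: PUE = 1.50, energy per unit of work =  3.0 kWh
```

My PUE looks 0.2 points better, but per unit of useful work my neighbor's facility spends far less energy. PUE alone can't surface that.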

Another metric to be wary of is utilization. All by itself, utilization only tells us how many of the machines are busy. It doesn’t tell us if they are doing anything important. True optimization really has two parts:

  1. Making sure the system is utilized
  2. Making sure the most important work is done first

These two points need to be kept in balance. Now, the exact nature of that balance is site specific. It depends on the workload, the hardware and the organization's objectives (as well as politics, sometimes). A good scheduling and management system will provide the breadth of policies necessary to enable the correct balance.
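
As a hypothetical sketch of why utilization alone isn't enough (the job records and priority weights below are made up, and this is not how any particular scheduler, Moab included, reports its accounting):

```python
# Hypothetical job records: (node-hours consumed, site-assigned priority weight).
jobs = [
    (500, 1.0),   # flagship science run
    (400, 0.2),   # low-priority parameter sweep
    (100, 0.1),   # someone's pet benchmark
]
available_node_hours = 1200

utilization = sum(nh for nh, _ in jobs) / available_node_hours
weighted_value = sum(nh * w for nh, w in jobs) / available_node_hours

print(f"utilization:             {utilization:.0%}")     # ~83% -- looks great
print(f"priority-weighted value: {weighted_value:.2f}")  # ~0.49 -- a fuller story
```

The machine looks busy either way; only the second number hints at whether those busy cycles went to the work the site actually cares about.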

As an aside, I have a secret fantasy of one day being able to submit a job onto one of the largest supercomputers in the world that only plays multi-dimensional Pong with itself. For some reason, I just find that to be a funny, happy thought, though I doubt the admins would share the same sentiment.

Final Thoughts

So, in conclusion, I think there are just a couple of main points I’d like to reiterate.

First of all, even though it sounds like some feel-good, self-affirmation mumbo jumbo, all of our systems are different, and that’s okay. Let’s design and build for the differences, not some magical number.

To be me is to be different…
~ Robert Fanney

Second, the more information one has, the better. Single metrics really aren't all that helpful. It's like having an average without the standard deviation. It tells you something, but you can't be completely sure what. Also, make sure you interpret metrics in their proper context.
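
As a trivial illustration of the average-without-standard-deviation point (the runtimes are hypothetical):

```python
import statistics as st

stable = [99, 100, 101, 100, 100]   # hypothetical job runtimes (s)
erratic = [10, 190, 55, 145, 100]   # same mean, very different behavior

for name, runtimes in [("stable", stable), ("erratic", erratic)]:
    print(name, "mean =", st.mean(runtimes), "stdev =", round(st.stdev(runtimes), 1))

# stable:  mean 100, stdev ~0.7
# erratic: mean 100, stdev ~71
```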

Information is the oxygen of the modern age.
~ Ronald Reagan

Lastly, participate in the discussions. We need to have them as a community. If you disagree with me, that’s fine. Let’s talk about it. Together we build the HPC and supercomputing communities. All of us are needed.


Read the Sandia Report: Toward a New Metric for Ranking High Performance Computing Systems


Images courtesy of Janet Ramsden and Bob Mical.


