December 5, 2011

Graphing Genius and Giants

Cydney Stevens

Steve Wallach is a recognizable “character” in the hallowed halls of high-performance computing and an acknowledged innovator of our time. His is a name on a very small and exclusive list of geniuses and giants.

In 2010, he was recognized by HPCwire as a “Person to Watch” for his contributions to hybrid computing architectures. This followed his receiving the Seymour Cray Award in 2008 for his overall contributions to high-performance computing through the design of innovative vector and parallel computing systems, and for his distinguished industrial career and acts of public service.

What makes Wallach intriguing is his insatiable pursuit of making computing architectures “interesting” and his spirited willingness to defy the norms and perils of being a technology innovator. In the current era, he takes technology and makes it “compelling” beyond its standard design and base capabilities. When the market has centered almost everything on the same basics, finding something different is as refreshing as it is inspiring. Wallach has a natural propensity to design ingenious system architectures that are as much about delivering value-based performance as they are sophisticated and elegant in their design.

Naturally, at SC11 this year, Wallach and his Convey team were a great point of interest to attendees in terms of what they would be showing and revealing at the Seattle conference. The “can do!” spirit of Wallach and his team extends through the company’s core values and mission statement, reflected in three simple promises: to make it easy for programmers, developers, and system administrators to use their product and to be their most enthusiastic fans; to design and build the most reliable and cost-effective system humanly possible; and to radically change the landscape and put performance back into high-performance computing.

“The fastest growing application space is data-intensive computing. Why? Because it is part of ‘everything is online and accessible.’ Data-intensive computing is mining an unstructured database or sequencing a gene. These database sizes are huge, ranging from gigabytes to petabytes. Architectures optimized for this space must first have a very high-speed memory system optimized for random memory references. We find ‘big data’ architecturally exciting,” says Wallach.
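To make the memory-system point concrete, here is a minimal sketch in C (an illustration, not Convey code; the array size and loop bounds are assumptions chosen for demonstration) contrasting a streaming pass, which caches and hardware prefetchers handle well, with the random “gather” pattern Wallach describes:

    /* Sequential vs. random memory access -- illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 24)  /* 16M elements, far larger than any cache */

    int main(void)
    {
        long *data = malloc(N * sizeof *data);
        long *idx  = malloc(N * sizeof *idx);
        if (!data || !idx) return 1;

        for (long i = 0; i < N; i++) {
            data[i] = i;
            idx[i]  = rand() % N;   /* random indices defeat prefetching */
        }

        long seq = 0, rnd = 0;
        for (long i = 0; i < N; i++)
            seq += data[i];         /* streaming: predictable, prefetchable */
        for (long i = 0; i < N; i++)
            rnd += data[idx[i]];    /* gather: each load is a likely cache miss */

        printf("seq=%ld rnd=%ld\n", seq, rnd);
        free(data);
        free(idx);
        return 0;
    }

On commodity hardware the gather loop typically runs several times slower than the streaming loop, since nearly every load misses cache; architectures aimed at data-intensive workloads attack exactly that gap.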

And while many are familiar with Wallach’s impressive list of accomplishments and his current pursuits as chief scientist and co-founder of Convey Computer, what will be interesting to watch over time is what he does as a result of an emerging new system measurement test suite, focused on data-intensive supercomputing applications, published as the Graph 500. It is a fundamental mission he understands well, and one that is right up this innovator’s alley. “For data-intensive computing we need a benchmark such as the TOP500,” explains Wallach. “The Graph 500 benchmark typifies the data-intensive computing environment where memory references are random in nature, fine-grain synchronization is needed for thread support, and the compute aspect is dominated by compares and branches. When exascale performance is achieved by 2020, the fastest systems on the planet will be measured by the suite of Graph 500 benchmarks.”
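The timed kernel at the heart of the Graph 500 suite is a breadth-first search over an enormous synthetic graph. A minimal sketch of that kernel, assuming a toy six-vertex graph in compressed sparse row (CSR) form rather than the benchmark’s Kronecker generator, shows why the memory references are so random:

    /* Breadth-first search, the Graph 500 kernel, on a toy graph. */
    #include <stdio.h>

    #define NV 6

    /* Undirected six-vertex graph in CSR form: row[v]..row[v+1] indexes
     * v's neighbors in col[]. The real benchmark builds a scale-free
     * Kronecker graph with billions of edges. */
    static const int row[NV + 1] = {0, 2, 5, 7, 9, 11, 12};
    static const int col[12]     = {1, 2,  0, 2, 3,  0, 1,  1, 4,  3, 5,  4};

    int main(void)
    {
        int parent[NV], queue[NV], head = 0, tail = 0;
        for (int v = 0; v < NV; v++) parent[v] = -1;

        int root = 0;
        parent[root] = root;
        queue[tail++] = root;

        /* The hot loop: each col[] lookup and parent[] probe lands at an
         * effectively random address -- exactly the access pattern Wallach
         * says the memory system must be built for. */
        while (head < tail) {
            int u = queue[head++];
            for (int e = row[u]; e < row[u + 1]; e++) {
                int v = col[e];
                if (parent[v] == -1) {
                    parent[v] = u;
                    queue[tail++] = v;
                }
            }
        }

        for (int v = 0; v < NV; v++)
            printf("vertex %d: parent %d\n", v, parent[v]);
        return 0;
    }

Running searches like this over billions of vertices, and scoring machines in traversed edges per second (TEPS) rather than floating-point operations, is what distinguishes the Graph 500 from the TOP500 that Wallach invokes.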

On the IT side of the house, as on the research side, following what “interests” innovators like Wallach, such as his interest in emerging activities like the Graph 500, is important. These interests can be a great “tell” about what the future may hold, from subtle changes in the industry to the continued advancement of system architectures and supercomputers. Indeed, to put the performance back in high-performance computing and the super back in supercomputing.
