May 16, 2012

Floating Big Data on GPU Clouds

Nicole Hemsoth

For those who thought that graphics processing units (GPUs) were the sole domain of gamers and scientific computing geeks, get ready for a landslide. The one pushing the mud first is NVIDIA, which today talked about democratizing high performance computing and bringing a broad new class of users into the GPU fold.

While just a few years ago, this might have been considered a gutsy claim from a little “gaming” company, things have certainly changed. Now, when the GPU maker behind a number of the world’s top-performing supercomputers (including the expected #1 machine for 2012) talks—folks tend to take notice.

The difference this year is that the breaking news isn’t aimed at one or two subsets of the sprawling universe of computing. It’s news that could, all bias in check, change the way we write, use and think about software, not to mention how we purchase and leverage hardware, especially the hardware that resides in datacenters.

Today during his keynote at the GPU Technology Conference, NVIDIA CEO Jen-Hsun Huang marked a significant milestone for the future of both consumer and enterprise computing—the world’s first virtualized GPU—and a shiny new platform to let users tap into the general purpose computing power of GPUs from anywhere.

In other words, starting this week, the souped-up Kepler GPU from NVIDIA might be making its way into your enterprise IT department, by way of a physical install during your next server refresh—or in virtualized form in a cloud near you.

During the event this morning, Huang described in detail NVIDIA’s own VGX virtualization platform, which could enable a new generation of low-powered devices to be freed from the constraints of local hardware by tapping into hyper-efficient cloud datacenters packed full of Kepler GPUs.

At the core of the VGX virtualization effort is a custom-crafted hypervisor layer, courtesy of a surprise partnership with Citrix (the company behind the popular open source Xen hypervisor). Huang says this lets them split a GPU into the equivalent of 256 virtualized GPUs, accessed through the Citrix Receiver client, which, despite its old-school name, looks like a desktop shortcut. To prove the power, he opened the Receiver app on an iPad, which then showed a Windows desktop backed by 1,536 CUDA cores, rendered perfectly.

This is big news. And it goes far beyond virtualized desktops or cloud-based gaming (although yes, let’s face it, that entails some awesomeness, especially for those sick of being tied to platforms that aren’t desirable for gaming or work). At its most basic level, this news is big for the simple fact that the general consensus has been that there was no way to virtualize a GPU, and that clouds could only float the unbearable lightness of the CPU.

For companies tackling real-time data at massive scale, needing to stream, collaborate, remotely review and carry around many a hosted terabyte, this offers something that a simple CPU-based cloud service can’t.

As Huang demonstrated today, it’s possible to carry around the virtual equivalent of a 40 terabyte work (in the demo’s case, a frame of a feature film) by tapping into the power of a virtual GPU. While nowhere near as capable as bare-metal use of the Kepler architecture, this is nonetheless handy for designers and, as one can imagine, anyone with large datasets to display or share off-site without massive compression and slow transfers.
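The bandwidth arithmetic explains why remote rendering is so attractive: streaming rendered pixels is orders of magnitude cheaper than moving the dataset itself. A rough back-of-the-envelope sketch (the link speed and stream bitrate below are illustrative assumptions, not figures from NVIDIA):

```python
# Back-of-the-envelope comparison: ship the dataset vs. stream rendered frames.
# All figures are illustrative assumptions, not NVIDIA-published numbers.

dataset_bytes = 40 * 10**12            # a 40 TB scene, as in the keynote demo
link_bps = 100 * 10**6                 # assume a 100 Mbit/s office connection

transfer_seconds = dataset_bytes * 8 / link_bps
transfer_days = transfer_seconds / 86400
print(f"Shipping the scene: {transfer_days:.0f} days at 100 Mbit/s")

# Streaming the rendered view instead: 1080p at 30 fps, H.264-class codec
stream_bps = 8 * 10**6                 # ~8 Mbit/s, a typical HD stream bitrate
print(f"Streaming the view: {stream_bps / 10**6:.0f} Mbit/s, continuously")
```

At those assumed rates, simply copying the scene would take over a month, while a remote GPU can render it and stream the result at ordinary video bitrates.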

When wrapped in the language of something as ho-hum as virtual desktops, it’s easy to miss the big picture of what this technology really means—that the real vision of the virtual desktop for designers, rendering companies and those whose business is tied to graphics has been realized.

But beyond the designers, who are no longer forced to find creative, painful ways to compress their multi-terabyte-per-frame animations down to size to transport around for viewing, there are others who should be taking note of these developments. This includes, well, pretty much everyone.

The “bring your own device” or BYOD movement has attracted attention of late among enterprise execs frustrated with the marriage to dedicated machines for workplace tasks. Not only does that marriage hinder flexibility, collaboration options, the sharing of non-static concepts and so on; it also runs against a trend that will happen whether or not companies are ready for it: users’ work and home devices are merging.

For many in high performance computing and latency-dependent businesses like financial services, the concept of using the cloud is still far out of reach due to performance limitations. But for the many businesses with analytics operations that require high performance hardware, decent (but not earth-shattering) performance, and the ability to handle data in real-time in a shared, collaborative environment, this could change the playing field.

“With computer graphics we now reach hundreds of millions of people,” Huang said. “With cloud computing we can literally put the power of the GPU into the hands of billions of mobile users around the world.”

Product-wise, the company announced two new GPUs based on the Kepler architecture, the Tesla K10 and the Tesla K20, which are designed to offer improvements over previous-generation efforts, including the company’s own Fermi architecture. This is part of what makes them datacenter-ready, according to NVIDIA, along with the unique approach to virtualization that Huang says was five years in the making.

The VGX technology virtualizes the GPU so that it can be shared across a number of CPUs and threads while tapping into a custom-built memory management feature.

As Adam Shah pointed out today, “GPUs have in the past been used for virtualization. For example, Nvidia and its rival Advanced Micro Devices have offered professional graphics cards for deployment of Windows 7 virtual desktop from servers to client devices. But with VGX, now the GPU can skip CPU cycles and directly deploy and manage virtual machines.”

Datanami