January 26, 2012

Fusion-io Flashes the Future of Storage

Nicole Hemsoth

It has been quite a year for Fusion-io, the Salt Lake City-based purveyor of a new type of storage memory platform that moves time-critical data closer to the CPU for more rapid processing.

At the start of the new year, the company made a few key predictions about how the storage and big data markets will evolve—noting a rise in the need for real-time capabilities, a growing focus on “small data,” and the evolution of the LAMP stack to incorporate clouds. If these turn out to be correct, this could be another stellar year for the rapidly growing company.

Then again, there was little doubt that the company would grow, given its position at the top of “startups most likely to succeed” lists from Business Week and others over the last few years. Some of the initial attention around the company was due in part to Apple co-founder Steve Wozniak’s decision to come on board with Fusion-io as chief scientist. These days, however, the solid state storage company is standing on the merit of its own technology with big customer wins, including Lawrence Livermore National Laboratory and a long list of web-based retail outfits.

The company was a very visible presence at the supercomputing community’s largest show (SC11) in Seattle this past year and talked with us at length about both its high performance computing and enterprise strategies, as well as how the two mesh together in its approach to storage memory. Fusion-io product manager Vincent Brisebois provided an overview of the company’s focus for its supercomputing audience:

This week, the company released second quarter fiscal 2012 financial results, revealing a revenue boom over the last calendar year. Fusion-io grew 169 percent, from $31.2 million to $84.1 million, over the course of the year.
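For reference, the growth figure checks out against the two revenue numbers cited; a quick back-of-the-envelope calculation:

```python
# Year-over-year growth from the two quarterly revenue
# figures cited in the article (in millions of dollars).
revenue_prior = 31.2
revenue_current = 84.1

growth_pct = (revenue_current - revenue_prior) / revenue_prior * 100
print(f"{growth_pct:.1f}%")  # roughly 169.6%, reported as 169%
```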

As the company’s CEO and co-founder David Flynn said of the company’s recent one billion IOPS demonstration, “Rethinking how to provide powerful modern CPUs with the data they need through sophisticated software architectures has enabled us to deliver the ultra low latency performance needed to achieve one billion IOPS with existing hardware and Fusion ioMemory solutions.”

He continued, noting that “This breakthrough is not something that could be achieved with hardware alone. Intelligent software that optimizes NAND flash as a low latency, high-capacity, non-volatile memory solution for enterprise servers can transform the way organizations process the immense amounts of data that powers our lives today.”

The company’s pitch hinges on this movement of data closer to the CPU, which it calls “shared data decentralization.” The approach seems to be winning converts in both research and enterprise high performance computing environments—both of which often require low latency matched against massive datasets.

In essence, their storage memory platform addresses what Fusion-io defines as a “data supply problem” via a combined hardware and software approach. During the annual Supercomputing Conference (SC11) in Seattle this past November, the company was on a soapbox against traditional mechanical storage architectures, which it says can’t keep up with new processor technology. As the company claims, “despite major advances in technology, data processing was limited to the speed of legacy storage infrastructures. Without a solution designed to maximize their capabilities, modern CPUs sit idle while they wait for the data to process.”

In turn, their platform gives servers native access to data to speed application and database operations, which the company says can dramatically reduce storage costs in terms of both TCO and energy consumption.

To put all this in context, earlier this month Fusion-io announced that it hit the one billion IOPS mark during a preview of its Auto Commit Memory (ACM) extension, which is part of its core product, the ioMemory subsystem. The company also released news about how it helped Lawrence Livermore National Laboratory capture a ranking on the June Graph500 list and improve the efficiency of its supercomputers.

Kraken, LLNL’s heavy-hitting supercomputer, is based on a single four-socket server. While other systems that ranked on the coveted supercomputing list had hundreds or thousands of nodes, Kraken solved a scale 34 problem (its rating) from a single node. The lab’s new superstar super, Leviathan, handles a graph of 68,719,000,000 vertices on a four-socket Intel server (four times the size previously attained). According to Fusion-io, “Leviathan can traverse 52.796 million edges per second. The problem scale rivals the June submission of Franklin, a 4000 node Cray system at NERSC.”
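To put those figures in context: a Graph500 “scale” is a base-2 logarithm of the vertex count, so a scale-S problem has 2^S vertices and, at the benchmark’s default edgefactor of 16, about 16 × 2^S edges. The 68.7 billion vertex figure corresponds to scale 36, four times Kraken’s scale-34 graph. A minimal sketch of the sizing arithmetic (reading Leviathan’s run as scale 36 is our inference from the vertex count):

```python
# Graph500 problem sizing: scale S means 2**S vertices; the
# reference benchmark generates edgefactor (default 16) edges per vertex.
def graph500_size(scale, edgefactor=16):
    vertices = 2 ** scale
    edges = edgefactor * vertices
    return vertices, edges

kraken_vertices, _ = graph500_size(34)     # Kraken's scale-34 run
leviathan_vertices, _ = graph500_size(36)  # inferred scale for Leviathan

print(f"{kraken_vertices:,}")     # 17,179,869,184
print(f"{leviathan_vertices:,}")  # 68,719,476,736 -- the ~68.7 billion cited
print(leviathan_vertices // kraken_vertices)  # 4: "four times the size"
```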

According to Christian Shrauder, who oversees Federal Sales Engineering at Fusion-io, “while most supercomputers will distribute the graph into DRAM on hundreds or thousands of nodes, Leviathan relies heavily on memory that is 10x more dense than DRAM.” He says that by using ioMemory (Fusion ioDrive Duos), the super was able to pack 12 TiB of NAND flash onto one server.

Shrauder says that with the high capacity, low latency ioMemory, large graphs can be processed on a system that is orders of magnitude cheaper and denser than the alternative methods. As he says, “Imagine the initial investment cost savings of a multi-hundred node supercomputer compared to a single four socket server loaded with ioMemory. A supercomputer can easily tip the scales with a multimillion dollar price tag, whereas a single server would cost just over $200,000. Then consider the yearly power and cooling cost difference between the two systems. Then consider if every scientist had access to a Leviathan at their desk and how much more productive they could be.”

According to Fusion-io CFO Dennis Wolf, the next year could deliver similar results as the company transitions to its next-generation ioDrive2 product. But financial success aside, it’s worth noting that Fusion-io has made some rather eye-catching news in the last few months.

Datanami