Big Data I/O Benchmark Gains Steam
The canary in big data's coal mine, IOPS, is rising in popularity as the benchmark of choice for comparing storage offerings, in part because of the increasing attention these figures have been receiving. It is also driven by users realizing what the true cost of their data bottlenecks is. Since time is money and big data involves plenty of both, several companies are addressing the I/O issue.
Back in January, I/O drive manufacturer Fusion-io became the first to break the one-billion-IOPS ceiling, at the DEMO Enterprise event in San Francisco. The feat was accomplished with 64 ioDrive2 Duos installed in eight HP ProLiant DL370 servers. For comparison, a single ioDrive2 Duo can handle 700,000 read IOPS and 900,000 write IOPS.
Fusion-io Chief Scientist and Apple co-founder Steve Wozniak said of the technology, “Instead of treating flash like storage, where data passes through all of the OS kernel subsystems that were built and optimized for traditional storage, our core ioMemory technology offers a platform with new programming primitives that can provide system and application developers direct access to non-volatile memory.”
Although Fusion-io was the first to break the billion-IOPS barrier, others in the storage market are stepping up with their own unique claims to I/O fame. For instance, flash-based storage manufacturer Virident, in conjunction with NEC, today announced a system that can handle 1.2 million I/O operations per second.
The Virident and NEC achievement was made possible with eight 1.4TB FlashMAX SCM drives installed in a single 80-core NEC Express5800/A1080a “GX” server, using 8k blocks. Virident’s FlashMAX drives are single PCIe cards that store anywhere from 300GB to 1.4TB and claim performance higher than 1.4 million IOPS. Virident is targeting enterprise datacenters with these devices.
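An IOPS figure on its own says nothing about bandwidth; multiplying by the block size gives the sustained data rate. As a quick back-of-the-envelope check of the numbers above (a hypothetical helper, not a vendor tool):

```python
def iops_to_bandwidth_gbps(iops: float, block_bytes: int) -> float:
    """Convert an IOPS rate at a given block size into throughput in GB/s
    (decimal gigabytes). Bandwidth = operations/sec * bytes/operation."""
    return iops * block_bytes / 1e9

# Virident/NEC demo: 1.2 million IOPS at 8 KiB blocks.
print(iops_to_bandwidth_gbps(1_200_000, 8 * 1024))  # ~9.83 GB/s
```

The same arithmetic explains why vendors quote the block size alongside the IOPS number: the identical drive posts far higher IOPS at small blocks than at large ones, even when the underlying bandwidth is unchanged.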
Increasing processor performance has always been essential to high performance computing applications, but outside of big data circles, the topic of I/O performance just doesn’t seem to get much airtime.
With Moore’s Law in mind, there’s no question everyone wants the computational equivalent of the fastest car. Still, a faster car is no guarantee the driver will arrive sooner: however fast the car is in theory, any number of barriers and slow-downs can stand in the way. To stretch the analogy, I/O speeds are the department of transportation, which has been unable to keep up with the influx of all this new traffic. It doesn’t matter how quickly a processor works if the I/O is unable to handle the demand.
This is where I/O drives come into the picture. Their main objective is to deliver data performance, opening up more lanes on the road so that processors can get closer to their peak potential while overall bytes per second increase. The drives are a little unorthodox: they connect directly to the PCIe bus to bring the memory closer to the processor, and they are not currently bootable.
The IOPS race is heating up, not just in terms of proving performance but in the number and variety of vendors cropping up to address these data movement challenges. From traditional HPC storage vendors to upstarts like Virident (which arguably walks the line between the two), IOPS is set to be a hot issue in 2012.