October 22, 2013

SGI Aims to Carve Space in Commodity Big Data Market

Isaac Lopez

Commodity hardware has largely been the palette on which big data implementations have been painted, with Google's early 2000s mantra of it being "fast and cheap" becoming a near cultural staple of the emerging big data trend. Recently, non-commodity, high performance computing players have seen an opportunity to add their canvases to the mix. In the latest example, supercomputing icon Silicon Graphics International (SGI) has released a new suite of big data tools aimed at bringing its brand of high performance computing to big data workloads.

Central to SGI's new big data offering is the InfiniteData cluster, which comes preconfigured to run Cloudera's distribution of Apache Hadoop on Red Hat Linux. The stats for the offering show quite a bit of performance density: built on Intel's Xeon E5-2600 v2 processors and holding up to twelve 4TB drives per tray, the InfiniteData cluster delivers what SGI says is a 1:1 core-to-storage-spindle ratio to optimize Hadoop workloads. On the storage front, InfiniteData offers up to 40 nodes and 1.9PB in a single rack.
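
A quick back-of-the-envelope check of those figures, sketched in Python: 40 nodes of twelve 4TB drives works out to the quoted 1.9PB per rack, and a 1:1 core-to-spindle ratio implies a 12-core processor per tray. The assumption that each tray is a single node (and so pairs with one 12-core Xeon E5-2600 v2 part) is ours for illustration, not SGI's.

```python
# Back-of-the-envelope check of SGI's quoted InfiniteData figures.
# Assumptions (not stated by SGI): one node per tray, decimal units
# (1 PB = 1000 TB).

NODES_PER_RACK = 40
DRIVES_PER_NODE = 12
DRIVE_TB = 4

raw_tb = NODES_PER_RACK * DRIVES_PER_NODE * DRIVE_TB
print(f"Raw capacity per rack: {raw_tb} TB (~{raw_tb / 1000:.1f} PB)")  # ~1.9 PB

# A 1:1 core-to-spindle ratio with twelve drives implies twelve cores
# per node, which a 12-core Xeon E5-2600 v2 part would supply.
print(f"Cores per node for a 1:1 ratio: {DRIVES_PER_NODE}")
```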

For those used to the rigors of the commodity racket, InfiniteData is not your typical "build it yourself" big data kit. The InfiniteData cluster units come fully assembled and tested, ready to be plugged in and integrated into an organization's IT infrastructure within days rather than weeks or months. SGI says the idea behind its InfiniteData cluster is to address analytics using two principal metrics: data analysis and time to value.

Interestingly, to give big data developers the chance to take an InfiniteData cluster out for a drive around the block, SGI is offering the SGI "Sandbox" for Hadoop, a test drive version of InfiniteData available online. It's a unique approach to selling a Hadoop cluster solution, which has traditionally been limited to a Lego-style "use it once you build it" approach with standard commodity blocks. The new test drive utility is targeted for availability at the end of the year.

Adding to its big data portfolio, SGI says it has integrated its OEM version of Scality's RING software with the SGI Modular InfiniteStorage Server hardware to create a unique new object store, aptly named SGI ObjectStore. RING is a software-based storage solution that uses a distributed, shared-nothing architecture with no single point of failure. According to SGI, the RING peer-to-peer architecture allows SGI's storage environment to overcome the strain that massive data volumes place on conventional storage file systems.
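
Neither SGI nor Scality detail RING's internals here, but the shared-nothing, peer-to-peer design described is commonly built on consistent hashing, where every node owns a slice of a fixed keyspace and objects are routed to peers without a central metadata server. The toy Python sketch below illustrates that general idea only; the node names and parameters are hypothetical and not Scality's actual implementation.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    """Map a string onto a fixed integer keyspace (the 'ring')."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Toy consistent-hash ring: each peer owns slices of the keyspace,
    so objects are placed without a central metadata server."""

    def __init__(self, nodes, vnodes=64):
        self._ring = sorted(
            (_hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)      # virtual nodes smooth the load
        )
        self._keys = [h for h, _ in self._ring]

    def node_for(self, object_key: str) -> str:
        """Return the peer responsible for storing an object key."""
        idx = bisect.bisect(self._keys, _hash(object_key)) % len(self._ring)
        return self._ring[idx][1]

# Hypothetical six-node pool; any peer can answer a lookup the same way.
ring = HashRing([f"storage-node-{n:02d}" for n in range(1, 7)])
print(ring.node_for("archives/2013/10/22/results.tar"))
```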

Per SGI:

“The ObjectStore system architecture delivers a shared storage pool supporting thousands to millions of users, with limitless file size and quantity, and performance rivaling block-based storage. Nodes can be added at any time to increase capacity – without interrupting users or performance levels. Advanced erasure code technology reduces risk of data loss in the event of node or drive failure to effectively zero. And the environment self-heals with no RAID rebuilds, raising service levels. Delivering up to 2.8PB per rack, SGI’s ObjectStore solution can also save enterprise environments 40 percent or more in capital and operating expenses, even in comparison to public cloud storage.”
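
SGI doesn't state the erasure-code parameters behind that claim, but the arithmetic that makes erasure coding attractive versus straight replication is easy to show. The k=9, m=3 split in the Python snippet below is purely illustrative.

```python
# Raw storage consumed per byte of user data under a generic k+m
# erasure code versus 3x replication. k=9, m=3 is an illustrative
# choice, not a published SGI/Scality parameter.

def erasure_overhead(k: int, m: int) -> float:
    """Data is split into k fragments plus m parity fragments; any k
    of the k+m fragments are enough to rebuild the object."""
    return (k + m) / k

k, m = 9, 3
print(f"{k}+{m} erasure coding: {erasure_overhead(k, m):.2f}x raw storage, "
      f"tolerates loss of any {m} fragments")
print("3x replication:       3.00x raw storage, tolerates loss of 2 copies")
```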

Lastly, the company says it is introducing a special "appliance edition" of the SGI LiveArc digital asset management software platform to help companies manage their data archives. The software automatically indexes metadata and file content as infrequently accessed data is moved into the InfiniteStorage Gateway, enabling a variety of valuable capabilities, such as free-text queries and audit trails.
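
The article doesn't describe LiveArc's interfaces, but the basic pattern of indexing file metadata on ingest so it can later be searched is simple to sketch. Everything below (the index structure, the terms chosen, and the search helper) is a hypothetical illustration, not LiveArc's API.

```python
import os
import time
from collections import defaultdict

# Toy metadata index of the kind an archive gateway might build as files
# are ingested. This is a generic illustration, not LiveArc's API.
index = defaultdict(set)              # search term -> set of file paths

def ingest(path: str) -> None:
    """Record searchable metadata for a file entering the archive."""
    mtime = os.stat(path).st_mtime
    terms = os.path.basename(path).lower().split(".")
    terms.append(time.strftime("%Y-%m", time.localtime(mtime)))
    for term in terms:
        index[term].add(path)

def search(term: str) -> list:
    """Free-text-style lookup against the indexed metadata."""
    return sorted(index.get(term.lower(), ()))

# Index everything under the current directory, then query it.
for root, _dirs, files in os.walk("."):
    for name in files:
        ingest(os.path.join(root, name))

print(search("2013-10"))              # files last modified in October 2013
```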

Obviously, when we’re talking about a non-commodity approach to big data, cost is the question. An SGI representative told Datanami that the InfiniteData cluster solutions start at $250,000.

Related items:

Teradata Moving to the Cloud 
In-Memory Data Grid Key to TIBCO’s Strategy
Glassbeam SCALAR Set to Challenge Splunk 
