November 19, 2013

Cray Introduces Big Data Framework

SEATTLE, Wash., Nov. 18 — Global supercomputer leader Cray Inc. today announced a new “Big Data” framework that gives Cray customers the ability to more easily implement and run Apache Hadoop on their Cray XC30 supercomputers. Fusing the benefits of supercomputing and Big Data, the Cray Framework for Hadoop package improves the overall efficiency and performance for Cray XC30 customers deploying Hadoop in Scientific Big Data environments.

The Cray Framework for Hadoop package includes documented best practices and performance enhancements designed to optimize Hadoop for the Cray XC30 line of supercomputers. Built with an emphasis on Scientific Big Data usage, the framework provides enhanced support for data sets found in scientific and engineering applications, as well as improved support for multi-purpose environments, where organizations are looking to align their scientific, compute and data-intensive workloads. This enables users to gain the utility of the Java-based MapReduce Hadoop programming model on the Cray XC30 system, complementing the proven, HPC-optimized languages and tools of the Cray Programming Environment.
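For readers unfamiliar with the programming model referenced above, the sketch below is the canonical Apache Hadoop word-count job written against the Java MapReduce API. It is a generic illustration only, not Cray-specific code; the input and output paths are placeholder command-line arguments.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Canonical MapReduce example: count word occurrences across the input files.
public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);   // emit (word, 1) for each token
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();           // total the counts for this word
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // placeholder input path
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // placeholder output path
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```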

“We are seeing increased interest from organizations that are ready to leverage Hadoop for analytics in scientific environments, but what those organizations are finding is that Hadoop does not meet all of their needs for scientific use cases,” said Bill Blake, senior vice president and CTO of Cray. “They find Hadoop isn’t optimized for the large hierarchical file formats and I/O libraries needed by scientific applications that run close to the parallel file systems, or that leverage the types of fast interconnects and tightly integrated systems deployed on supercomputers for performance and scalability. And they find it difficult to share infrastructure, to manage complex workflows that span both scientific compute and analytics workloads, and to integrate math models with data models in a single high-performance environment. With the Cray Framework for Hadoop, we’re helping organizations harness the openness and power of Hadoop, while better leveraging the investments and progress they’ve made in scientific computing.”

Based on early customer response, the initial release of the Cray Framework for Hadoop and an optimized Cray Performance Pack for Hadoop will be available as free downloads and include validated and documented best practices for Apache Hadoop configurations. This performance pack includes Lustre-Aware Shuffle to optimize Hadoop performance on the Cray XC30 supercomputer. Further performance enhancements to the performance pack, which will include a native Lustre file system library and a plug-in to further accelerate Hadoop performance using the Aries system interconnect, will be available in the first half of 2014.
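The Lustre-Aware Shuffle, native Lustre library and Aries plug-in described above are Cray-supplied components whose internals are not documented in this announcement. As a general illustration only, the sketch below shows one common way a Hadoop client can be pointed at a shared POSIX parallel file system such as Lustre by using the local file:// scheme rather than HDFS; the mount point /lus/scratch and the directory name are hypothetical placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch: run Hadoop against a shared parallel file system
// (e.g., a Lustre mount visible on all compute nodes) instead of HDFS.
// This is not Cray's Lustre-Aware Shuffle; it only demonstrates the
// generic "file://" approach. Paths below are hypothetical.
public class LustreMountCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Use the local (POSIX) file system implementation as the default
    // file system, so job data lives on the shared mount rather than HDFS.
    conf.set("fs.defaultFS", "file:///");

    FileSystem fs = FileSystem.get(conf);
    Path jobDir = new Path("/lus/scratch/hadoop-jobs");  // hypothetical Lustre path
    if (!fs.exists(jobDir)) {
      fs.mkdirs(jobDir);                                 // create the shared job directory
    }
    System.out.println("Default FS: " + fs.getUri());
  }
}
```

Because every node sees the same POSIX namespace on a parallel file system, this style of deployment avoids maintaining a separate HDFS layer, which is the general motivation behind Lustre-oriented Hadoop optimizations such as those described in this release.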

The launch of the Cray Framework for Hadoop further expands Cray’s portfolio of offerings for the rapidly growing Big Data market. Cray’s complementary array of Big Data solutions includes fast data storage and data movement capabilities with Cray Sonexion storage systems, a tiered storage and archiving solution with Cray Tiered Adaptive Storage, data discovery using the YarcData uRiKA appliance, Cray Cluster Supercomputers for Hadoop, and now a growing framework of enhancements and optimizations for running Hadoop workloads on Cray XC30 supercomputers.
