November 20, 2013

Cray Greases Rails for Hadoop on a Supercomputer

Isaac Lopez

Supercomputer maker Cray has joined the Hadoop parade with a new framework that it says will let customers more easily deploy and run Apache Hadoop on its XC30 supercomputers.

The new framework, which Cray is targeting at customers deploying Hadoop in scientific environments, will marry Intel’s Hadoop distribution with Cray’s most advanced HPC systems. The move, Cray says, addresses demand in the scientific computing space for a Hadoop that bends to the needs of those environments.

“We are seeing increased interest from organizations that are ready to leverage Hadoop for analytics in scientific environments, but what those organizations are finding is that Hadoop does not meet all of their needs for scientific use cases,” said Bill Blake, senior vice president and CTO of Cray, in a statement.

“They find Hadoop isn’t optimized for the large hierarchical file formats and I/O libraries needed by scientific applications that run close to the parallel file systems,” he continued, “or leverage the types of fast interconnects and tightly integrated systems deployed on supercomputers for performance and scalability. And they find it difficult to share infrastructure or manage complex workflows that span both scientific compute and analytics workloads, or to integrate math models with data models in a single high-performance environment. With the Cray Framework for Hadoop, we’re helping organizations harness the openness and power of Hadoop, while better leveraging the investments and progress they’ve made in scientific computing.”
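Blake’s point about running close to the parallel file system can be illustrated with stock Apache Hadoop APIs. The short sketch below is an assumption-laden illustration, not anything Cray ships: it imagines a Lustre file system mounted at a hypothetical /mnt/lustre path and simply points Hadoop’s default file system at that POSIX mount instead of HDFS, the kind of integration the new framework is meant to make routine.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch: read input directly from a POSIX-mounted parallel file
// system (a hypothetical Lustre mount at /mnt/lustre) instead of HDFS.
// This illustrates the general idea Blake describes; it is not Cray's
// framework, which bundles its own validated configurations.
public class LustreInputExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Make the local (POSIX) file system the default, so job paths
        // resolve against the parallel file system mount point.
        // (On older Hadoop 1.x releases the equivalent key is fs.default.name.)
        conf.set("fs.defaultFS", "file:///");

        FileSystem fs = FileSystem.get(conf);
        Path input = new Path("/mnt/lustre/project/simulation-output");

        // List the files a MapReduce job would consume from the mount.
        for (FileStatus status : fs.listStatus(input)) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
    }
}

Reading data in place from a site-wide parallel file system avoids staging it into a separate HDFS silo, which is one of the frictions Blake alludes to.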

The new framework is said to address the overall efficiency and performance needs of Hadoop in the scientific environment. Per Cray:

The Cray Framework for Hadoop package includes documented best practices and performance enhancements designed to optimize Hadoop for the Cray XC30 line of supercomputers. Built with an emphasis on Scientific Big Data usage, the framework provides enhanced support for data sets found in scientific and engineering applications, as well as improved support for multi-purpose environments, where organizations are looking to align their scientific compute and data-intensive workloads. This enables users to gain the utility of the Java-based MapReduce Hadoop programming model on the Cray XC30 system, complementing the proven, HPC-optimized languages and tools of the Cray Programming Environment.
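For readers unfamiliar with the programming model the framework exposes, the canonical word-count job below is a minimal sketch of Java-based MapReduce as it exists in stock Apache Hadoop; nothing in it is specific to Cray or the XC30.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// The canonical word-count job: mappers emit (word, 1) pairs and the
// reducer sums the counts for each word.
public class WordCount {

    public static class TokenMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Split each input line on whitespace and emit (token, 1).
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts emitted for this word across all mappers.
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

A job like this is compiled into a jar and submitted with the standard hadoop jar command; as the description above suggests, Cray’s framework layers validated configurations and documented best practices on top of this model rather than replacing it.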

The new Hadoop framework announcement comes in conjunction with news that Cray has signed a $30 million contract with the University of Stuttgart to expand its XC30 supercomputer, nicknamed “Hornet,” at the university’s High Performance Computing Center Stuttgart (HLRS). This past September, HLRS started a project called “Dreamcloud,” which appears aimed at Hadoop; in it, HLRS says it aims to develop “novel load balancing mechanisms that can be applied during runtime in a wide range of parallel and high performance computing systems.”

Per HLRS:

Well-established HPC schedulers, such as the Portable Batch System (PBS), offer algorithms and techniques that are effective, in terms of the scheduling features they provide, for managing the execution of computational tasks (in HPC terminology, batch jobs) on distributed compute nodes. However, with the emergence of higher-level e-Infrastructures, such as Grid and Cloud, traditional cluster scheduling techniques have proved useful only to a limited extent. The main reason for this is that applications running on those infrastructures require a job scheduler to offer a much more extensive set of features in terms of scalability, fault tolerance, and usability, which the traditional scheduling techniques, static with regard to the application, are not able to meet. The execution frameworks of new-generation parallel applications, such as Hadoop/MapReduce, require the underlying infrastructure scheduler to be more interactive with regard to the applications, in order to enable more intelligent allocation of resources within and also beyond a batch job, i.e., the property of dynamism.

Cray’s new Hadoop-friendly framework appears to provide support in this regard.

Cray says that the framework, which will contain validated and documented best practices for Apache Hadoop configurations, is available as a free download.

Related items:

Graph Analytics Appliance Enables Personalized Medicine 

LLNL Introduces Big Data Catalyst 

SGI Aims to Carve Space in Commodity Big Data Market 
