November 3, 2012

The Week in Big Data Research

Datanami Staff

This week we are pleased to usher in a new feature, rolling out each Friday, that focuses on innovative new approaches to solving data-intensive challenges from a peer-reviewed research perspective.

Drawing from academic and, to a lesser extent, industry materials that have been recently published, we will provide a way for you to stay in touch with what the research world is doing to solve some of today’s most pressing big data problems. Please note that for this section only, not all links will be available to everyone; academic logins might be required for many of the sources, but even without this, several of the journals and sources will offer the standalone articles for a nominal fee.

We will strive to point you to pieces that have just been published over the week but may include earlier pieces from the year if they are relevant to massive changes in the big data ecosystem.

But enough introductions. Let's dive in with our first item which, like so many others this week, leans heavily on MapReduce. A sign of wider scientific computing adoption, or pure coincidence?

MapReduce Over Large History Data

A team of researchers in China has pointed to the “Internet of Things” as a source of new applications based on sensor data.

The researchers are interested in exploring how to process high-speed data streams over large-scale historical data, which they claim presents a major challenge.

The team's paper proposes a new programming model, RTMR, which improves the real-time capability of traditional batch-oriented MapReduce through preprocessing, caching, pipelining, and localization.

Furthermore, to adapt cluster topologies to application characteristics and cluster environments, the team proposes a model-analysis-based method for constructing RTMR clusters.

A benchmark built on an urban vehicle monitoring system shows that RTMR can provide real-time capability and scalability for data stream processing over large-scale data.
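Like many of the items this week, the full paper may sit behind an academic login, but the core idea of pairing a cached, pre-aggregated view of the history data with small incremental updates from the live stream can be sketched in a few lines of Python. Everything below, from the function names to the vehicle counts, is purely illustrative and is not taken from the RTMR implementation:

    from collections import defaultdict

    # Illustrative sketch: pre-aggregate (and cache) totals over the large
    # history data once, then fold each new stream batch into the cache
    # incrementally, so queries never re-scan the accumulated history.

    def preaggregate_history(history_records):
        """One-off batch pass over the history data; the result is cached."""
        cache = defaultdict(int)
        for vehicle_id, count in history_records:
            cache[vehicle_id] += count
        return cache

    def fold_stream_batch(cache, stream_batch):
        """Merge a small, recent batch of stream records into the cached totals."""
        for vehicle_id, count in stream_batch:
            cache[vehicle_id] += count
        return cache

    # Usage: the totals stay query-ready as new sensor readings arrive.
    totals = preaggregate_history([("bus-12", 40), ("taxi-7", 55)])
    totals = fold_stream_batch(totals, [("bus-12", 3), ("taxi-7", 1)])
    print(totals["bus-12"])  # 43

Because each stream batch only touches recent records, query latency stops depending on the size of the ever-growing history, which is the real-time property the RTMR work is after.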

Mastiff: A MapReduce-based System for Time-Based Big Data Analytics

There is a nice interplay between this item and the previous piece, which likewise targets MapReduce as the powerhouse for handling large volumes of historical data.

The research team behind this project implemented a MapReduce-based system, called Mastiff, which provides a solution to achieve both high data loading speed and high query performance. Mastiff exploits a systematic combination of a column group store structure and a lightweight helper structure. Furthermore, Mastiff uses an optimized table scan method and a column-based query execution engine to boost query performance.  

There were a number of underpinnings to this effort as the team tried to address some critical shortcomings of traditional approaches. For one thing, they note that MapReduce-based warehousing systems are not specifically optimized for time-based big data analysis applications. Such applications have two characteristics: 1) data are continuously generated and must be stored persistently for a long period of time, and 2) applications usually process data within some time period, so typical queries use time-related predicates. Time-based big data analytics therefore requires both high data loading speed and high query execution performance. However, existing systems, including current MapReduce-based solutions, do not solve this problem well because the two requirements are contradictory.
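To make that tension concrete, here is a minimal sketch of the time-partitioning side of such a design: rows are appended to coarse time-ranged partitions at load time (keeping ingest cheap), and a query with a time predicate prunes whole partitions before touching a single row. This is a Python illustration of the general technique, not Mastiff's actual column group store or helper structure:

    # Illustrative sketch: append-style loading into time-ranged partitions,
    # with partition pruning for queries that carry time predicates.

    class TimePartitionedStore:
        def __init__(self, partition_width):
            self.width = partition_width   # e.g. one hour of data per partition
            self.partitions = {}           # partition start time -> list of rows

        def load(self, row):
            """Appending keeps loading fast: no global sort or index rebuild."""
            start = (row["ts"] // self.width) * self.width
            self.partitions.setdefault(start, []).append(row)

        def query(self, t_from, t_to, column):
            """Scan only partitions overlapping [t_from, t_to], one column at a time."""
            for start, rows in self.partitions.items():
                if start + self.width <= t_from or start > t_to:
                    continue               # whole partition pruned by the time predicate
                for row in rows:
                    if t_from <= row["ts"] <= t_to:
                        yield row[column]

    store = TimePartitionedStore(partition_width=3600)
    store.load({"ts": 100, "user": "a", "clicks": 3})
    store.load({"ts": 7300, "user": "b", "clicks": 5})
    print(sum(store.query(0, 3600, "clicks")))  # 3; the second partition is never scanned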

The team was able to show that Mastiff can significantly outperform existing systems including Hive, HadoopDB, and GridSQL.

Fast Scientific Analysis with Integrated Statistical Metadata

According to the two researchers, who presented this work at an IEEE parallel computing workshop, an approach called FASM can potentially lead to new dataset designs and have a real impact on how data analysis is performed.

Jialin Liu and Yong Chen note that scientific data formats and I/O libraries such as HDF5 and PnetCDF have, for good reason, been used widely in many scientific applications. These formats and libraries provide essential support for data analysis in scientific discovery and innovation.

Building on this, the team presents an approach to speed up data analysis, namely Fast Analysis with Statistical Metadata (FASM), which works by subsetting the data and integrating a small amount of statistics into the datasets themselves. The researchers discuss how FASM can improve data analysis performance; it has so far been evaluated with PnetCDF on synthetic and real data, but it can also be implemented in other libraries.
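The core trick, keeping a small amount of statistical metadata next to each block of a dataset so that queries can discard blocks without reading them, is easy to illustrate. The block layout and function names below are a hypothetical sketch, not the FASM or PnetCDF API:

    # Illustrative sketch: record min/max statistics per block at write time,
    # then use that metadata to skip blocks during a range query.

    def write_blocks(values, block_size):
        """Split values into blocks and store min/max metadata alongside each one."""
        blocks = []
        for i in range(0, len(values), block_size):
            block = values[i:i + block_size]
            blocks.append({"min": min(block), "max": max(block), "data": block})
        return blocks

    def range_query(blocks, lo, hi):
        """Only read blocks whose [min, max] range overlaps [lo, hi]."""
        hits = []
        for b in blocks:
            if b["max"] < lo or b["min"] > hi:
                continue  # pruned using the metadata alone, no data read
            hits.extend(v for v in b["data"] if lo <= v <= hi)
        return hits

    blocks = write_blocks([1, 2, 3, 50, 51, 52, 99, 100, 101], block_size=3)
    print(range_query(blocks, 40, 60))  # [50, 51, 52]; the other two blocks are skipped

In this sketch the metadata amounts to just two numbers per block, which is in the spirit of FASM's "small amount of statistics" and helps explain why the idea can be retrofitted onto existing libraries rather than requiring a new format.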

MapReduce Performance Evaluation on a Private Cloud

Justin Shi and Moussa Taifi recently explored how the convergence of accessible cloud computing resources and big data trends has introduced unprecedented opportunities for scientific computing and discovery.

According to the researchers, HPC cloud users face many challenges when selecting valid HPC configurations. In the paper, the duo reports a set of performance evaluations of data-intensive benchmarks on a private HPC cloud to help with the selection of such configurations. More precisely, they focus the study on the effect of virtual machine core counts on the performance of three benchmarks widely used by the MapReduce community.

Generally speaking, the researchers found that, depending on the computation-to-communication ratios of the studied applications, using virtual machines with higher core counts does not always lead to higher performance for data-intensive applications.
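A rough back-of-envelope model, ours rather than the paper's, shows why VM size and the computation-to-communication ratio interact. A fixed pool of cores can be packed into a few large VMs or many small ones: each VM burns some capacity on its own daemons and virtualization overhead, while shuffle traffic is limited by the aggregate network bandwidth across all the VMs. Every parameter below is made up for illustration:

    # Hypothetical toy model (not from the paper): pack a fixed core pool into
    # few large VMs or many small ones, and estimate runtime from a compute term
    # plus a shuffle term limited by aggregate per-VM network bandwidth.

    def estimated_runtime(compute_work, shuffle_data, total_cores, cores_per_vm,
                          daemon_cores_per_vm=0.5, nic_bandwidth_per_vm=1.0):
        num_vms = total_cores // cores_per_vm
        useful_cores = total_cores - num_vms * daemon_cores_per_vm
        compute_time = compute_work / useful_cores
        shuffle_time = shuffle_data / (num_vms * nic_bandwidth_per_vm)
        return compute_time + shuffle_time

    # Compute-heavy job: fewer, larger VMs waste fewer cores on per-VM overhead.
    print(estimated_runtime(1000, 10, total_cores=16, cores_per_vm=8))   # ~71.7
    print(estimated_runtime(1000, 10, total_cores=16, cores_per_vm=2))   # ~84.6

    # Shuffle-heavy job: many small VMs bring more aggregate NIC bandwidth.
    print(estimated_runtime(100, 400, total_cores=16, cores_per_vm=8))   # ~206.7
    print(estimated_runtime(100, 400, total_cores=16, cores_per_vm=2))   # ~58.3

In this toy model the compute-heavy job favors a handful of large VMs while the shuffle-heavy job favors many small ones, which is the kind of trade-off the duo's benchmarks were designed to expose.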

Related Articles

Researchers Schooled in Big Data Management

Yale Computer Scientists to Explore Big Data Developments

Researchers Target Storage, MapReduce Interactions

Research Aims to Automate the Impossible
