November 03, 2012

The Week in Big Data Research


This week we are pleased to usher in a new feature that will roll out each Friday, focusing on innovative new approaches to solving data-intensive challenges from a peer-reviewed research perspective.

Drawing from academic and, to a lesser extent, industry materials that have been recently published, we will provide a way for you to stay in touch with what the research world is doing to solve some of today’s most pressing big data problems. Please note that for this section only, not all links will be available to everyone; academic logins might be required for many of the sources, but even without this, several of the journals and sources will offer the standalone articles for a nominal fee.

We will strive to point you to pieces that have just been published over the week but may include earlier pieces from the year if they are relevant to massive changes in the big data ecosystem.

But enough introductions. Let's dive in with our first item which, like so many others this week, turns to MapReduce; the framework shows up far more than usual this time around. A sign of wider scientific computing adoption or pure coincidence?

MapReduce Over Large History Data

A team of researchers in China has pointed to the “Internet of Things” as a source of new applications based on sensor data.

The researchers are interested in exploring how to process high-speed data streams over large-scale historical data, which they claim presents a major challenge.

The team’s paper proposes a new programming model, RTMR, which improves the real-time capability of traditional batch-oriented MapReduce through preprocessing and caching, along with pipelining and localization.

Furthermore, to adapt cluster topologies to application characteristics and cluster environments, the team proposes a model-analysis-based method for constructing RTMR clusters.

A benchmark built on an urban vehicle monitoring system shows that RTMR can provide real-time capability and scalability for data stream processing over large-scale data.
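
The paper holds the details, but the general idea of combining cached batch results with live stream updates can be shown in a minimal sketch; the names and data below are hypothetical and this is not the authors' RTMR implementation:

```python
# A minimal sketch (not the authors' RTMR code) of the core idea described
# above: preprocess and cache aggregates over historical data so that each
# incoming stream record is combined with the cached result instead of
# re-running a full batch MapReduce over the history.
from collections import defaultdict

# Hypothetical historical records: (vehicle_id, speed) pairs.
history = [("car-1", 42.0), ("car-2", 58.5), ("car-1", 40.0)]

def preprocess(records):
    """Batch step: build per-key (count, total) aggregates once."""
    cache = defaultdict(lambda: [0, 0.0])
    for key, value in records:
        cache[key][0] += 1
        cache[key][1] += value
    return cache

cache = preprocess(history)          # done offline, reused by the stream path

def on_stream_record(key, value):
    """Real-time step: update the cached aggregate incrementally."""
    cache[key][0] += 1
    cache[key][1] += value
    count, total = cache[key]
    return total / count             # up-to-date mean without touching history

print(on_stream_record("car-1", 44.0))   # mean speed for car-1 so far
```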



Mastiff: A MapReduce-based System for Time-Based Big Data Analytics

There is a nice interplay with the previous piece we pointed to, which also targets MapReduce as the powerhouse for handling large historical data.

The research team behind this project implemented a MapReduce-based system, called Mastiff, which provides a solution to achieve both high data loading speed and high query performance. Mastiff exploits a systematic combination of a column group store structure and a lightweight helper structure. Furthermore, Mastiff uses an optimized table scan method and a column-based query execution engine to boost query performance.  

There were a number of underpinnings to this effort as the team tried to address some critical shortcomings of traditional approaches. For one thing, they note that MapReduce-based warehousing systems are not specifically optimized for time-based big data analysis applications. Such applications have two characteristics: 1) data are continuously generated and must be stored persistently for a long period of time, and 2) applications usually process data within some time period, so typical queries use time-related predicates. Time-based big data analytics therefore requires both high data loading speed and high query execution performance. However, existing systems, including current MapReduce-based solutions, do not solve this problem well because the two requirements are contradictory.
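
The details live in the paper, but the benefit of time-related predicates over time-grouped storage can be illustrated with a small sketch; the layout below is assumed for illustration and is not Mastiff's actual column group format:

```python
# A minimal sketch (assumed layout, not Mastiff's on-disk format) of why
# time-related predicates are cheap when data is grouped by time: each
# partition is a column-oriented chunk keyed by its time range, so a query
# can skip whole partitions without scanning them.
partitions = {
    # (start_hour, end_hour) -> column group: one list per column
    (0, 6):  {"ts": [1, 3, 5],  "reading": [10.1, 9.8, 10.4]},
    (6, 12): {"ts": [7, 9, 11], "reading": [11.0, 12.3, 11.7]},
}

def query_avg(start, end):
    """Average 'reading' over [start, end), pruning non-overlapping partitions."""
    values = []
    for (lo, hi), cols in partitions.items():
        if hi <= start or lo >= end:
            continue                      # partition pruned by the time predicate
        values += [r for t, r in zip(cols["ts"], cols["reading"]) if start <= t < end]
    return sum(values) / len(values) if values else None

print(query_avg(6, 12))   # touches only the second partition
```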

The team was able to show that Mastiff can significantly outperform existing systems including Hive, HadoopDB, and GridSQL.



Fast Scientific Analysis with Integrated Statistical Metadata

According to the two researchers, who presented on this topic at an IEEE parallel computing workshop, an approach called FASM can potentially lead to a new dataset design and can have an impact on how data analysis is performed.

Jialin Liu and Yong Chen say scientific data formats, such as HDF5 and PnetCDF, have, for good reason, been widely used in many scientific applications. These data formats and libraries provide essential support for data analysis in scientific discovery and innovation.

Building on this, the team presents an approach to boost data analysis, namely Fast Analysis with Statistical Metadata (FASM), which works via data subsetting and integrating a small amount of statistics into datasets. The authors discuss how FASM can improve data analysis performance. It is currently evaluated with PnetCDF on synthetic and real data, but can also be implemented in other libraries.
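
As a rough illustration of the statistical-metadata idea, consider the hypothetical sketch below; it is not the FASM code, and the stored statistics and queries are assumed for illustration:

```python
# A minimal sketch (hypothetical, not the FASM implementation) of embedding
# per-subset statistics alongside the data: some queries can be answered from
# the metadata alone, and others can use it to skip subsets that cannot
# contain matching values, instead of scanning the full array.
import numpy as np

data = np.random.rand(4, 1000)                      # 4 subsets of a dataset
stats = [{"min": float(s.min()), "max": float(s.max())} for s in data]

def global_max():
    # Answered entirely from the statistical metadata.
    return max(s["max"] for s in stats)

def find_values_above(threshold):
    # Only subsets whose stored max exceeds the threshold are actually read.
    hits = []
    for subset, s in zip(data, stats):
        if s["max"] < threshold:
            continue                                 # skipped via metadata
        hits.extend(subset[subset > threshold].tolist())
    return hits

print(global_max(), len(find_values_above(0.999)))
```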



MapReduce Performance Evaluation on a Private Cloud

Justin Shi and Moussa Taifi recently explored how the convergence of accessible cloud computing resources and big data trends has introduced unprecedented opportunities for scientific computing and discovery.

According to the researchers, HPC cloud users face many challenges when selecting valid HPC configurations. In the paper, the duo reports a set of performance evaluations of data-intensive benchmarks on a private HPC cloud to help with the selection of such configurations. More precisely, they focus the study on the effect of virtual machine core count on the performance of three benchmarks widely used by the MapReduce community.

Generally speaking, the researchers say that, depending on the computation-to-communication ratios of the studied applications, using virtual machines with higher core counts does not always lead to higher performance for data-intensive applications.
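
A back-of-envelope model makes that intuition concrete; the numbers below are invented for illustration and are not the authors' benchmark results:

```python
# Illustrative model only, not the paper's measurements: compute time shrinks
# as cores per VM increase, but communication/shuffle time does not, so the
# benefit of bigger VMs depends on the computation-to-communication ratio.
def estimated_runtime(compute_seconds, comm_seconds, cores_per_vm):
    return compute_seconds / cores_per_vm + comm_seconds

for label, compute, comm in [("compute-heavy", 800.0, 100.0),
                             ("shuffle-heavy", 100.0, 800.0)]:
    for cores in (2, 4, 8, 16):
        print(label, cores, round(estimated_runtime(compute, comm, cores), 1))
# The shuffle-heavy job barely improves beyond a few cores per VM.
```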
