October 18, 2012

How Google’s Dremel Makes Quick Work of Massive Data

The ability to process more data and the ability to process data faster are usually mutually exclusive. According to Armando Fox, professor of computer science at the University of California, Berkeley, “the more you do one, the more you have to give up on the other.”

Hadoop, an open-source, batch processing platform that runs on MapReduce, is one of the main vehicles organizations are driving in the big data race.

However, Mike Olson, CEO of Cloudera, an important Hadoop-based vendor, is looking past Hadoop and toward today’s research projects. That includes one named Dremel, possibly Google’s next big innovation that combines the scale of Hadoop with the ever-increasing speed demands of the business intelligence world.

“People have done Big Data systems before,” Fox said, “but before Dremel, no one had really done a system that was that big and that fast.”

The key to Dremel’s speed, according to the Google paper detailing the project, is its columnar storage. Per Google, “Dremel uses a column-striped storage representation, which enables it to read less data from secondary storage and reduce CPU cost due to cheaper compression.”

Even in MapReduce, Google suggests that a columnar approach to storage would be more efficient and not very difficult to implement. Its sample “column-striping algorithm” splits records into columns in just 22 lines of code. According to the paper, striping the files into columns represents a time reduction of almost an order of magnitude for jobs run over 3,000 nodes.
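To make the idea concrete, here is a minimal sketch of column-striping for flat records (the record fields and values are invented for illustration; Dremel's actual algorithm also handles nested and repeated fields, which this sketch does not attempt):

```python
# Illustrative sketch (not Google's algorithm): striping row-oriented
# records into per-column arrays, so a query that touches one field
# reads only that column's data instead of whole records.

records = [
    {"url": "http://a", "clicks": 3, "country": "US"},
    {"url": "http://b", "clicks": 7, "country": "JP"},
    {"url": "http://c", "clicks": 2, "country": "US"},
]

def stripe(records):
    """Split a list of flat records into one list per column."""
    columns = {}
    for record in records:
        for field, value in record.items():
            columns.setdefault(field, []).append(value)
    return columns

columns = stripe(records)

# A scan that needs only "clicks" touches one column, not every record.
total_clicks = sum(columns["clicks"])
print(total_clicks)  # 12
```

Because values of the same type sit next to each other, a column also compresses far better than interleaved rows, which is the CPU-cost point Google makes above.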

Google equates that to going from running in hours to minutes. Meanwhile, those same jobs are completed another order of magnitude faster when using Dremel (going from minutes to seconds).

Of course, columnar storage isn’t the only thing driving Dremel’s speed, otherwise columnar MapReduce and Dremel would be equivalent. Google also points to the language in which queries can be made, a high-level SQL-based language which does not have to be translated into MapReduce form. “In contrast to layers such as Pig and Hive, it executes queries natively without translating them into MR jobs.”
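The difference is that a system like Hive would compile an aggregation query into one or more MapReduce jobs, while Dremel evaluates it directly over the column stripes. A rough sketch of that direct evaluation, with an invented table and query for illustration:

```python
# Hypothetical sketch: a Dremel-style aggregation evaluated directly
# over column arrays, with no intermediate MapReduce job. The column
# names, values, and query are invented for illustration.

columns = {
    "country": ["US", "JP", "US", "DE", "JP"],
    "clicks":  [3, 7, 2, 5, 1],
}

# Roughly: SELECT country, SUM(clicks) FROM t GROUP BY country
def group_sum(keys, values):
    """Aggregate one column, grouped by another, in a single pass."""
    totals = {}
    for k, v in zip(keys, values):
        totals[k] = totals.get(k, 0) + v
    return totals

print(group_sum(columns["country"], columns["clicks"]))
# {'US': 5, 'JP': 8, 'DE': 5}
```

In the real system this per-column pass is fanned out across a serving tree of thousands of nodes, but the point stands: no job setup, no intermediate materialization between map and reduce phases.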

Further, and possibly just as important, Dremel borrows its architecture from that of large-scale distributed search engines (which Google may know a thing or two about).

It should be noted that Google intends Dremel as a complement, not a replacement, for MapReduce and Hadoop. According to the paper, Dremel is frequently used to analyze MapReduce results or serve as a test run for large-scale computations. “Dremel can execute many queries over such data that would ordinarily require a sequence of MapReduce jobs, but at a fraction of the execution time.” As noted before, Dremel experimentally surpassed MapReduce by orders of magnitude.

One of Dremel’s advantages is also a potential drawback. Whenever parallel processing takes place across many nodes, in this case from one to four thousand, there will inevitably be nodes that fall behind or fail entirely. Google calls these “stragglers,” and they can significantly increase query response time, from under a minute to several minutes. However, the problem can be sidestepped if reading a vast majority (say, 99 percent) of the data is deemed acceptable in place of the entire set.

Per the paper, “If trading speed against accuracy is acceptable, a query can be terminated much earlier and yet see most of the data... The bulk of a web-scale dataset can be scanned fast. Getting to the last few percent within tight time bounds is hard.”
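The mechanics of that trade-off can be sketched as a scatter-gather query that stops once a target fraction of data shards has reported, rather than waiting on stragglers (the shard model and threshold here are illustrative, not Dremel's actual scheduler):

```python
# Sketch of the speed-vs-accuracy trade-off: return an approximate
# answer once a target fraction of shards has responded, skipping
# stragglers. Shard values and the 99% threshold are illustrative.

def query_with_early_stop(shard_results, target_fraction=0.99):
    """shard_results holds per-shard partial sums in arrival order
    (stragglers arrive last). Returns (approximate total, coverage)
    once target_fraction of shards have reported."""
    total_shards = len(shard_results)
    needed = int(total_shards * target_fraction)
    partial = 0
    seen = 0
    for result in shard_results:
        partial += result
        seen += 1
        if seen >= needed:
            break  # accept ~target_fraction coverage; skip the rest
    return partial, seen / total_shards

approx, coverage = query_with_early_stop([10] * 100, target_fraction=0.99)
print(approx, coverage)  # 990 0.99
```

The answer is slightly short of the true total, but it arrives without waiting for the slowest one percent of nodes, which is exactly the “last few percent” the paper flags as hard.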

It was always unlikely that a system able to analyze massive amounts of data quickly would come without sacrifices. But in the long run, a small hit in accuracy may be a small price to pay if Dremel delivers on both the scale and velocity fronts.

Related Articles

Mortar Takes Aim at Hadoop Usability

Researchers Target Storage, MapReduce Interactions

Managing MapReduce Applications in a Shared Infrastructure
