March 09, 2013

How Facebook Fed Big Data Continuuity

Hadoop and HBase have found mass appeal over the last several months, at least as an experimental approach, if not one for performance-aware mission-critical applications. However, part of the reason some shops put off Hadoop development is because, well, it's tough.

Then again, for those who are able to tap into the power of HBase to target real-time operations at the massive scale of some social networks, the learning curve could be worth the steep hike. According to startup Continuuity, though, big data developers, even at large, talent-rich companies, need abstraction from all the complexity.

It's hard to deny that Continuuity is backed by some veteran heavy-hitters from the big data development talent pool. VCs have picked up on that, too, kicking a total of $12.5 million the company's way in two rounds since the beginning of 2012, just in time for the height of all the Hadoop hoopla.

The company's co-founder and CEO, Todd Papaioannou, was one of the early big data gurus at Yahoo before his work at Teradata and Greenplum. Another of the three co-founders, HBase expert Nitin Motgi, was behind one of the largest HBase clusters in the world during his own time at Yahoo.

Jonathan Gray, CTO and the third co-founder, also has a rather impressive background, particularly on the HBase front. During the recent Strata Conference we talked about his roots building out the real-time HBase infrastructure at Facebook, which the social giant was able to leverage for the Facebook Messages service that rolled out in 2010.

More specifically, our discussion focused on how these development experiences influenced what was possible with little old HBase, especially for some difficult real-time problems—as well as how these collective experiences shaped the core of what Continuuity wanted to bring to the big data landscape.

Gray, like his eventual Continuuity co-founders, was one of the early adopters of Hadoop and HBase. His startup, news feed service Streamy, was an early attempt to wrangle HBase for bigger tasks. He led his small team through a migration from a traditional relational setup on Postgres over to Hadoop and HBase, then started to rebuild the application on top of that still very immature set of tools.

He recalled that there were a lot of big missing pieces and features, so he started to turn his work on HBase back over to the community. At the time, he said, all he wanted to do was drive real-time Hadoop applications. To fill in the gaps, he became an HBase core contributor before Facebook snapped him up to lead its bold HBase and real-time efforts.

By 2010, Facebook wasn't new to Hadoop or its components either. Hive was buzzing, and Gray worked to help the team create a platform out of it so the real work on real-time could begin in earnest. "At that point, Hive was all offline analytics and at the time, Facebook was really looking at HBase to enable some of the more real-time stuff on both the analytics and real-time applications," he told us.

The first product of this goal, Facebook Messages, was built on HBase under Gray's lead. To put this service in context, the user side offered unified chat, email and SMS on a single platform. On the Facebook end, it took a 2,000-node, multi-petabyte HBase monolith built around a custom transactional system.

The team from Facebook, which comprised many of the original architects behind HDFS and HBase, was pushing Hive, HBase, and Hadoop overall as a base for building high-performance applications. From these projects emerged the Puma analytics engine, which powered the Google Analytics-like Facebook Insights service. To push HBase further, the team moved the service from a Hive setup that ran every few hours to a real-time platform. Again, all of this was built on HBase, which was now powering core functions at Facebook. And who said there was no such thing as real-time Hadoop or HBase?
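The shift Gray describes, from Hive jobs that rescan the logs every few hours to Puma-style per-event updates, can be sketched with a toy counter. This is plain Python invented for illustration, not Facebook's actual code; the point is only that both styles produce the same numbers at very different latencies.

```python
from collections import Counter

# Toy event log: (page, view count) pairs, standing in for raw click logs.
events = [("page_a", 1), ("page_b", 1), ("page_a", 1)]

def batch_count(all_events):
    """Batch style: periodically rescan the full log, Hive-job fashion."""
    counts = Counter()
    for page, n in all_events:
        counts[page] += n
    return counts

class RealtimeCounter:
    """Streaming style: update a running counter as each event arrives."""
    def __init__(self):
        self.counts = Counter()

    def ingest(self, page, n=1):
        self.counts[page] += n  # the result is current immediately

rt = RealtimeCounter()
for page, n in events:
    rt.ingest(page, n)

# Same answer either way; the difference is hours of latency versus none.
assert batch_count(events) == rt.counts
```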

Well, actually, plenty of people do. But again, it's not simple, even though companies made up of the platform's founding fathers are working to make it easier. At the core of Continuuity's belief system is the idea that the level of abstraction for Hadoop and HBase just isn't a good fit for most developers. At this stage, the architectures of the systems are exposed in the APIs, so to use the APIs effectively, developers have to understand the core architecture. This just doesn't work, argues Gray, who says that even now, working with big data is a lot like working against a kernel. "Back in the relational world of SQL I understood my data and how it looked in order to write my queries—I didn't have to understand any of the underlying depth," he remarked.
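Gray's point about abstraction levels can be made concrete with a toy key-value store. This is a deliberately simplified sketch, not real HBase or Continuuity code: the low-level path forces the application to know the physical layout (row keys, column qualifiers, raw bytes), while a thin query layer lets callers ask only for the data they want.

```python
# Flat (row_key, column) -> bytes layout, loosely HBase-shaped.
store = {}

def put(row_key: bytes, column: bytes, value: bytes):
    store[(row_key, column)] = value

def get(row_key: bytes, column: bytes) -> bytes:
    return store[(row_key, column)]

# Low-level style: the application must understand the storage architecture
# to read or write anything at all.
put(b"user#42", b"info:name", b"Ada")
put(b"user#42", b"info:city", b"London")

# Higher-level style: callers describe what they want, not how it is stored.
def select(user_id: str, field: str) -> str:
    return get(f"user#{user_id}".encode(), f"info:{field}".encode()).decode()

assert select("42", "name") == "Ada"
```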

Even with a team of seasoned Facebook engineers, big data development was difficult, but the more he was able to "platformize" new tools and offerings, the more complexity he was able to wick away from developers. He said he saw just how important it was, for new technologies to emerge, that they be tangible and easy for developers to dive into without digging through muddiness.

It was this concept of taking tough technologies and turning them into consumable platforms that really changed Gray’s mind about what was possible with some of the difficult components of platforms like Hadoop. This was especially true because then (and even now) it was a scattered union of several disparate pieces that needed to harmonize to sing, even just with batch workloads. Add real-time into that puzzle box and things are further shaken up.

"To ask the developer community to learn a lot of low-level stuff just to enable a little bit of business logic is just too much," said Gray. This was true at Facebook, but is an even more pertinent message for all companies looking to big data as a general cure-all. No one should have to be an expert, and big data development should be made enjoyable through ease of use, he says.

The company is focused on bringing higher-level support to developers. As Gray noted, "What we're doing is building that platform that sits on top of the infrastructure—one that is going to expose things to developers in a consumable way. This covers getting data into the system, processing it, structuring it, handling the metadata, all of it, and all through high-level APIs and reusable components."
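As a purely hypothetical sketch of what such high-level, reusable components might look like, the following wires an ingest stream to composable processing steps. The `Stream` and `Flow` names here are invented for illustration and are not Continuuity's actual API.

```python
class Stream:
    """Hypothetical ingest point: gets data into the system without
    exposing any of the underlying plumbing to the developer."""
    def __init__(self):
        self.events = []

    def send(self, event):
        self.events.append(event)

class Flow:
    """Hypothetical composition of reusable processing steps over a stream;
    the developer writes only business logic, not infrastructure code."""
    def __init__(self, stream, *steps):
        self.stream = stream
        self.steps = steps

    def run(self):
        out = list(self.stream.events)
        for step in self.steps:
            out = [step(e) for e in out]
        return out

s = Stream()
s.send("click")
s.send("view")
results = Flow(s, str.upper).run()
assert results == ["CLICK", "VIEW"]
```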

He said that this building of snazzy GUIs and tooling is one thing the open source community doesn't do well. But without it, he argues, they wouldn't be seeing some of the use cases that are streaming in, both from existing users of Hadoop who want a layer of abstraction over the Hadoop and HBase infrastructure to let them build quickly, and from those who are looking for a supported way to get real-time performance out of Hadoop and HBase. The latter are looking to the company's real-time stream processing engine, BigFlow, while the existing users are looking to stop being cobblers of Hadoop's many components.

Continuuity is gaining traction with a number of web services, from gaming companies to ad networks to the largest area of growth for the startup, consumer intelligence. The real appeal for these folks, says Gray, is that they are using HBase to let their services operate on both real-time and batch workloads within the same platform in a way that offers the scale and performance they need.

I should note that we went through all of our conversation at Strata without even mentioning the one thing the company is most known for: its AppFabric offering. This is Continuuity's cloud-based platform (though it can be kept inside the firewall too), built on top of open-source Hadoop components. The AppFabric serves as the user's runtime and data platform for big data applications, and it is the culmination of the lessons Gray shared from large-scale systems using Hadoop for things that some said would never be possible.

