Hadoop and HBase have found mass appeal over the last several months, at least as an experimental approach, if not one for performance-aware mission-critical applications. However, part of the reason some shops push off Hadoop development is that, well…it’s tough.
Then again, for those who are able to tap into the power of HBase to target real-time operations at the massive scale of some social networks, the learning curve could be worth the steep hike. According to startup Continuuity, though, big data developers, even at large talent-rich companies, need abstraction from all the complexity.
It’s hard to deny that Continuuity is backed by some veteran heavy-hitters in the big data development talent pool. And VCs have picked up on that, kicking a total of $12.5 million their way in two rounds since the beginning of 2012, just in time for the height of all the Hadoop hoopla.
The company’s co-founder and CEO, Todd Papaioannou, was one of the early big data gurus at Yahoo before his work at Teradata and Greenplum. Another of the three co-founders, former Yahoo HBase expert Nitin Motgi, was behind one of the largest HBase clusters in the world at Yahoo.
Jonathan Gray, CTO and the third co-founder, also has a rather impressive background, particularly on the HBase front. During the recent Strata Conference we talked about his roots building out the real-time HBase infrastructure at Facebook, which the social giant was able to leverage for the Facebook Messages service that rolled out in 2010.
More specifically, our discussion focused on how these development experiences influenced what was possible with little old HBase, especially for some difficult real-time problems—as well as how these collective experiences shaped the core of what Continuuity wanted to bring to the big data landscape.
Gray, like his eventual Continuuity co-founders, was one of the early adopters of Hadoop and HBase. His startup, news feed service Streamy, was an early attempt to wrangle HBase for bigger tasks. He led his small team through a migration from Postgres and a traditional relational setup over to Hadoop and HBase, then started rebuilding the application on top of that still very immature set of tools.
He recalled that there were a lot of big missing pieces and features, so he started to turn his work on HBase back over to the community. At the time, he said, all he wanted to do was drive real-time Hadoop applications. To fill in the gaps, he became an HBase core contributor before Facebook snapped him up to lead its bold HBase and real-time efforts.
By 2010, Facebook wasn’t new to Hadoop or its components either. Hive was buzzing, and Gray worked to help the team turn it into a platform so the real work on real-time could begin in earnest. “At that point, Hive was all offline analytics and at the time, Facebook was really looking at HBase to enable some of the more real-time stuff on both the analytics and real-time applications,” he told us.
The first product of this goal, Facebook Messages, was built on HBase under Gray’s lead. To put this service in context: on the user side, it offered unified chat, email and SMS on a single platform; on the Facebook end, it took a 2,000-node, multi-petabyte HBase deployment built around a custom transactional system.
The team at Facebook, which comprised many of the original architects behind HDFS and HBase, was pushing Hive, HBase, and Hadoop overall as a foundation for high-performance applications. From these projects emerged the Puma analytics engine, which powered the Google Analytics-like Facebook Insights service. To push HBase further, the team moved the service from the Hive setup that ran every few hours onto a real-time platform. Again, all of this was built on HBase, which was now powering core functions at Facebook. And who said there was no such thing as real-time Hadoop or HBase?
Well, actually, plenty of people do. But again, it’s not simple, even though companies comprised of the platform’s founding fathers are working to make it easier. At the core of Continuuity’s belief system is the idea that the level of abstraction Hadoop and HBase offer just isn’t a good fit for most developers. At this stage, the architectures of the systems are exposed in the APIs, so to use the APIs effectively, developers have to understand the core architecture. This just doesn’t work, argues Gray, who says that programming big data systems today is still a lot like programming against a kernel. “Back in the relational world of SQL I understood my data and how it looked in order to write my queries; I didn’t have to understand any of the underlying depth,” he remarked.
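To make that abstraction gap concrete: in an HBase-style store, the row key is effectively the only index, so developers must hand-encode their query patterns into byte strings before they can store a single event. The sketch below (plain Python with hypothetical names, not Continuuity’s or HBase’s actual API) builds the kind of composite row key, a salt prefix to spread write load across regions plus a reversed timestamp so the newest rows sort first, that a SQL developer would never have to think about.

```python
import hashlib
import struct

MAX_LONG = 2**63 - 1  # largest signed 64-bit value, used to reverse timestamps


def composite_row_key(user_id: str, timestamp_ms: int, buckets: int = 16) -> bytes:
    """Hand-crafted row key: salt byte | user_id | 0x00 | reversed timestamp.

    - The salt byte is derived from a hash of user_id, spreading writes
      across `buckets` of the keyspace to avoid hot-spotting on
      monotonically increasing keys.
    - The reversed timestamp (MAX_LONG - ts) makes a forward byte-ordered
      scan return a user's newest events first.
    """
    salt = hashlib.md5(user_id.encode()).digest()[0] % buckets
    reversed_ts = struct.pack(">q", MAX_LONG - timestamp_ms)  # big-endian int64
    return bytes([salt]) + user_id.encode() + b"\x00" + reversed_ts


# Two events for the same user: the later event must sort *earlier*
# so that a scan from the key prefix finds the freshest data first.
k_old = composite_row_key("alice", 1_000)
k_new = composite_row_key("alice", 2_000)
assert k_new < k_old  # newer event sorts first in a byte-ordered scan
```

In the SQL world Gray describes, the equivalent is just `SELECT * FROM events WHERE user_id = ? ORDER BY ts DESC`; absorbing this kind of key engineering is exactly the job a higher-level platform takes on.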
Even with a team of seasoned Facebook engineers, big data development was difficult, but the more he was able to “platformize” new tools and offerings, the more he was able to wick away some of the complexity for developers. He said he saw just how much faster new technologies took hold when they were made tangible and easy for developers to dive into without wading through muddiness.
It was this concept of taking tough technologies and turning them into consumable platforms that really changed Gray’s mind about what was possible with some of the difficult components of platforms like Hadoop. This was especially true because then (and even now) it was a scattered union of several disparate pieces that needed to harmonize to sing, even just with batch workloads. Add real-time into that puzzle box and things are further shaken up.
“To ask the developer community to learn a lot of low-level stuff just to enable a little bit of business logic is just too much,” said Gray. This was true at Facebook, but it is an even more pertinent message for all the companies looking to big data as a general cure-all. No one should have to be an expert, and big data development should be made enjoyable through ease of use, he says.
The company is focused on bringing higher-level support for developers. As Gray noted, “What we’re doing is building that platform that sits on top of the infrastructure—one that is going to expose things to developers in a consumable way. This covers getting data into the system, processing it, structuring it, handling the metadata, all of it, and all through high-level APIs and reusable components.”
He said that this is one thing the open source community doesn’t do well—this building of snazzy GUIs and tooling. But without it, he argues, they wouldn’t be seeing some of the use cases that are streaming in. These come both from existing users of Hadoop who want a layer of abstraction over the Hadoop and HBase infrastructure so they can build quickly, and from those looking for a supported way to get real-time performance out of Hadoop and HBase. The latter are looking to the company’s real-time stream processing engine, BigFlow, while the existing users are looking to stop being cobblers of Hadoop’s many components.
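For a sense of what a real-time stream engine does that an every-few-hours Hive job cannot, consider the aggregation at its core: counts stay current as each event arrives, rather than being recomputed in a later batch pass. The minimal sketch below is a generic tumbling-window counter in plain Python; the class and method names are illustrative assumptions, not BigFlow’s actual API, which the company had not detailed publicly.

```python
from collections import defaultdict


class WindowedCounter:
    """Generic tumbling-window event counter: the always-on aggregation a
    real-time stream engine maintains, versus a batch job that recomputes
    everything hours later."""

    def __init__(self, window_ms: int):
        self.window_ms = window_ms
        # (window_start, key) -> running count, updated per event
        self.counts = defaultdict(int)

    def _window_start(self, timestamp_ms: int) -> int:
        # Align the timestamp down to the start of its window
        return timestamp_ms - (timestamp_ms % self.window_ms)

    def ingest(self, key: str, timestamp_ms: int) -> None:
        self.counts[(self._window_start(timestamp_ms), key)] += 1

    def count(self, key: str, timestamp_ms: int) -> int:
        return self.counts[(self._window_start(timestamp_ms), key)]


# Events stream in and counts are queryable immediately, not after a batch run.
c = WindowedCounter(window_ms=60_000)
for ts in (1_000, 2_000, 61_000):
    c.ingest("page:home", ts)
assert c.count("page:home", 5_000) == 2   # two events in the first minute
assert c.count("page:home", 61_500) == 1  # one event in the second minute
```

A batch pipeline would arrive at the same counts, but only after the next scheduled run; the point of pairing real-time and batch on one platform is that both views come from the same underlying data.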
Continuuity is gaining traction with a number of web services, from gaming companies to ad networks to the largest area of growth for the startup, consumer intelligence. The real appeal for these folks, says Gray, is that they are using HBase to let their services operate on both real-time and batch workloads within the same platform in a way that offers the scale and performance they need.
I should note that we went through all of our conversation at Strata without even mentioning the one thing the company is most known for, which is their AppFabric offering. This is Continuuity’s cloud-based platform (though it can be kept inside the firewall too) that is built on top of Hadoop open-source components. The AppFabric serves as the user’s runtime and data platform for big data applications and is the culmination of the lessons Gray shared from large-scale systems using Hadoop for things that some said would never be possible.