
Druid Summons Strength in Real-Time
This has indeed been the year of Hadoop, which has become nearly synonymous with big data, but according to some who hover on the fringes of that ecosystem (and even some who sit in the middle of it), much of the technology behind it has been fetishized to the point of being neither useful nor comprehensible to actual business users.
From the Hortonworks exec who told us that many businesses are confused about where Hadoop belongs in their enterprise strategy, to Mike Driscoll, CEO of Metamarkets, who told us this week that there is no compelling message or context for enterprise technology folks, the signs suggest that the next big thing for big data might simply be ease of use (rather than ever more complicated functionality) for technology executives.
At the heart of this push to pair usability with added capability and performance sits the almighty cloud. By wicking away the hardware headaches and management hassles, companies like Driscoll’s can add features that boost speed without requiring customers to make a bevy of new hires.
Metamarkets focuses squarely on what it calls “web-scale” companies, which in its case tends to mean a number of large-scale digital publishers, including the Financial Times, as well as several higher-end online advertising platform vendors.
In an effort to expand its “real-time” capabilities, the company architected Druid, a streaming data store that powers its cloud-delivered analytics platform and was recently set free to spread its wings in the open source community. The data store, named after the shape-shifting Druid class found in role-playing games, could address some of the challenges of traditional database approaches, or at the very least give the open source community something new to chew on.
On that note, it’s important to define what’s really meant by real-time here. For an area like high-frequency trading, for instance, no amount of fiddling with Druid could make it battle-ready. However, since Driscoll defines real-time as under a thousand milliseconds (even when delivered via the cloud), he claims that for the company’s bread-and-butter use cases (ad platforms) this is certainly fast enough.
The company says that for the primary markets it serves, namely large-scale web publishing and digital advertising, the stack it’s built to run on (large, non-tricked-out Amazon EC2 instance types) already leverages a number of open source projects for processing, querying and visualizing high-volume streaming data. However, as Driscoll described for us, Druid offered up something the company wasn’t able to find elsewhere: the ability to stream data via an in-memory, column-based data store for lower-latency query response times.
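To make that column-store idea concrete, here is a minimal, purely illustrative sketch (a toy structure, not Druid’s actual implementation) of why keeping each field as its own in-memory column makes aggregation queries cheap: a filter-and-sum touches only the columns it needs rather than scanning whole rows.

```python
# Toy in-memory column store, for illustration only (not Druid's code).
# Events are kept as parallel columns, so an aggregation reads just the
# columns it needs instead of scanning full rows.
events = {
    "publisher": ["ft.com", "ft.com", "adco.example", "ft.com"],
    "clicks":    [3, 1, 7, 2],
    "revenue":   [0.42, 0.10, 1.25, 0.31],
}

def sum_where(store, filter_col, filter_val, agg_col):
    """Sum agg_col over the rows where filter_col equals filter_val."""
    matching_rows = (i for i, v in enumerate(store[filter_col]) if v == filter_val)
    return sum(store[agg_col][i] for i in matching_rows)

print(sum_where(events, "publisher", "ft.com", "clicks"))   # 6
print(sum_where(events, "publisher", "ft.com", "revenue"))  # ~0.83
```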
Metamarkets’ pitch with Druid is that Hadoop is not the cure-all for big data woes. Driscoll says that while it’s an excellent approach for massive data, the time it takes to chew through queries is too long. When asked how this might be solved with other approaches, including Impala, the new “real-time” Hadoop query engine developed by Cloudera, he said Impala is great at making queries fast but misses the mark when it comes to the interactivity of data.
As Driscoll put it, a real-time Hadoop tool like Impala can address the speed concerns around issuing a query to a Hadoop system, but that’s only one aspect of what is important to companies who are actually going to make use of a faster, more efficient Hadoop. “Speed is just one part of it; you can launch all kinds of MapReduce processes, and while the response might be fast, the matter of latency between when an event happens and when you know about it is the second critical component there.”
Part of what makes Druid noteworthy is that as events come into Metamarkets’ now open source data store, they are immediately accessible for querying. As Driscoll describes it, “anything that has a Hadoop-backed architecture can always have something built on top like a caching layer, but unless it’s integrated into the overall architecture, it’s hard to get any real-time visibility into your data.”
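A minimal sketch of that ingest-then-query behavior, assuming a made-up event shape and store (nothing here reflects Druid’s actual APIs): each event becomes visible to queries the moment it arrives, with no batch load in between.

```python
# Illustration only: events are queryable the instant they are ingested.
# The store, event fields and helper functions here are hypothetical.
import time

store = []  # stands in for a real-time, in-memory index

def ingest(event):
    event["ingested_at"] = time.time()
    store.append(event)  # visible to queries immediately, no batch step

def events_in_last(seconds):
    cutoff = time.time() - seconds
    return sum(1 for e in store if e["ingested_at"] >= cutoff)

ingest({"campaign": "spring_sale", "impressions": 1})
ingest({"campaign": "spring_sale", "impressions": 1})
print(events_in_last(60))  # 2 -- the answer reflects events that just arrived
```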
On the surface, Druid looks like a standard MPP database, like a Vertica or Netezza, but the key difference is that it’s been architected from scratch to fit within a cloud context. At the same time, it behaves somewhat like Hadoop, with data sharded across many nodes, partitioning and parallel processing for performance, not to mention a similar fault-tolerance mechanism that offers double replication of data (whereas Hadoop offers triple). The secondary value proposition is the in-memory component, which Driscoll says offers a 1,000x performance increase over traditional database approaches like Vertica, Netezza, Greenplum and even the much-heralded Dremel (not far removed from Impala).
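To illustrate the double-replication point, here is a toy placement sketch (the node names and round-robin assignment policy are hypothetical, not Druid’s actual segment-balancing logic): every data segment lives on two nodes, so losing any single node still leaves one copy of each segment online.

```python
# Toy sketch of sharding with double replication; purely illustrative.
from itertools import cycle

nodes = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical cluster
segments = [f"segment-{i}" for i in range(6)]      # hypothetical data shards

ring = cycle(nodes)
placement = {}
for seg in segments:
    placement[seg] = [next(ring), next(ring)]      # two copies, two nodes

for seg, owners in placement.items():
    print(seg, "->", owners)
# With two copies per segment, any single node failure leaves every
# segment with at least one live replica.
```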
When it comes to Dremel, for example, Driscoll says the differentiation lies in the fact that Dremel and others are still disk-bound, where read speeds are about 1,000x lower than when reading from DRAM. “In-memory databases have a massive performance advantage over traditional disk-backed databases, so they are a key component of Druid,” he said.
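Taking that quoted ratio at face value, a quick back-of-the-envelope calculation shows what it implies for interactivity; the 50-millisecond in-memory scan time below is a made-up figure chosen only to illustrate the arithmetic.

```python
# Back-of-the-envelope math using the ~1,000x read-speed gap quoted above.
in_memory_scan_s = 0.05          # hypothetical: a 50 ms scan served from DRAM
read_speed_ratio = 1_000         # ratio quoted in the article
disk_bound_scan_s = in_memory_scan_s * read_speed_ratio
print(f"{disk_bound_scan_s:.0f} s")  # 50 s -- well outside interactive range
```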
In addition to the aspects of Druid he mentioned, rolling restarts may be a worthwhile capability for users who don’t want to take the entire database down every time new code is deployed, since nodes can be brought down one at a time and then back up again in a rolling fashion.
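The rolling-restart idea is simple to sketch. The snippet below is a generic illustration with hypothetical node names and helper functions, not Druid’s actual operational tooling: each node is taken out, updated and confirmed healthy before the next one is touched, so the cluster as a whole stays up.

```python
# Generic rolling-restart sketch; node list and helpers are hypothetical.
import time

NODES = ["druid-node-1", "druid-node-2", "druid-node-3"]

def stop(node):    print(f"stopping {node}")             # stand-in for real ops
def deploy(node):  print(f"deploying new code to {node}")
def start(node):   print(f"starting {node}")
def healthy(node): time.sleep(0.1); return True          # stand-in health check

for node in NODES:
    stop(node)
    deploy(node)
    start(node)
    while not healthy(node):   # wait for the node to rejoin before moving on
        time.sleep(1)
# Queries keep being served by the nodes (and replicas) that remain up.
```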
“We started to see a clear need in the Hadoop ecosystem for something that could be real-time and fast at scale,” said Driscoll. “Besides, we believed strongly in having an open source component since the era of licensed software is coming to a close—the future belongs to cloud-backed SaaS.” The Metamarkets CEO went on to tell us that the real-world customers they deal with don’t want to think about Hadoop, databases or the underlying stack – all they want is results. Thus, being able to deliver the entire stack with reliability built in, and with the complexity of Hadoop and on-site hardware removed, is valuable. He claims that since the company open-sourced Druid in mid-October there have been hundreds of GitHub downloads and more than 20 forks of the database.
All the open source interest in the world is useless without an actual use case, but when it comes to users of Druid, the company was quick to point to Netflix as proof of scale. Driscoll told us that the video giant got wind of the open source data store, was granted an early look at the architecture and tested the offering.
The point is that it is able to scale: Metamarkets says it has processed over a trillion events on its platform and handles between 10 and 20 billion events per day. Driscoll said that overall, in a market like online advertising, there are around 100 billion micro-transactions per day across the vendors who cater to this high-volume, high-speed and quickly growing segment. It is therefore not surprising that some of the core innovation in big data technology is coming out of the online ad market. Metamarkets’ message of usability and performance, however, can be easy to lose since its core customer base is so targeted and the company doesn’t tend to pitch heavily to other verticals, even if it sees applicability in emerging areas.
We talked briefly about how online advertising’s big data needs are driving innovation that can be carried over to other industries like insurance, healthcare and smart grids. His thought on this crossover was noteworthy, if somewhat epic…
“What we’re witnessing in the world of digital advertising is the birth of a global digital nervous system. The same kind of wiring that helps it work can easily extend to other verticals…the tech emerging here will lead the next generation of tech for other industries.”