Raising a Platform to Meet New Verticals
Despite wide technological and industry chasms, there is a growing sense of universality when it comes to the general needs big data technologies are fulfilling. While companies in this ever-growing space tend to have their eyes on a key set of verticals, it’s often not a stretch to extend the usefulness of their offerings outside of their core markets.
A good example of this universality is embodied by Metamarkets. The company's claim is that it can leverage the three keywords of today's big data craze (Hadoop, cloud and in-memory) to power the event-based needs of big media, social and gaming companies at web scale.
Metamarkets brings a purpose-built analytics engine to the table, delivered as a cloud-based service that harnesses Amazon's platform, including Elastic MapReduce and S3. According to the company's VP of marketing, Ken Chestnut, making use of the cloud enables his company to divert resources into its core technology (Druid comes to mind) instead of wasting cycles "reinventing the wheel" on cloud storage and computing infrastructure.
It could be a new day for once tightly vertical-focused companies like Metamarkets, whose purpose-built big data products and platforms could find new appeal in verticals that might once have been completely out of reach. Chestnut told us this is due, in part, to a convergence of trends and needs that platforms like theirs, originally built to handle big web data at massive scale, were (unwittingly) made to serve.
While Chestnut wouldn’t name potential new market opportunities specifically, other than noting that they are still evaluating the full needs of other industries, he pointed to three threads that tie core industries to the outside world of enterprise big data needs, including:
- Ever-increasing volumes and sources of event-based data
- Current difficulties transforming that data into actionable insight using existing solutions
- Realization that tremendous competitive edge can be gained by shortening time to insight
Chestnut says these challenges have become more acute for several reasons, the most important of which is that with more events and transactions moving online, there is greater opportunity to measure and quantify results in ways that are non-intrusive to end users. Accordingly, he says that companies have started "instrumenting" all aspects of their operations as a result.
He continued, noting that traditional systems were not designed to handle the volume, velocity, and variety of data that these companies are capturing. “The consequence is that data is being generated faster than it can be processed and consumed resulting in a longer time to insight (the lag time between when data is captured and when it is available for analysis).”
Chestnut noted that customers in the company's core verticals of big-time web publishing, social media and gaming, all heavily dependent on event-based data, wanted the ability to "slice and dice", roll up and drill down on event data (click streams, ad impressions, user actions, etc.) by time, region, gender and other dimensions.
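The kind of slice-and-dice aggregation Chestnut describes can be sketched in a few lines of Python. The event fields and values below are hypothetical stand-ins for illustration, not Metamarkets' actual schema:

```python
from collections import Counter

# Hypothetical event stream: each record is one user action
# (ad impression, click, etc.) tagged with a few dimensions.
events = [
    {"type": "impression", "region": "US", "gender": "F", "hour": 9},
    {"type": "click",      "region": "US", "gender": "M", "hour": 9},
    {"type": "impression", "region": "EU", "gender": "F", "hour": 10},
    {"type": "impression", "region": "US", "gender": "F", "hour": 10},
]

def slice_and_dice(events, *dims):
    """Count events grouped by an arbitrary combination of dimensions."""
    return Counter(tuple(e[d] for d in dims) for e in events)

# "Slice" by region alone, or "drill down" to region x gender.
print(slice_and_dice(events, "region"))
print(slice_and_dice(events, "region", "gender"))
```

The hard part, of course, is not the group-by itself but doing it interactively over billions of rows, which is what motivates the architecture described next.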
He said that at first, they investigated a number of relational- and NoSQL-based alternatives, but none of them achieved the speed and scale required. As a result, the company developed their own distributed, in-memory, OLAP data store, Druid.
As Chestnut told us, “To overcome performance issues typically associated with scanning tables, Druid stores data in memory. The traditional limitation with this approach, however, is that memory is limited. Therefore, we distribute data over multiple machines and parallelize queries to speed processing and handle increasing data volumes. Our customers are able to scan, filter, and aggregate billions of rows of data at ‘human time’ with the ability to trade-off performance vs cost.”
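A toy version of the approach Chestnut outlines, partitioning rows across machines, scanning each partition in parallel and merging the partial aggregates, might look like the following. The sharding scheme and filter here are illustrative assumptions, not Druid's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def scan_shard(shard, predicate):
    """Scan one in-memory partition: filter rows, return a partial sum."""
    return sum(row["value"] for row in shard if predicate(row))

def parallel_aggregate(shards, predicate):
    """Fan the scan out over all shards and merge the partial results."""
    with ThreadPoolExecutor() as pool:
        partials = pool.map(scan_shard, shards, [predicate] * len(shards))
    return sum(partials)

# Rows spread over three "machines"; query: total value where region == "US".
shards = [
    [{"region": "US", "value": 10}, {"region": "EU", "value": 5}],
    [{"region": "US", "value": 7}],
    [{"region": "EU", "value": 3}, {"region": "US", "value": 1}],
]
print(parallel_aggregate(shards, lambda r: r["region"] == "US"))
```

Because each partial aggregate is independent, adding machines (shards) lets the scan keep pace with growing data volumes, which is the performance-versus-cost trade-off Chestnut mentions.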
On that note, the concept of handling requests in "human time" is important to the company and plays into its strategy around Hadoop. Chestnut says that Hadoop is very complementary to Metamarkets (and vice versa). "While Hadoop has tremendous advantages processing data at scale, it does not respond to ad-hoc queries in human time. This is where Metamarkets shines. We use Hadoop to pre-process data and prepare it for fast queries in Druid. When users log into Metamarkets, they can explore data in real time without limits in terms of navigation or speed."
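The division of labor Chestnut describes, with Hadoop pre-aggregating raw events offline so that interactive queries touch far fewer rows, can be sketched as a two-stage pipeline. The functions and field names here are hypothetical stand-ins for the real MapReduce job and Druid query path:

```python
from collections import defaultdict

def rollup(raw_events):
    """Batch step (the Hadoop stage): collapse raw events into
    pre-aggregated counts keyed by (hour, region)."""
    table = defaultdict(int)
    for e in raw_events:
        table[(e["hour"], e["region"])] += 1
    return dict(table)

def query(table, hour):
    """Interactive step (the Druid stage): answering from the
    roll-up scans far fewer rows than the raw event stream."""
    return {region: n for (h, region), n in table.items() if h == hour}

raw = [
    {"hour": 9, "region": "US"}, {"hour": 9, "region": "US"},
    {"hour": 9, "region": "EU"}, {"hour": 10, "region": "US"},
]
table = rollup(raw)
print(query(table, 9))
```

The batch step runs at Hadoop's pace; the interactive step stays in "human time" because the expensive aggregation has already been paid for.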