Raising a Platform to Meet New Verticals
Despite wide technological and industry chasms, there is a growing sense of universality in the needs that big data technologies fulfill. While companies in this ever-growing space tend to keep their eyes on a key set of verticals, it's often not a stretch to extend the usefulness of their offerings beyond their core markets.
A good example of this universality is embodied by Metamarkets. The company's claim is that it can leverage the three keywords of today's big data craze (Hadoop, cloud and in-memory) to power the event-based needs of big media, social and gaming companies at web scale.
Metamarkets brings a purpose-built analytics engine to the table, delivered as a cloud-based service that harnesses Amazon's platform, including Elastic MapReduce and S3. According to the company's VP of marketing, Ken Chestnut, making use of the cloud enables his company to focus resources on its core technology (Druid comes to mind) instead of wasting cycles "reinventing the wheel" on cloud storage and computing infrastructure.
It could be a new day for once narrowly vertical-focused companies like Metamarkets, whose purpose-built big data products and platforms could find new appeal in verticals that might once have been completely out of reach. Chestnut told us this is due, in part, to a convergence of trends and needs that platforms like theirs, originally built to handle big web data at massive scale, were (unwittingly) made to serve.
While Chestnut wouldn't name potential new market opportunities specifically, other than noting that the company is still evaluating the full needs of other industries, he pointed to three threads that tie its core industries to the wider world of enterprise big data needs:
- Ever-increasing volumes and sources of event-based data
- Current difficulties transforming that data into actionable insight using existing solutions
- Realization that tremendous competitive edge can be gained by shortening time to insight
Chestnut says these challenges have become more acute for several reasons, the most important being that as more events and transactions move online, there is greater opportunity to measure and quantify results in ways that are non-intrusive to end users. Accordingly, he says, companies have started "instrumenting" all aspects of their operations.
He continued, noting that traditional systems were not designed to handle the volume, velocity, and variety of data that these companies are capturing. “The consequence is that data is being generated faster than it can be processed and consumed resulting in a longer time to insight (the lag time between when data is captured and when it is available for analysis).”
Chestnut noted that customers in the company's core verticals, which include big-time web publishing, social media, and gaming (all heavily dependent on event-based data), wanted the ability to "slice and dice," roll up, and drill down on event-based data (click streams, ad impressions, user actions, etc.) by time, region, gender, and other dimensions.
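To make the OLAP vocabulary concrete, here is a minimal sketch of what a "roll-up" over event records looks like. The event fields and the `roll_up` helper are hypothetical illustrations, not Metamarkets' or Druid's actual API.

```python
from collections import defaultdict

# Hypothetical event records of the kind described: click counts
# tagged with time, region, and gender dimensions.
events = [
    {"hour": "09", "region": "US", "gender": "F", "clicks": 3},
    {"hour": "09", "region": "EU", "gender": "M", "clicks": 1},
    {"hour": "10", "region": "US", "gender": "M", "clicks": 2},
    {"hour": "10", "region": "US", "gender": "F", "clicks": 4},
]

def roll_up(events, dims, measure="clicks"):
    """Aggregate a measure over the chosen dimensions (a 'roll-up')."""
    totals = defaultdict(int)
    for event in events:
        key = tuple(event[d] for d in dims)
        totals[key] += event[measure]
    return dict(totals)

# "Slice and dice": the same events regrouped by different dimension sets.
print(roll_up(events, ["region"]))            # {('US',): 9, ('EU',): 1}
print(roll_up(events, ["hour", "gender"]))    # drill down to hour x gender
```

Drilling down is just re-running the aggregation with a finer dimension set; the hard part, as the article goes on to explain, is doing this over billions of rows fast enough to feel interactive.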
He said that at first, they investigated a number of relational- and NoSQL-based alternatives, but none of them achieved the speed and scale required. As a result, the company developed their own distributed, in-memory, OLAP data store, Druid.
As Chestnut told us, “To overcome performance issues typically associated with scanning tables, Druid stores data in memory. The traditional limitation with this approach, however, is that memory is limited. Therefore, we distribute data over multiple machines and parallelize queries to speed processing and handle increasing data volumes. Our customers are able to scan, filter, and aggregate billions of rows of data at ‘human time’ with the ability to trade-off performance vs cost.”
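The distribute-then-parallelize idea Chestnut describes can be sketched in a few lines: data is split into partitions, each worker scans, filters, and aggregates only its own partition, and the partial results are merged. This is an illustrative toy, assuming made-up partition data, and not Druid's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def scan_partition(partition):
    # Each worker scans, filters, and aggregates its own partition,
    # mirroring the "distribute data, parallelize queries" approach.
    return sum(row["value"] for row in partition if row["value"] > 0)

def parallel_aggregate(partitions):
    # Fan the scan out across partitions, then merge the partial sums.
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        return sum(pool.map(scan_partition, partitions))

partitions = [
    [{"value": 1}, {"value": -2}, {"value": 3}],
    [{"value": 5}, {"value": 0}, {"value": 7}],
]
print(parallel_aggregate(partitions))  # 16
```

In a real system the partitions live in the memory of separate machines rather than in threads of one process, which is what lets the aggregate scale with data volume at the cost of more hardware, the performance-versus-cost trade-off the quote mentions.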
On that note, the concept of handling requests in "human time" is important to the company and plays into its strategy around Hadoop. Chestnut says that Hadoop is very complementary to Metamarkets (and vice versa). "While Hadoop has tremendous advantages processing data at scale, it does not respond to ad-hoc queries in human time. This is where Metamarkets shines. We use Hadoop to pre-process data and prepare it for fast queries in Druid. When users log into Metamarkets, they can explore data in real-time without limits in terms of navigation or speed."
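The batch-then-serve pattern described above can be sketched as follows: a batch step (Hadoop's role) pre-aggregates the raw events once, and interactive queries then hit the pre-computed in-memory summary (Druid's role) instead of rescanning raw data. The function names and event fields here are hypothetical stand-ins for illustration only.

```python
# Raw event stream, as it might arrive before batch processing.
raw_events = [
    {"hour": "09", "region": "US", "clicks": 3},
    {"hour": "09", "region": "US", "clicks": 2},
    {"hour": "10", "region": "EU", "clicks": 1},
]

def batch_preprocess(events):
    """Stand-in for the Hadoop pre-processing step: roll raw events
    up to (hour, region) totals once, ahead of query time."""
    summary = {}
    for event in events:
        key = (event["hour"], event["region"])
        summary[key] = summary.get(key, 0) + event["clicks"]
    return summary

# Serving side: ad-hoc lookups against the in-memory summary answer in
# "human time" because the heavy scan already happened in the batch step.
summary = batch_preprocess(raw_events)
print(summary[("09", "US")])  # 5
```

The division of labor is the point: the slow, throughput-oriented system runs ahead of time, so the latency-oriented system only ever touches compact, pre-aggregated data.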