Raising a Platform to Meet New Verticals
Despite wide technological and industry chasms, there is a growing sense of universality when it comes to the general needs big data technologies are fulfilling. While companies in this ever-growing space tend to have their eyes on a key set of verticals, it’s often not a stretch to extend the usefulness of their offerings outside of their core markets.
A good example of this universality is embodied by Metamarkets. The company's claim is that it can leverage the three keywords of today's big data craze (Hadoop, cloud and in-memory) to power the event-based needs of big media, social and gaming companies at web scale.
Metamarkets brings a purpose-built analytics engine to the table, delivered as a cloud-based service that harnesses Amazon's platform, including Elastic MapReduce and S3. According to the company's VP of marketing, Ken Chestnut, making use of the cloud enables his company to divert resources into its core technology (notably Druid) instead of wasting cycles "reinventing the wheel" when it comes to cloud storage and computing infrastructure.
It could be a new day for once narrowly vertical-focused companies like Metamarkets, whose purpose-built big data products and platforms could find new appeal in verticals that were once completely out of reach. Chestnut told us this is due, in part, to a convergence of trends and needs that platforms like theirs, originally built to handle big web data at massive scale, were (unwittingly) made to serve.
While Chestnut wouldn't name potential new market opportunities specifically, other than noting that the company is still evaluating the full needs of other industries, he pointed to three threads that tie its core industries to the broader world of enterprise big data needs:
- Ever-increasing volumes and sources of event-based data
- Current difficulties transforming that data into actionable insight using existing solutions
- Realization that tremendous competitive edge can be gained by shortening time to insight
Chestnut says these challenges have become more acute for several reasons, the most important of which is that with more events and transactions moving online, there is greater opportunity to measure and quantify results in ways that are non-intrusive to end users. Accordingly, he says that companies have started "instrumenting" all aspects of their operations as a result.
He continued, noting that traditional systems were not designed to handle the volume, velocity, and variety of data that these companies are capturing. “The consequence is that data is being generated faster than it can be processed and consumed resulting in a longer time to insight (the lag time between when data is captured and when it is available for analysis).”
Chestnut noted that customers in the company's core verticals—big-time web publishing, social media and gaming, all heavily dependent on event-based data—wanted the ability to "slice and dice," roll up, and drill down on event data (click streams, ad impressions, user actions, etc.) by time, region, gender, and other dimensions.
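To make the "slice and dice" idea concrete, here is a minimal Python sketch of rolling up event records by chosen dimensions. The field names and data are purely illustrative, not Metamarkets' actual schema or Druid's API:

```python
from collections import defaultdict

# Hypothetical click-stream events; the fields are illustrative only.
events = [
    {"hour": "2012-07-01T00", "region": "US", "gender": "F", "clicks": 3},
    {"hour": "2012-07-01T00", "region": "US", "gender": "M", "clicks": 1},
    {"hour": "2012-07-01T01", "region": "EU", "gender": "F", "clicks": 2},
    {"hour": "2012-07-01T01", "region": "US", "gender": "F", "clicks": 4},
]

def slice_and_dice(rows, dimensions, metric="clicks"):
    """Roll events up by the chosen dimensions, summing the metric."""
    totals = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dimensions)
        totals[key] += row[metric]
    return dict(totals)

# "Drilling down" is just re-running the roll-up with more dimensions:
by_region = slice_and_dice(events, ["region"])
by_region_gender = slice_and_dice(events, ["region", "gender"])
```

The same handful of lines supports roll-up (fewer dimensions) and drill-down (more dimensions), which is why event data with well-chosen dimensions is so amenable to interactive exploration.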
He said that at first, they investigated a number of relational- and NoSQL-based alternatives, but none of them achieved the speed and scale required. As a result, the company developed their own distributed, in-memory, OLAP data store, Druid.
As Chestnut told us, “To overcome performance issues typically associated with scanning tables, Druid stores data in memory. The traditional limitation with this approach, however, is that memory is limited. Therefore, we distribute data over multiple machines and parallelize queries to speed processing and handle increasing data volumes. Our customers are able to scan, filter, and aggregate billions of rows of data at ‘human time’ with the ability to trade-off performance vs cost.”
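The scatter-gather pattern Chestnut describes—partition rows across machines, scan and filter each partition independently, then merge partial results—can be sketched in a few lines of Python. This is a toy simulation of the approach, not Druid's implementation; the shard layout and field names are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in: rows live in memory, split into shards across "nodes".
rows = [{"region": "US" if i % 3 else "EU", "clicks": i % 5}
        for i in range(10_000)]

def make_shards(data, n):
    """Partition rows round-robin into n in-memory shards."""
    return [data[i::n] for i in range(n)]

def scan_shard(shard, region):
    """Filter and aggregate one shard; each runs independently."""
    return sum(r["clicks"] for r in shard if r["region"] == region)

def parallel_query(shards, region):
    """Scatter the query to every shard, then merge the partial sums."""
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        partials = pool.map(scan_shard, shards, [region] * len(shards))
    return sum(partials)

shards = make_shards(rows, 4)
total = parallel_query(shards, "US")
```

Because each shard fits in one machine's memory and shards are scanned concurrently, adding machines raises both total capacity and query throughput—the performance-versus-cost trade-off Chestnut mentions.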
On that note, the concept of handling requests in "human time" is important to the company and plays into its strategy around Hadoop. Chestnut says that Hadoop is very complementary to Metamarkets (and vice versa). "While Hadoop has tremendous advantages processing data at scale, it does not respond to ad-hoc queries in human time. This is where Metamarkets shines. We use Hadoop to pre-process data and prepare it for fast queries in Druid. When users log into Metamarkets, they can explore data in real-time without limits in terms of navigation or speed."
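The pre-processing step Chestnut describes amounts to a batch roll-up: a map-reduce-style job compacts raw events into summarized rows keyed by time and dimension, and the much smaller result is what the serving store queries interactively. A minimal sketch of that idea, with an assumed (hour, ad, count) record shape:

```python
from collections import defaultdict

# Raw events as they might arrive; a batch job (Hadoop, in Metamarkets'
# case) would compact them before loading into the serving store.
raw = [
    ("2012-07-01T00", "ad-1", 1),
    ("2012-07-01T00", "ad-1", 1),
    ("2012-07-01T00", "ad-2", 1),
    ("2012-07-01T01", "ad-1", 1),
]

def rollup(events):
    """Map phase emits (hour, ad) keys; reduce phase sums the counts.
    The output is far smaller than the input, which is what makes
    interactive queries over the summarized rows fast."""
    summed = defaultdict(int)
    for hour, ad, count in events:
        summed[(hour, ad)] += count
    return [(hour, ad, n) for (hour, ad), n in sorted(summed.items())]

segments = rollup(raw)  # three summarized rows instead of four raw events
```

The batch layer trades latency for throughput; the in-memory layer then answers ad-hoc questions over the compacted output in "human time."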