Taking the Pain Out of Buying and Selling Data
We’re well into big data’s second decade, and we’ve made a ton of progress on many fronts. We have cloud-based systems with infinite storage capacity, sophisticated machine learning software that improves by the month, and powerful clusters turbo-charged with GPUs. Increasingly, what differentiates big data outcomes is the quantity and quality of the data we use, which typically means going offsite for additional sources. So why does buying and selling data have to be so hard?
That’s the question that Narrative founder and CEO Nick Jordan found himself asking after going through the rigmarole of the data buying process with a company that bought and sold millions of dollars’ worth of data every year.
“I actually looked for a solution to my problem, because it was a giant pain in the ass on both sides, frankly,” Jordan says. “What I found at the time was a bunch of data brokers, a bunch of people who would sit between buyers and sellers and say ‘Don’t worry, we’ll make it easy.’”
But instead of improving life for buyers and sellers of data, these data brokers mostly were out for themselves, Jordan says. The brokers marked up the cost of data, created opacity between the buy and sell sides, and generally did their best to put themselves in the driver’s seat as the digital middlemen – to the detriment of the actual buyers and the sellers.
“When somebody can basically sit in the middle and say, hey this stuff is so hard, we can make a business entirely out of almost arbitraging the underlying asset–to me, that just shows what an immature and inefficient market it was,” Jordan says.
It also presented a business opportunity, which Jordan is hoping to fill with Narrative, the data streaming platform that he founded five years ago. Like the data brokers, Narrative sits between the buyers and the sellers of data. But unlike typical data brokers, Narrative’s goal is to build transparency, automation, and trust on both sides of the data equation.
“The idea is to build a system that takes all the manual and inefficient bits of buying and selling data and make them more efficient through automation, standardization, and normalization,” Jordan says.
Narrative’s SaaS-based application provides a platform to connect buyers and sellers. On the buy side, it helps companies acquire and integrate second- and third-party data, typically for the purpose of AI or analytics. On the sell side, companies that license Narrative’s software have a mechanism for reaching multiple buyers in an orderly and streamlined fashion.
There’s a lot of work that goes into buying and using data, on both sides of the equation, according to Jordan. There are all the usual questions about the format that the data takes (CSV, Parquet, JSON, etc.), the units of measurement (imperial or metric, GPS or UTM), and the frequency at which it will be refreshed (monthly, daily, hourly, etc.).
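The format and unit questions above can be sketched with a toy normalization step. The column names and the imperial-to-metric conversion here are hypothetical stand-ins for the kind of standardization a buyer's pipeline would have to do on an external feed:

```python
import csv
import io

MILES_TO_KM = 1.609344

def normalize_rows(csv_text: str) -> list[dict]:
    """Convert an imperial-unit CSV feed into metric-unit records.

    A real normalization stage would also handle Parquet/JSON inputs,
    schema drift, and bad rows; this only shows the unit conversion.
    """
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rows.append({
            "id": row["id"],
            # Standardize on metric so datasets from different
            # sellers can be joined without unit mismatches
            "distance_km": round(float(row["distance_miles"]) * MILES_TO_KM, 3),
        })
    return rows
```

Multiply this kind of fix-up across dozens of fields and formats, and the appeal of having a platform do it once, consistently, becomes clearer.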
Once data scientists or analysts have studied a sample of the outside data and decided that it will work for their particular activity, then data engineers are called in to build the ETL pipelines to move the data, which can often take months.
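The ETL work those data engineers get called in for follows a familiar extract-transform-load shape. A toy, stdlib-only version is below; the field names and the in-memory "warehouse" are illustrative, since real pipelines run on schedulers and distributed engines:

```python
def extract(source: list[dict]) -> list[dict]:
    # In practice: pull from an API, an S3 bucket, or an SFTP drop
    return list(source)

def transform(records: list[dict]) -> list[dict]:
    # Standardize key names and drop records that fail basic validation
    return [
        {"user_id": r["uid"], "city": r["city"].title()}
        for r in records
        if r.get("uid") and r.get("city")
    ]

def load(records: list[dict], warehouse: list[dict]) -> None:
    # In practice: bulk-insert into a warehouse or data lake table
    warehouse.extend(records)

warehouse: list[dict] = []
raw = [{"uid": "42", "city": "new york"}, {"uid": None, "city": "boston"}]
load(transform(extract(raw)), warehouse)
```

Even in this ten-line form, the transform step encodes decisions (key names, validation rules) that must be renegotiated every time the seller changes the feed, which is where the months of engineering time go.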
On top of the logistical questions, there are legalities that must be taken into account. Buyers and sellers both must take measures to ensure that they’re not violating regulations for their particular geography. Finance teams typically get involved to obtain usage data and make the payments. And if anything about the data or the contract changes, all the engineers, analysts, data scientists, lawyers, and finance folks get to drop whatever they’re doing and revisit the matter.
People who claim that buying data is easy “are clearly people who have never bought data,” Jordan says. “It takes a village to buy data, so the goal is to give tools to all of those personas to make all of their jobs easier, so it’s not the cluster that it is today.”
Companies that buy data through Narrative can build data pipelines that combine datasets from multiple sources into a single, unique stream of data, which can be pushed to a data lake, data warehouse, or specific application. The company has a range of sellers on its platform today, offering many types of consumer data, including demographic data, geo-location data, behavioral data, and device data, among others.
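Combining datasets from multiple sellers into a single ordered stream, as described above, can be sketched with a merge over pre-sorted per-source feeds. The sources, timestamps, and fields here are hypothetical, and a production system would do this over streaming infrastructure rather than in-memory lists:

```python
import heapq

def merge_streams(*streams):
    """Merge pre-sorted (timestamp, record) streams into one feed.

    heapq.merge assumes each input is already sorted by the key,
    which is typical for append-only, time-ordered data feeds.
    """
    yield from heapq.merge(*streams, key=lambda item: item[0])

# Two hypothetical sellers' feeds, each sorted by timestamp
geo = [(1, {"src": "geo", "lat": 40.7}), (4, {"src": "geo", "lat": 40.8})]
demo = [(2, {"src": "demo", "age": 34}), (3, {"src": "demo", "age": 51})]

combined = list(merge_streams(geo, demo))
```

The merged output interleaves records from both sellers in timestamp order, ready to be pushed downstream to a lake, warehouse, or application.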
The New York City-based company is an AWS shop, and leverages a host of big data technologies, including Kinesis, Dynamo, and Spark, to facilitate the movement of large amounts of data. Besides the big data tech, the real advantage of Narrative is the taming of all the other questions, including the management of the relationship with data providers.
One Narrative customer is a hotel chain that needs to know the wine preferences of its clients, so that they can find their preferred variety waiting when they get to their room, Jordan says. The company currently has around 40 data partners selling data, and many more clients buying data, in some cases into the millions of dollars.
Narrative recently added the credit bureau TransUnion to the sell-side of the equation. TransUnion has a long history of providing high-quality data about nearly every American consumer, and that data is now on tap in the Narrative platform, which is targeted at Fortune 2000 firms.
Narrative, of course, isn’t the only company striving to simplify the data buying and selling process. Data marketplaces are popping up all over the place, which is a good thing. The more standardization that comes to this market, the better off we will all be.
July 23, 2021