Graphing Intent: Inside a Consumer Tracking Database
When the CTO of marketing analytics firm Qualia set out to build a new consumer behavior tracking system, he knew the database selection would be critical. Little did he know that Qualia would end up pushing the limits of the graph database chosen to power it.
Qualia is a New York City ad-tech firm that works with advertising agencies and major brands to help identify “intent signals” captured across people’s smartphones, tablets, and PCs. The company collects data on more than 90% of American households, and uses that data to essentially give clients a heads-up when a consumer is likely to be receptive to a pitch.
Timeliness and accuracy are key to what Qualia does. For example, if a consumer loaded a webpage about cars, that would be considered a “low quality” signal, and that engagement would go into a bucket with 300 million other “auto intenders,” explains Qualia CTO Niels Meersschaert. “It’s not really that meaningful,” he says. “Otherwise we’d sell 300 million cars.”
However, if that search occurs on a smartphone while the consumer is standing in a Ford dealership, that search suddenly has greater meaning. Qualia’s goal is to sift through all that real-time signal information, sort out the low quality signals from the higher quality ones, and decide which consumers are genuinely in the market.
As Meersschaert explains, it’s critical to differentiate who’s holding which device, where, and when. “By knowing the specific device that that signal occurred on, and knowing that multiple devices are associated with the same consumer or household in a sense, we are able to build a smarter model about what the actual intent is of that consumer at any moment in time,” he says.
The company relies on a steady stream of data collected via cookies and other tracking mechanisms used on the Web. The data is largely proprietary, and anonymized to prevent it from being abused. Managing the incoming data stream is one issue that Qualia has to deal with, but doing something useful with the data is much more important.
Qualia’s data naturally forms a tree-like structure, with parent-child relationships. Each household contains one or more people, and each of those people uses one or more computing devices, each of which is tracked with its own identification number.
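That household → person → device hierarchy maps cleanly onto a property graph, where nodes carry attributes and edges carry the relationships between them. The sketch below is purely illustrative (the node IDs, labels, and attributes are invented, not Qualia’s actual schema), but it shows the shape of the data:

```python
# Hypothetical property-graph model of the household -> person -> device tree.
# Node IDs, labels, and attributes here are made up for illustration.
nodes = {
    "h1": {"label": "Household", "zip": "10001"},
    "p1": {"label": "Person"},
    "p2": {"label": "Person"},
    "d1": {"label": "Device", "kind": "phone"},
    "d2": {"label": "Device", "kind": "tablet"},
}
edges = [
    ("h1", "MEMBER", "p1"),   # household h1 contains persons p1 and p2
    ("h1", "MEMBER", "p2"),
    ("p1", "USES", "d1"),     # person p1 uses two tracked devices
    ("p1", "USES", "d2"),
]

def children(node_id, rel):
    """Return the targets of every `rel` edge leaving node_id."""
    return [dst for src, r, dst in edges if src == node_id and r == rel]
```

Finding a household’s members, or a person’s devices, is then a direct edge lookup rather than a join across tables — e.g. `children("h1", "MEMBER")` yields the two person nodes.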
Describing that sort of data structure in a relational database would be possible, Meersschaert says, but it would be far from ideal.
“You’d end up making all these duplicate data entries, and your data would become far larger than it needs to be,” he tells Datanami. “To get access to any piece of data, you’d either have to have a very wide table, which will be very expensive to update and change, or you’re going to have to have a massive number of joins, which is very slow from an operational perspective.”
Meersschaert realized very early on that a graph database would be the right approach. He considered various open source graph databases, including one called Titan. While Titan had its pluses, the CTO found it lacking for Qualia’s specific use case.
“The problem we found with doing something with Titan with an HBase back end is the way HBase works,” he says. “There’s a single key that’s on each table, and so every single element that you might query against would be a new table. So you basically had to know all the indexes before you created the tables.”
That would have limited Titan’s usefulness, he says. “It didn’t give you the ability to change [the index] after the fact. It ended up having a tremendous amount of duplication of data when you do a columnar store, because you don’t have the concept of oh here’s a node, and all the things that describe that node are held together.”
Qualia eventually settled on Neo4j, a property graph database developed by Neo Technology. Meersschaert says the way data is stored in nodes and edges in Neo4j is a very good match to the tree-like structure of Qualia’s customer intent data.
Today Qualia’s graph database contains 3 billion nodes and 9 billion edges. That amounts to 1.2TB of data sitting across three nodes in a cluster of x86 servers. The data is sharded manually, since Neo Technology doesn’t support automatic sharding yet.
“We’re actually one of the largest installations of a graph database globally, which is kind of eye opening to us because we were just making this stuff work,” Meersschaert says.
It’s true that many databases are much larger than 1.2TB. But when you consider how storing the equivalent data in a relational structure would balloon the volume by a factor of 10 and slow queries to a crawl, you begin to see why a graph database is such a good fit for this particular use case, he says.
“A graph database allows you to be far more efficient in terms of traversing from one point to another,” Meersschaert says. A relational database could do the work at the end of the day, but “it doesn’t allow you to traverse from one point to another point along a path. This is where a graph-based database makes sense.”
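That traversal advantage is easy to see in miniature. The sketch below (edge list and IDs are hypothetical, not Qualia’s data) walks breadth-first from a household node down to its leaf device nodes — following edges directly, where a relational schema would need a join per hop:

```python
from collections import deque

# Hypothetical edge list: household -> person -> device (IDs are made up).
edges = [("h1", "p1"), ("h1", "p2"), ("p1", "d1"), ("p1", "d2"), ("p2", "d3")]

def devices_for_household(start, edges):
    """Breadth-first walk from a household node, collecting leaf (device) IDs."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    found, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        kids = adj.get(node, [])
        if not kids and node != start:
            found.append(node)  # no outgoing edges: a leaf, i.e. a device
        queue.extend(kids)
    return found
```

Each hop costs a constant-time adjacency lookup regardless of how many households exist, which is the essence of the efficiency Meersschaert describes.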
It’s all about using the right tool for the job, Meersschaert says, adding that MongoDB, Hadoop, BigQuery, and Spark all play a role at Qualia. “We use the technology that’s appropriate to the task at hand,” he says.
These other systems work with Neo4j to surface the right data in the right format, according to a case study posted to the Neo website. Qualia relies on a Ruby app called Spirograph to insert data into and pull data from the graph; that data is then processed in Hadoop, Spark, and BigQuery. The company also relies on a tool called Cerebro to convert user coordinates into commercial locations, which are stored in MongoDB.