
Graphing Intent: Inside a Consumer Tracking Database

When the CTO of marketing analytics firm Qualia set out to build a new consumer behavior tracking system, he knew the database selection would be critical. Little did he know that Qualia would end up pushing the limits of the graph database chosen to power the new system.
Qualia is a New York City ad-tech firm that works with advertising agencies and major brands to help identify “intent signals” captured across people’s smartphones, tablets, and PCs. The company collects data on more than 90% of American households, and uses that data to essentially give clients a heads-up when a consumer is likely to be receptive to a pitch.
Timeliness and accuracy are key to what Qualia does. For example, if a consumer loaded a webpage about cars, that would be considered a “low quality” signal, and that engagement would go into a bucket with 300 million other “auto intenders,” explains Qualia CTO Niels Meersschaert. “It’s not really that meaningful,” he says. “Otherwise we’d sell 300 million cars.”
However, if that search occurs on a smartphone while the consumer is standing in a Ford dealership, that search suddenly has greater meaning. Qualia’s goal is to sift through all that real-time signal information, sort out the low quality signals from the higher quality ones, and decide which consumers are actually ready to buy.
As Meersschaert explains, it’s critical to differentiate who’s holding which device, where, and when. “By knowing the specific device that that signal occurred on, and knowing that multiple devices are associated with the same consumer or household in a sense, we are able to build a smarter model about what the actual intent is of that consumer at any moment in time,” he says.
The company relies on a steady stream of data collected via cookies and other tracking mechanisms used on the Web. The data is largely proprietary, and anonymized to prevent it from being abused. Managing the incoming data stream is one issue that Qualia has to deal with, but doing something useful with the data is much more important.
Qualia’s data naturally forms a tree-like structure, with parent-child relationships. Each household contains one or more people, and each of those people uses one or more computing devices, each of which is tracked with its own identification number.
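That parent-child hierarchy can be sketched as a small in-memory tree. The field names below are purely illustrative (Qualia’s actual schema is proprietary), but the shape shows why a single downward walk recovers all the devices in a household without any table joins:

```python
# Hypothetical sketch of a household -> person -> device hierarchy.
# All IDs and field names are illustrative, not Qualia's real schema.
household = {
    "id": "hh-001",
    "people": [
        {
            "id": "p-1",
            "devices": [
                {"id": "dev-a", "type": "smartphone"},
                {"id": "dev-b", "type": "laptop"},
            ],
        },
        {
            "id": "p-2",
            "devices": [{"id": "dev-c", "type": "tablet"}],
        },
    ],
}

def devices_in_household(hh):
    """Walk the tree once, following parent-child links rather than joins."""
    return [d["id"] for p in hh["people"] for d in p["devices"]]

print(devices_in_household(household))  # ['dev-a', 'dev-b', 'dev-c']
```

In a graph database the same relationships are stored as edges, so this walk becomes a native traversal rather than application-side bookkeeping.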
Describing that sort of data structure in a relational database would be possible, Meersschaert says, but it would be far from ideal.
“You’d end up making all these duplicate data entries, and your data would become far larger than it needs to be,” he tells Datanami. “To get access to any piece of data, you’d either have to have a very wide table, which will be very expensive to update and change, or you’re going to have to have a massive number of joins, which is very slow from an operational perspective.”
Meersschaert realized very early on that a graph database would be the right approach. He considered various open source graph databases, including one called Titan. While Titan had its pluses, the CTO found it lacking for Qualia’s specific use case.

Niels Meersschaert is CTO at Neo4j user Qualia
“The problem we found with doing something with Titan with an HBase backend is the way HBase works,” he says. “There’s a single key that’s on each table, and so every single element that you might query against would be a new table. So you basically had to know all the indexes before you created the tables.”
That would have limited Titan’s usefulness, he says. “It didn’t give you the ability to change [the index] after the fact. It ended up having a tremendous amount of duplication of data when you do a columnar store, because you don’t have the concept of oh here’s a node, and all the things that describe that node are held together.”
Qualia eventually settled on Neo4j, a property graph database developed by Neo Technology. Meersschaert says the way data is stored in nodes and edges in Neo4j is a very good match to the tree-like structure of Qualia’s customer intent data.
Today Qualia’s graph database contains 3 billion nodes and 9 billion edges. The dataset totals 1.2TB, spread across three nodes in a cluster of x86 servers. The data is sharded manually, since Neo Technology doesn’t support automatic sharding yet.
“We’re actually one of the largest installations of a graph database globally, which is kind of eye-opening to us because we were just making this stuff work,” Meersschaert says.
It’s true that some databases are much larger than 1.2TB. But when you consider that storing the equivalent data in a relational structure would balloon its size by a factor of 10, and slow queries to a crawl, you see why a graph database is such a good fit for this particular use case, he says.
“A graph database allows you to be far more efficient in terms of traversing from one point to another,” Meersschaert says. A relational database could do the work at the end of the day, but “it doesn’t allow you to traverse from one point to another point along a path. This is where a graph-based database makes sense.”
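The traversal Meersschaert describes, moving from one point to another along a path, can be illustrated with a minimal adjacency map. The node names are hypothetical; the point is that resolving a device back to its household is a short walk along edges, not a series of joins:

```python
# Minimal sketch of path traversal: device -> person -> household.
# Edge map and IDs are hypothetical, for illustration only.
edges = {
    "dev-a": "p-1",    # device belongs to a person
    "dev-b": "p-1",
    "dev-c": "p-2",
    "p-1": "hh-001",   # person belongs to a household
    "p-2": "hh-001",
}

def household_of(node):
    """Follow parent edges until the top of the tree is reached."""
    while node in edges:
        node = edges[node]
    return node

print(household_of("dev-a"))  # hh-001
```

In a property graph like Neo4j, the equivalent query is a single pattern match over those edges; in a relational store it would require one join per hop.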
It’s all about using the right tool for the job, Meersschaert says, adding that MongoDB, Hadoop, BigQuery, and Spark all play a role at Qualia. “We use the technology that’s appropriate to the task at hand,” he says.
These other systems work with Neo4j to surface the right data in the right format, according to a case study posted to the Neo website. Qualia relies on a Ruby app called Spirograph to insert data into, and pull data from, the Neo4j graph; that data is then processed in Hadoop, Spark, and BigQuery. The company also relies on Cerebro to convert user coordinates into a commercial location, which is stored in MongoDB.
Related Items:
JanusGraph Picks Up Where TitanDB Left Off
Neo4j Touts 10x Performance Boost of Graphs on IBM Power FPGAs
Graph Databases Everywhere by 2020, Says Neo4j Chief