May 16, 2022

How to Accelerate Time-to-Insight with Knowledge Graph

Eugene Linkov

Let’s talk about how your organization can accelerate time-to-insight and improve the decision-making process with a knowledge graph.

Knowledge graphs describe, represent and link all enterprise data, regardless of source or structure. The promise of knowledge graphs is to give data consumers, human and automated, visibility into, access to, and understanding of all the available information within an enterprise.

A variety of industries are achieving this today with a growing list of use cases. Here are just a few examples:

  • Healthcare: making smarter, better-informed patient diagnoses and treatment decisions with AI-based recommendations.
  • Life Sciences: assembling research and sequencing genomes to accelerate drug discovery and approvals.
  • Manufacturing: improving end-to-end product and part reliability and quality.
  • Government: gaining a better understanding of security and risk threats, and improving operational or mission efficiencies.
  • Financial Services: improving customer experience, compliance, and risk management.

It all starts with the foundation. Many perceive the graph data model as applicable only to networks or similar constructs. But the killer application of graph is rapid, large-scale data integration. Here’s a proven process for realizing the promise of the graph data model, specifically using the W3C standards RDF and OWL.

Zippy Data On-boarding

Knowledge graph technology has hit a tipping point: near-real-time data on-boarding is now achievable. It’s quite feasible to create knowledge graphs straight from relational, semi-structured and unstructured data sources — without needing to create copies of source data.

This means the promise of initial data integration is achievable at the click of a button, creating an interconnected knowledge graph ready for analysis. In practice, robust data integration often involves a hybrid approach of automation and human interaction to knit your data fabric.

We first normalize away syntactic, structural and format heterogeneity by mapping each source into the RDF graph model. Then we harmonize the entities, their relationships and characteristics using the OWL knowledge representation model.
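The two steps above can be sketched in miniature. This is a hypothetical illustration, not any vendor's implementation: relational rows are normalized into RDF-style triples, and two records for the same real-world entity are then harmonized with an `owl:sameAs` assertion. All URIs and table names are invented for the example.

```python
# Hypothetical sketch: normalize relational rows into RDF-style triples,
# then harmonize duplicate entities with owl:sameAs. URIs are illustrative.
EX = "http://example.org/"
SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"

def rows_to_triples(table, rows, key):
    """Normalize: one triple per column value, keyed by the row's primary key."""
    triples = set()
    for row in rows:
        subject = f"{EX}{table}/{row[key]}"
        for column, value in row.items():
            triples.add((subject, f"{EX}{column}", value))
    return triples

# Two heterogeneous sources describing the same customer
crm_rows = [{"id": "c1", "name": "Acme Corp", "region": "EMEA"}]
erp_rows = [{"id": "9001", "name": "Acme Corp", "tier": "gold"}]

graph = rows_to_triples("crm", crm_rows, "id") | rows_to_triples("erp", erp_rows, "id")

# Harmonize: assert that the two records denote one entity
graph.add((f"{EX}crm/c1", SAME_AS, f"{EX}erp/9001"))

def describe(graph, subject):
    """Follow owl:sameAs links to collect every fact about an entity."""
    aliases = {subject}
    aliases |= {o for s, p, o in graph if s == subject and p == SAME_AS}
    aliases |= {s for s, p, o in graph if o == subject and p == SAME_AS}
    return {(p, o) for s, p, o in graph if s in aliases and p != SAME_AS}

facts = describe(graph, f"{EX}crm/c1")
```

Once harmonized, a single lookup sees facts from both sources: the CRM's `region` and the ERP's `tier` now describe one customer.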

Rapid Data Processing

With the base knowledge graph created, we apply transformations, computations, and reasoning to blend and prepare partitions of the knowledge graph for business analytics and exposure as data services.

A robust implementation applies the blending steps in a way that does not affect the base knowledge graph content. This provides numerous advantages, such as a more modular design, multiple independent views, hypothesis testing and rapid application development.
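One way to picture this non-destructive blending (a design sketch under assumed names, not a specific product's architecture) is to write derived triples into a separate layer and union it with the base graph to form a view, leaving the base untouched:

```python
# Assumed layering design: transformations produce a new set of derived
# triples; the base knowledge graph is never mutated.
base = {
    ("order:1", "ex:amount", 120.0),
    ("order:2", "ex:amount", 80.0),
}

def blend(base_graph):
    """Compute derived facts into a fresh layer; the base stays untouched."""
    total = sum(o for s, p, o in base_graph if p == "ex:amount")
    return {("report:q1", "ex:totalRevenue", total)}

layer = blend(base)
view = base | layer  # an independent view for analytics or hypothesis testing
```

Because each view is just a union of the immutable base plus its own layer, multiple teams can build independent views, or test competing hypotheses, over the same shared foundation.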

Accelerated Analysis

After knowledge graph content is blended and prepared for consumption, a good knowledge graph solution allows for exploration of the entire knowledge graph in arbitrary combinations — on-demand. This means users are empowered to answer known and unanticipated questions.

Out-of-the-box, so to speak, a knowledge graph solution enables broad and deep descriptive analytics. From this foundation, organizations build predictive and prescriptive analytics. The value of the knowledge graph here is that a user, application or automated client can immediately access all relevant data in a single query. Robust solutions automatically generate queries based on user interactions.

The knowledge graph should expose data services that allow AI/ML and other intelligent software clients to consume data through a uniform, simple interface — one that is not necessarily domain specific.

Brisk Action

With the previous steps established, decision making becomes more dynamic, more informed and more adaptable. Because this knowledge graph implementation makes data accessible with the flexibility to change rapidly, it improves time-to-decision. And since the knowledge graph can include all data sources, decisions are more accurate and complete. A knowledge graph built using RDF and OWL expects change, so adding new data sources or changing schemas is natural. This makes emergent requirements easy to accommodate, especially compared to traditional approaches.

Seriously, how long does this take?

You might be pleasantly surprised! As an example, the Anzo knowledge graph platform is deployed in production at a federal government organization, which uses Anzo to provide a 360-degree view of relevant information and to serve as the primary data warehouse. The Program Architect said, “The 8-month delivery of the Analytics Platform is viewed as a major achievement by [our] Leadership. The platform needs to connect data across [our] entire supply chain.” They now add new data sources to the knowledge graph in weeks vs. months, and new data models and dashboards in days vs. weeks. The savings for the initial use cases implemented in the first 90 days of using Anzo were over $17M.