June 29, 2017

Kinetica Gets $50M for Converged GPU Analytics

sakkmesterke/Shutterstock.com


Kinetica’s bold plan to build a converged real-time analytics platform that uses GPUs and in-memory techniques to power existing SQL queries alongside deep learning algorithms got a big boost today when it disclosed a $50 million Series A investment from venture capitalists.

The use cases for Kinetica's technology have evolved over time. It was initially incubated by the US Army and the NSA to build a system capable of tracking terrorist movements in real time, using data from drones and signals intelligence. After changing its name from GIS Federal and moving from Virginia to San Francisco, the company expanded into the enterprise market, and one of its first customers was the US Postal Service, which uses Kinetica to track and analyze the movement of about 200,000 vehicles at one-minute intervals.

Kinetica’s capability to speed SQL queries on top of superfast GPU processors has always been one of its main calling cards. It’s what led customers to use its database, dubbed GPUdb, to speed up slow Tableau or Qlik queries, or to provide a real-time analytics layer on top of Hadoop. But last year’s introduction of user-defined functions paved the way for Kinetica users to integrate machine learning and deep learning libraries, including TensorFlow, Caffe, and Torch, into the platform.

“We’re evolving as a company as well as technology,” Kinetica Co-Founder and CEO Amit Vij tells Datanami. “We’re becoming a converged platform that enables an enterprise to use a single unified technology to access a relational database and run various complex algorithms, analytics, and BI.”

An ‘Apple’ Experience

The idea is positioning GPUdb as a single platform that can execute the wide range of analytic workloads that today are loosely referred to as “big data analytics.” Or in Vij’s words, it’s about “creating an Apple experience” for his customers.

Kinetica aims to help customers combine traditional SQL and newer deep learning workloads on the same GPU clusters

It’s all about taking what was complex and difficult, and making it simple and easy, Vij says.

“It feels like organizations are having to duct tape five to 10 technologies that were loosely created on different release cycles and then spend several months, if not several years trying to put it into production,” Vij says, “and it’s still batch oriented and they don’t get real time results for their company.”

Hadoop was supposed to be that central platform for big data analytics, Vij acknowledges, but for whatever reason, it hasn’t worked out. “Hadoop is still fundamentally a good file system,” he says. “But [the Hadoop solutions] weren’t created to be an in-memory database, whereas that was our sole purpose when we first started.”

Hadoop is still good, Vij continues. “For organizations that have massive amounts of data, it’s an excellent place to store data and have the data lake,” he says. “We integrate with Hadoop, and we’re a fast layer that can be architected…on top to provide that real-time analytics for specific problems.”

Broad GPU Appeal

Kinetica today sees three primary use cases for its in-memory, GPU-accelerated relational database.

The first one is providing location-based analytics, which is applicable across industries and the government. The second one is speeding up OLAP queries submitted by users of BI tools. The third is the newest and involves incorporating the latest machine learning and deep learning algorithms into the data analytics workflow.

Vij elaborated on how deep learning and AI approaches will mesh with the Kinetica platform:

“If you have a multi-billion row dataset in Kinetica, you’re going to run various database filters and aggregates and create your training data set,” he says. “You take this new data set formulated by the database, and create a trained model within Google TensorFlow. And now you can execute that model against a table within Kinetica or a new materialized view that’s created.”
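The pattern Vij describes, filter and aggregate inside the database to produce a training set, train a model outside, then execute that model back against a table, can be sketched generically. The snippet below is an illustrative stand-in only: it uses Python's built-in sqlite3 in place of Kinetica's SQL engine, and a trivial threshold "model" in place of a TensorFlow network, since the actual Kinetica and TensorFlow APIs are not shown in the article.

```python
import sqlite3

# Illustrative sketch of the workflow: database-side filters/aggregates
# build the training set, a model is fit on that set, and the model is
# then executed against the full table. sqlite3 and the threshold model
# are stand-ins, not Kinetica's or TensorFlow's actual APIs.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (account TEXT, amount REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("a", 10.0), ("a", 200.0), ("b", 15.0), ("b", 12.0)])

# Step 1: filter/aggregate in the database to formulate the training set.
training = conn.execute(
    "SELECT account, AVG(amount) FROM trades GROUP BY account").fetchall()

# Step 2: "train" a toy model -- a global mean threshold standing in for
# a TensorFlow training step on the aggregated data.
threshold = sum(avg for _, avg in training) / len(training)

# Step 3: execute the trained model against the table, materializing
# the rows it flags.
flagged = conn.execute(
    "SELECT account, amount FROM trades WHERE amount > ?",
    (threshold,)).fetchall()
print(flagged)
```

In Kinetica's case, steps 2 and 3 would run inside user-defined functions on the same GPU cluster that holds the data, which is what avoids the export-to-Spark round trip described below.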

One Kinetica customer is a bank that’s using this approach to power its daily financial risk exposure calculations. The bank had been using a combination of tools, including models written in Python and executed on Spark, but it was only able to work on a subset of the data.

“We enable this organization to do this all in real time and operate on the entire data corpus,” Vij says. “Because there’s a reduced amount of complexity in installing and working with our technology, organizations in turn need less personnel. You don’t need a PhD that’s an expert in Hadoop and another one on Spark and another in ML. You can condense all of that.”

Analytics Looking Forward

The Series A investment of $50 million was co-led by Canvas Ventures and Meritech Capital Partners. Vij says the company plans to use the resources to fuel engineering, sales, and marketing initiatives. On the engineering front, the company plans to bolster its SQL compliance, work on further integrating machine learning and AI into the platform, and push deeper into the cloud.

Vij says the biggest bottleneck at this point is getting Nvidia GPUs into its clients’ data centers. That’s why the recent adoption of GPUs by cloud providers has been such a good thing for Kinetica’s business model.

“With our smaller customers, the folks who can’t afford an IBM Netezza, SAP HANA, or Oracle Exadata, many times they start out on one to two servers on the cloud,” Vij says. “Things have definitely evolved.”

Related Items:

Kinetica Aims to Broaden Appeal of GPU Computing

How GPU-Powered Analytics Improves Mail Delivery for USPS

GPU-Powered Terrorist Hunter Eyes Commercial Big Data Role
