Kinetica Joins Growing List of Kafka Connectors
As large enterprises increasingly move big data via Apache Kafka, the open source data-messaging platform is attracting more partners in the wake of Kafka pioneer Confluent Inc.'s release of a commercial platform for connecting data sources inside big companies.
The latest is Kinetica, the in-memory analytics database vendor, which announced this week that it has completed development and certification of its Kafka data connector under the Confluent partner program. Confluent was formed in 2014 by the creators of Apache Kafka, who developed the technology at LinkedIn (NYSE: LNKD).
San Francisco-based Kinetica said Tuesday (Feb. 28) its certified Kafka connector would allow customers to read and write data between Kafka and its analytics database. Available now, the connector is intended to allow customers to ingest real-time data streams from Apache Kafka for analysis on its GPU-accelerated database.
The company added that its Kafka connector can be deployed into a Confluent cluster through a control interface or from the command line using the Kafka Connect RESTful API. The Kafka Connect API handles the integration between a Kafka topic stream and a Kinetica instance.
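Deploying a connector through the Kafka Connect REST API amounts to POSTing a JSON configuration to a Connect worker. The sketch below illustrates that pattern; the connector class name, the `kinetica.url` property, and the hostnames are illustrative assumptions, not values taken from Kinetica's documentation.

```python
import json
from urllib import request

# Hypothetical sink-connector configuration; the class name and the
# "kinetica.url" property are assumed for illustration only.
connector_config = {
    "name": "kinetica-sink",
    "config": {
        "connector.class": "com.kinetica.kafka.KineticaSinkConnector",  # assumed
        "topics": "orders",
        "kinetica.url": "http://kinetica-host:9191",  # assumed property name
        "tasks.max": "1",
    },
}

def deploy(connect_url: str, payload: dict) -> bytes:
    """POST a connector config to the Kafka Connect REST API."""
    req = request.Request(
        connect_url + "/connectors",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.read()

# deploy("http://localhost:8083", connector_config)  # needs a live Connect worker
```

The same endpoint is what a control interface would call under the hood, which is why the article notes that either a UI or the command line can drive deployment.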
Beyond linking users to the company’s GPU-based in-memory database, Kinetica stressed that the Kafka connector enables machine learning, deep learning and online analytical processing on real-time streaming data. The connector is also touted as providing improved data integration based on Kafka’s Connect API, along with the ability to build stream-processing applications with the Kafka Streams API.
Chris Prendergast, Kinetica’s vice president of business development and alliances, noted in a blog post that retailers could, for example, use the Kafka connector to capture real-time streaming geospatial data from shoppers’ mobile phones as Kafka streams, “combine it with customer loyalty data in Kinetica, and push out targeted, personalized, location-based offers through mobile apps.”
The company described its data link as consisting of two components for integrating the Kinetica database with Kafka. The first is a “source connector” that streams data out of the Kinetica database into Kafka, delivering records in a flat Kafka format with one field per table column. A separate Kafka topic is created for each database table configured through the connector.
The second component is a Kafka “sink connector” that consumes a data stream from a Kafka topic (such as one populated by the source connector) and writes it into the Kinetica database.
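The two-component design described above can be sketched in a few lines: the source side creates one topic per table and emits flat records with one field per column, which the sink side then writes back to a database table. The topic prefix and helper names below are illustrative assumptions, not the connector's actual conventions.

```python
# Sketch of the per-table topic naming and flat record layout described
# above; the "kinetica." prefix and helper names are assumptions.
TOPIC_PREFIX = "kinetica."

def topic_for_table(table: str) -> str:
    # The source connector creates a separate Kafka topic for each table.
    return TOPIC_PREFIX + table

def row_to_record(columns, row):
    # One field per table column, matching the flat format described above.
    return dict(zip(columns, row))

record = row_to_record(["id", "city", "lat", "lon"], [1, "SF", 37.77, -122.42])
# record == {"id": 1, "city": "SF", "lat": 37.77, "lon": -122.42}
```

Keeping one topic per table is what lets the sink connector map each incoming stream unambiguously onto a destination table.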
Kinetica joins a growing list of Confluent partners including Amazon Web Services (NASDAQ: AMZN), DataStax, Microsoft Azure (NASDAQ: MSFT), MongoDB, Splunk and others.
The source code for the Kinetica Connector is available here.