Confluent today unveiled a string of new enhancements for its Apache Kafka-based streaming data offering, including new source and sink data connectors, new controls for expanding and shrinking Kafka clusters, and enhanced data quality capabilities.
As the commercial entity behind Apache Kafka, Confluent sells multiple Kafka products. Its flagship offering is Confluent Cloud, a hosted Kafka environment that Confluent runs on behalf of its customers. It also sells Confluent Platform, a Kafka-based streaming data solution that customers can deploy and manage themselves.
With its Confluent Q1 ‘22 Launch of Confluent Cloud, the San Francisco company is rolling out several new features to its hosted cloud offering. While Confluent Platform accounts for the bulk of the company’s revenues, Confluent views Confluent Cloud as its primary growth vehicle going forward.
At the top of the list of new features is a host of new source and sink data connectors for Microsoft Azure Synapse Analytics, AWS’s Amazon DynamoDB, Databricks Delta Lake, Google Cloud’s Bigtable, and Redis. All told, Confluent now offers more than 50 pre-built data connectors, the company says.
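As a rough sketch of what wiring up one of these connectors involves, the snippet below assembles a minimal configuration for a hypothetical DynamoDB sink as a plain dictionary. The property names and connector class are illustrative assumptions, not Confluent's exact connector schema; the published connector documentation is the authority on the real fields.

```python
import json


def dynamodb_sink_config(topics, aws_region, table_name):
    """Build a minimal sink-connector configuration.

    All property names here are hypothetical placeholders standing in
    for whatever the managed connector actually expects.
    """
    return {
        "name": "dynamodb-sink",
        "connector.class": "DynamoDbSink",  # assumed class alias
        "topics": ",".join(topics),         # Kafka topics to drain
        "aws.region": aws_region,           # assumed property name
        "table.name": table_name,           # assumed property name
        "tasks.max": "1",                   # start with a single task
    }


# Example: stream the "orders" topic into a DynamoDB table.
config = dynamodb_sink_config(["orders"], "us-east-1", "orders-table")
print(json.dumps(config, indent=2))
```

In practice a configuration like this would be submitted through Confluent Cloud's UI or API rather than built by hand, but the shape of the exercise is the same: name the topics, name the destination, and let the managed connector handle the movement.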
Confluent also announced new integrations with observability platforms Datadog and Prometheus. “With a few clicks, operators have deeper, end-to-end visibility into Confluent Cloud within the monitoring tools they already use,” the company says.
The company is also rolling out a new cluster option for customers concerned about over-provisioning their Kafka clusters. Called dedicated clusters, the new offering can be provisioned on demand with just a few clicks, Confluent says, and includes “self-service controls” that allow users to add and remove capacity as needed through the GUI, an API, or the command-line interface.
“With automatic data balancing, these clusters constantly optimize data placement to balance load with no additional effort,” the company says. “Additionally, minimum capacity safeguards protect clusters from being shrunk to a point below what is necessary to support active traffic.”
Confluent says the new dedicated clusters, when paired with its new Load Metric API, give organizations the real-time information they need to decide when to expand and when to shrink capacity. “With this new level of elastic scalability,” the company says, “businesses can run their highest throughput workloads with high availability, operational simplicity, and cost efficiency.”
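A capacity decision driven by such a load metric might be sketched as follows. The endpoint, metric name, and payload shape below are assumptions modeled on Confluent's public Metrics API, not verified details of the Load Metric API; only the threshold policy at the end is generic.

```python
# Assumed Metrics API query endpoint; verify against Confluent's docs.
METRICS_ENDPOINT = "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query"


def build_load_query(cluster_id: str, granularity: str = "PT1M") -> dict:
    """Assemble a metrics-query payload for one cluster (hypothetical shape)."""
    return {
        "aggregations": [
            # Assumed metric name for cluster load.
            {"metric": "io.confluent.kafka.server/cluster_load_percent"}
        ],
        "filter": {
            "field": "resource.kafka.id",
            "op": "EQ",
            "value": cluster_id,
        },
        "granularity": granularity,
        "intervals": ["PT1H/now"],  # look back over the last hour
    }


def should_expand(load_percent: float, high_water: float = 80.0) -> bool:
    """Simple policy: expand once sustained load crosses a threshold."""
    return load_percent > high_water


def should_shrink(load_percent: float, low_water: float = 30.0) -> bool:
    """Shrink only when load sits well below capacity."""
    return load_percent < low_water


payload = build_load_query("lkc-abc123")
print(payload["filter"]["value"])
```

The thresholds are arbitrary illustrations; the point is that a real-time load figure turns expand/shrink decisions into a policy check rather than guesswork, which is the premise of pairing the dedicated clusters with the Load Metric API.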
The third major new feature added to Confluent Cloud is schema linking, which Confluent positions as a data quality capability that should give customers the confidence that data streamed across cloud and hybrid environments is compatible and trustworthy.
When paired with the cluster linking capability that Confluent delivered in the fall, schema linking will help to maintain high data integrity while providing real-time failover capabilities and disaster resiliency.