Confluent Introduces Confluent Platform 5.2

April 2, 2019 — In a blog post today, Confluent introduced Confluent Platform 5.2. The blog post is included in part below.


We are very excited to announce the general availability of Confluent Platform 5.2, the event streaming platform built by the original creators of Apache Kafka.

Event streaming has become one of the few foundational technologies that sit at the heart of modern enterprises, redefining how you connect every existing application, while enabling you to build an entirely new category of applications—what we call contextual event-driven applications.

What do we mean by contextual event-driven applications? Well, infrastructure to support event-driven applications has been around for decades; mere messaging is nothing new. The main difference is that Confluent Platform gives you the power to process events as well as store and understand an application’s entire event history at the same time, adding critically important historical context to real-time event streaming.

Think of a retailer that builds an application that matches real-time online transactions with inventory information stored in a database to ensure the availability of purchased items. The event streaming platform is a new class of data infrastructure designed to enable the kinds of applications that organizations in every industry and across the globe are building now.
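As a rough, hedged illustration of that pattern, the sketch below uses the confluent-kafka Python client (one of the clients discussed later in this post) to check purchase events against inventory state rebuilt from a change stream. The topic names, message fields and in-memory table are hypothetical, and a real deployment would more likely express the purchase/inventory join in KSQL or Kafka Streams rather than in application code.

    # Minimal sketch: match purchase events against inventory state kept up to date
    # from a change stream. Topic names and message formats are hypothetical.
    import json
    from confluent_kafka import Consumer

    consumer = Consumer({
        'bootstrap.servers': 'localhost:9092',
        'group.id': 'availability-checker',
        'auto.offset.reset': 'earliest',
    })
    consumer.subscribe(['inventory-changes', 'purchases'])

    inventory = {}  # sku -> units on hand, rebuilt from the inventory change stream

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            event = json.loads(msg.value())
            if msg.topic() == 'inventory-changes':
                # Update local state from the database change stream
                inventory[event['sku']] = event['units']
            else:
                # A purchase: check availability against current inventory state
                available = inventory.get(event['sku'], 0) >= event['quantity']
                print(f"order {event['order_id']} available={available}")
    finally:
        consumer.close()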

At Confluent, we are devoted to delivering an enterprise-ready event streaming platform that enables you to realize this technological shift at scale in an increasingly hybrid and multi-cloud landscape.

Confluent Platform 5.2 represents a significant milestone in our efforts across three key dimensions:

  1. It allows you to use the entire Confluent Platform free forever in single-broker Kafka clusters, so you are freer than ever to start building new event streaming applications right away. We are also shipping librdkafka 1.0, which brings our C/C++, Python, Go and .NET clients closer to parity with the Java client.
  2. It adds critical enhancements to Confluent Control Center that will help you meet your event streaming SLAs in distributed Apache Kafka environments at greater scale.
  3. With our latest version of Confluent Replicator, you can now seamlessly stream events across on-prem and public cloud deployments.

Accelerate the development of contextual event-driven applications

The true value of the event streaming platform is the new generation of contextual event-driven applications that you can build. We want to enable every developer to build on Apache Kafka using the powerful capabilities of Confluent Platform, and we’re doing this in a major way with the introduction of a new license just for developers.

At Confluent, we believe in the power of free and open software. Several components of Confluent Platform have always been available for free, such as KSQL, Schema Registry and REST Proxy, which are licensed under our Confluent Community License.

Historically, Control Center, Replicator, enterprise security features, and the other commercial features of Confluent Platform have had a free thirty-day Evaluation license. Often, though, that’s not long enough for you as a developer to experiment freely and figure out how you can use the entire platform to solve the next generation of problems you are facing.

This is why we’re excited to introduce the newly available Developer License, which allows you to run all commercial features of Confluent Platform for free on single-broker Kafka clusters. This means you now have access, without any time constraints, to tools such as Control Center, Replicator, security plugins for LDAP and connectors for systems such as IBM MQ, Apache Cassandra™ and Google Cloud Storage. (This is great news in regard to our commercial features, but remember that you already had this benefit for Apache Kafka and our community features, which you can always use for free on an unlimited number of Kafka brokers.)

In recent releases, we have made major enhancements to Control Center directly aimed at application developers. With unrestricted access to Control Center, you will be able to browse messages within topics, view and edit schemas, add and remove connectors and write KSQL queries using the GUI, among other things. We think having access to this advanced set of capabilities long before you’re ready to deploy any code will help you build new contextual event-driven applications faster.

librdkafka is now 1.0, and so are the Confluent clients!

Confluent Platform 5.2 proudly introduces librdkafka 1.0. This is a big milestone, because it brings this popular client library closer to parity with the Java client for Kafka. Here are the high points:

  • Idempotent producer: Provides exactly once producer functionality and guaranteed ordering of messages.
  • Sparse connections: Clients now connect to a single bootstrap server to acquire metadata and only communicate with the brokers they need to, greatly reducing the number of connections between clients and brokers, and helping mitigate connection storms.
  • ZSTD compression: Provides support for the real-time compression algorithm maintained by Facebook. (You may have thought lossless compression was a solved problem, but this new scheme is a meaningful improvement.)
  • max.poll.interval.ms (KIP-62): Allows users to set the session timeout significantly lower to detect process crashes faster. Applications are required to call rd_kafka_consumer_poll()/rd_kafka_poll() at least every max.poll.interval.ms, or else the consumer will automatically leave the group and lose its assigned partitions. With great timeout detection latency comes great responsibility. (See the configuration sketch after this list.)
  • librdkafka version 1.0.0: API (C and C++) and ABI (C) compatible with older versions, but note changes to configs (e.g., acks=all is now default).
  • Additional enhancements and bug fixes, all carefully documented in the release notes.
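To make these items concrete, here is a minimal, hedged configuration sketch using the confluent-kafka Python client (which wraps librdkafka). The broker address, topic and group names are placeholders, and ZSTD compression additionally requires a librdkafka build with zstd support.

    from confluent_kafka import Producer, Consumer

    # Producer: idempotence gives exactly-once, in-order delivery per partition;
    # ZSTD is the compression codec newly supported in librdkafka 1.0.
    producer = Producer({
        'bootstrap.servers': 'localhost:9092',
        'enable.idempotence': True,   # implies acks=all and bounded retries
        'compression.type': 'zstd',
    })
    producer.produce('example-topic', key='k', value='v')
    producer.flush()

    # Consumer: max.poll.interval.ms (KIP-62) bounds the time between poll() calls,
    # so session.timeout.ms can stay low to detect process crashes quickly.
    consumer = Consumer({
        'bootstrap.servers': 'localhost:9092',
        'group.id': 'example-group',
        'session.timeout.ms': 10000,
        'max.poll.interval.ms': 300000,  # exceed this between polls and the consumer leaves the group
    })
    consumer.subscribe(['example-topic'])
    msg = consumer.poll(1.0)  # may return None if no message arrives in time
    consumer.close()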

Because our Confluent clients for Python, Go and .NET are all based on librdkafka, each of them includes these improvements by virtue of this upgrade. On top of this, we’ve completed a major overhaul of the popular .NET client. The new API is significantly easier to use, more idiomatic and extensible. Some highlights:

  • It includes an AdminClient for working with topics, partitions and broker configuration
  • It provides more idiomatic and straightforward handling of errors
  • It provides a powerful serialization API with explicit support for both async and sync serializers
  • Clients are now constructed using static configuration classes and the builder pattern
  • Once again, make sure to take a look at the full release notes

Now that we have equipped you with new ways to produce and consume messages, let’s talk about new ways to transform those messages using the power of stream processing and the enhancements we made to KSQL.


To read the full post, follow this link

Source: Confluent
