October 4, 2016

Acquisition Validates Concord’s Event-Based Approach


We suggested during the summer that companies looking for a stream-processing engine for fast data applications demanding high throughput and low latency may want to check out startup Concord Systems Inc.

Someone did: Akamai Technologies Inc. (NASDAQ: AKAM) announced last week it is acquiring New York City-based Concord in an all-cash deal. Akamai, the content delivery network services vendor based in Cambridge, Mass., said the acquisition would complement its existing platform data processing capabilities while boosting its Internet of Things product roadmap.

Along with offering lower latency, in the range of tens of milliseconds, Concord has managed to differentiate itself from the growing number of stream processing frameworks by adopting an event processing approach, built in C++, that runs on top of the Apache Mesos cluster manager.

In announcing the deal, Akamai said it would use Concord’s framework “across a number of different use cases” to aggregate, filter and analyze widely distributed big data. Concord brings a new “approach to some of the challenges and hard problems related to processing big data,” Ash Kulkarni, senior vice president of Akamai’s Web Experience Division, noted in a statement. “Their team has developed critical software that enables the dynamic deployment of customer logic at scale.”

Along with IoT applications, Akamai will use Concord’s distributed stream processing framework to boost the performance of its web and mobile media delivery platforms. Unlike Spark Streaming, which uses a micro-batch paradigm, Concord processes each event as it arrives. Its jobs (based on “operators” created by developers) also run as a containerized service on Mesos, which is emerging as a viable alternative to Hadoop.
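
The distinction is easiest to see side by side. The sketch below is not Concord’s client API; it is a minimal Python illustration of the two models, using hypothetical names such as process_record and emit, that contrasts a buffered micro-batch loop with a per-event operator callback.

```python
import itertools
from collections import Counter

def emit(update):
    """Stand-in for sending a result to a downstream operator (illustrative)."""
    print(update)

# Micro-batch style (Spark Streaming-like): records are buffered into small
# batches (here by count; real systems typically use a time interval), so
# downstream results only appear once per batch boundary.
def micro_batch(records, batch_size=3):
    counts = Counter()
    it = iter(records)
    while True:
        batch = list(itertools.islice(it, batch_size))  # one "micro-batch"
        if not batch:
            break
        counts.update(batch)
        emit(dict(counts))  # downstream sees an update once per batch

# Event-at-a-time style (Concord-like operator): a callback fires for every
# record as it arrives, so an updated result can be emitted per event.
class WordCounter:
    def __init__(self):
        self.counts = Counter()

    def process_record(self, record):  # invoked once per incoming event
        self.counts[record] += 1
        emit({record: self.counts[record]})  # downstream sees it immediately

if __name__ == "__main__":
    stream = ["iot", "web", "iot", "mobile", "iot", "web"]
    micro_batch(stream)
    operator = WordCounter()
    for record in stream:
        operator.process_record(record)
```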

Along with flexible APIs for stream processing, another attribute of Concord’s event-based processing approach is its integration of a distributed router inside each task executor. That, the startup claimed earlier this year, yields a 10-fold performance increase over Spark Streaming and Apache Storm.

Concord also touted its dynamic topology model as more flexible than competing frameworks, which often impose rigidity. That means a pipeline running from message queue to database need not be restarted when jobs are deployed or killed. “You can deploy a new operator that also consumes from the same streams of data, without affecting the other jobs that are running,” Concord co-founder Shinji Kim told Datanami earlier this year.
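
To make the “no restart” claim concrete, here is a toy sketch of a dynamic topology, assuming a simple in-process stream registry rather than Concord’s actual runtime (StreamRegistry, subscribe and publish are illustrative names): a second operator attaches to a stream that is already being consumed, and the operator that was already running is never stopped.

```python
from collections import defaultdict

class StreamRegistry:
    """Toy model of a dynamic topology: named streams whose subscribers can be
    attached while the pipeline keeps running (illustrative only)."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, stream_name, operator):
        # A newly deployed operator attaches to an existing stream; operators
        # already consuming that stream are left untouched.
        self.subscribers[stream_name].append(operator)

    def publish(self, stream_name, record):
        # Fan each record out to every operator currently subscribed.
        for operator in self.subscribers[stream_name]:
            operator.process_record(record)

class Printer:
    def __init__(self, label):
        self.label = label

    def process_record(self, record):
        print(f"{self.label}: {record}")

if __name__ == "__main__":
    registry = StreamRegistry()
    registry.subscribe("clicks", Printer("fraud-detector"))
    registry.publish("clicks", {"user": 1})

    # Later, a second job is "deployed" against the same stream, with no
    # restart of the registry or of the fraud-detector operator.
    registry.subscribe("clicks", Printer("dashboard"))
    registry.publish("clicks", {"user": 2})
```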

Additional flexibility is provided via client API support for several languages beyond C++, including Go, Java, Python, Ruby and Scala.

Kim said in a blog post that her small team, which also includes Concord co-founder Alexander Gallego, would be joining Akamai’s Platform Engineering unit, “where we will be building a new real-time event processing platform using Concord’s technology.”

Recent items:

Concord Claims 10X Performance Edge on Spark Streaming

Merging Batch and Stream Processing in a Post Lambda World

 
