Elasticsearch Plucks VP from Competitor Splunk
Splunk has enjoyed the first-mover advantage when it comes to analyzing machine-generated data for fun and profit. But as the Internet of Things begins to take off and the machine-generated data seriously begins to fly, the developer of proprietary software is finding increased competition from the open source realm, namely from Elasticsearch, which just snatched away a Splunk VP.
Frustrated by the lack of scalable search engines, Shay Banon, the creator of the Compass search engine, decided to do something about it. He took the same Lucene code that underpinned the Compass product and tweaked it to run in a multi-tenant architecture. He also added a REST API and the capability to store data as JSON documents, and voila! Elasticsearch (the open source product) was born.
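To give a feel for that REST-plus-JSON design, here is a minimal sketch of what indexing a document looks like. The index name, document ID, and field names below are hypothetical, and we only construct the request rather than send it:

```python
import json

# A log event stored as a JSON document -- no schema declaration needed up front.
doc = {
    "timestamp": "2014-08-01T12:30:00Z",
    "host": "web-03",                        # hypothetical field names
    "message": "GET /index.html 200 5ms",
}

# Elasticsearch exposes everything over HTTP: PUT the document to a
# /<index>/<type>/<id> path and it becomes indexed and searchable.
url = "http://localhost:9200/logs/event/1"   # hypothetical index, type, and id
body = json.dumps(doc)

print(url)
print(body)
```

Because the interface is plain HTTP and JSON, any language with an HTTP client can talk to the cluster without a special driver.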
That was 2010. Four years later, Banon is the CTO of Elasticsearch, an up-and-coming commercial open source software company with operations in Amsterdam, the Netherlands, and Los Altos, California. The company’s flagship offering is the ELK stack, which consists of three open source products: the Elasticsearch search engine, the Logstash log data cleansing solution, and the Kibana data visualization engine. Together, the ELK stack components give users the ability to search and perform analytics across many types of structured and unstructured data.
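Searches themselves are also expressed as JSON documents sent over the same REST interface. Here is a sketch of the kind of query body Elasticsearch accepts (field names are hypothetical, and the query DSL has evolved across versions; Kibana issues queries of this general shape on the user's behalf):

```python
import json

# A full-text search over an unstructured log field -- the query is
# itself just a JSON document POSTed to an index's _search endpoint.
query = {
    "query": {
        "match": {"message": "connection timeout"}  # free-text match on log lines
    },
    "size": 10,                                     # return at most ten hits
}

print(json.dumps(query, indent=2))
```

The same document-in, document-out pattern covers both free-text search on unstructured fields and exact filtering on structured ones, which is what lets one stack serve both kinds of data.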
With the ELK stack, Elasticsearch has set its sights on becoming a new big data platform, and perhaps displacing more established companies like Splunk, NoSQL database vendors, and the Hadoop vendors. The company has big momentum behind its product, which it says is being downloaded 500,000 times per month. Target, Bloomberg, The New York Times, Facebook, GitHub, Netflix, Yelp, and Verizon are all using Elasticsearch, the company says.
The company, which is backed by Benchmark Capital and Index Ventures, has made some high profile hires over the past 12 months, including veterans from VMware, Citrix, and Box. And this week, the company launched a shot across Splunk’s bow when it announced the hiring of Gaurav Gupta, the former vice president of products at Splunk, to be the first vice president of products for Elasticsearch.
Gupta, who also worked at Google and Gateway, says he’s thrilled to be joining the company at such an exciting time. “It’s obvious Elasticsearch is building something truly disruptive,” he said in a statement.
While the company is seeking to displace Hadoop as an analytics engine, it’s also pursuing a co-existence strategy with the big yellow pachyderm. Earlier this month the company debuted a new release of its Elasticsearch for Apache Hadoop integration product, as well as a new partnership with Hadoop distributor MapR Technologies.
In a blog post, Elasticsearch says it has worked with MapR to help a large financial institution index, search, and visualize billions of documents stored in Hadoop. When used in this context, MapR provides the batch-oriented analytical processing for Hadoop-resident data, while Elasticsearch provides the real-time insight into data that end-users so often demand. Best of all, the company says, the way that Elasticsearch was built enables both traditional Hadoop workloads (MapReduce, Hive, and Pig) and Elasticsearch jobs to run simultaneously without impacting each other.
As the big data space matures and the IoT starts generating truly astonishing amounts of data, the market will require additional ways to access and analyze data. We’re moving quickly away from the days when the only way to operate on big data was to hire a data scientist to write a MapReduce routine. People will increasingly want familiar and easy-to-use ways to interrogate their data, and Elasticsearch is seeking to provide that. By the way, so are Splunk, Lucidworks, and many others. It will be interesting to see how it all shakes out.