February 1, 2016

Knowing What’s Possible a Big Obstacle for Big Data

(Kankariya/Shutterstock.com)

There are many reasons why a given organization may be lagging on big data analytics adoption. The shortage of data scientists is one, and the tightening of budgets is another. But according to analytic experts, one of the biggest obstacles to adopting big data is just knowing what is possible.

“The biggest obstacle we’re running into is not knowing what’s possible. This is the single biggest problem we’re running into,” says Praveen Kankariya, the founder and CEO of Impetus Technologies, a developer of streaming big data analytic software and services based in Los Gatos, California.

There are a host of well-worn big data use cases that play across multiple industries, such as reducing customer churn, delivering personalized recommendations, and detecting fraudulent transactions. But all too often, executives have the mistaken impression that even these established paths are beyond their means. In fact, thanks to the democratization of big data technologies driven by the open source community, the solutions are often within reach.

“People don’t know that it’s doable and one-tenth the order of magnitude of what they imagine, or one-hundredth,” says Kankariya, whose company plans to hire 150 data scientists, analysts, and data experts in the U.S. this year. “People become resigned to a certain way. They say ‘Let’s not go in that direction, it’s not possible.’ Starting a conversation in these large enterprises about whether this is doable or not doable” is one of Impetus’ biggest focuses this year.

Nitin Mittal, a principal in Deloitte Consulting’s analytics practice, echoes that sentiment, saying about one-third of Deloitte’s clients request a high degree of hand-holding when it comes to beginning their analytics journey.

“We do have many requests where the clients are saying, ‘Tell me what the art of the possible is with these types of systems and technologies, and how can I apply it to my context?'” Mittal says.

Mittal advocates that companies start by setting up an analytic innovation studio or lab, and building from there. “It allows clients to experiment and be innovative, be creative, and achieve a proof of concept or a pilot that would then be touted as a proof point that was successful,” he says.

Having an experienced big data analytics expert guide you in the art of the possible is clearly a good approach, especially considering the tremendous pace at which big data technology is evolving. The big elephant in the room, Apache Hadoop, which Yahoo first started using 10 years ago, is quickly becoming a must-have platform in the enterprise; that’s why Forrester predicts the entire Fortune 500 will soon use it. But Hadoop is just the tip of the iceberg, and new big data analytic technologies (Spark, Apex, Kafka, Mesos, and others) seem to appear on the scene every day.

Keeping up with the data analytic capabilities of the day is not easy, especially for a business person with a job to do. Phrases like “think outside the box” and “reimagine the future” may make great motivational posters, but they aren’t business plans that can be executed. But as “digital native” companies like Uber and Airbnb show how a strategic application of data can disrupt entire industries, it’s apparent that businesses ignore big data analytics at their own peril.

Plotting a big data strategy can feel overwhelming (Lightspring/Shutterstock.com)

Many analytic engagements start with a problem to solve. And usually, the more specific the problem, the easier it is to use analytics to find an answer. But according to Gurjeet Singh, the founder and CEO of topological data analytics (TDA) software vendor Ayasdi, sometimes it’s best to start with the data itself, and work backwards from there.

“Here’s what’s not true: The idea that you form the set of questions before you do the work. That’s the idea I’m pushing against,” Singh told Datanami in a recent interview. “In fact, in most of the cases where customers have been very successful with our software….they have a lot of exhaust data sitting that they didn’t even think of using for the problem.”

This is one of the core dilemmas of big data. In the old days, data was hammered into specific formats to answer specific questions. Business practitioners knew what sorts of questions they were facing, and so tools were built to answer those questions using the data that was available. It was all very well defined and understood.

But things aren’t as clear-cut in the big data paradigm. In big data, the data is left in its original format for as long as possible, in part because it’s too expensive to format it, whether it sits in Hadoop, a NoSQL database, or an object-based file system. But the data is also left in a more raw format so that, in the future, it can be used to answer questions that we don’t know we want to ask yet. This is simultaneously a powerful capability that opens up new possibilities for data exploration, and an intimidating reality that can kill momentum.
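To make that difference concrete, here is a minimal schema-on-read sketch, written in PySpark purely for illustration (the article does not prescribe a specific tool, and the directory path, field names, and query below are hypothetical). The raw event data stays untouched on disk, and structure is imposed only when a new question comes along.

# Minimal schema-on-read sketch (assumed tooling: PySpark; paths and
# field names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("schema_on_read_sketch").getOrCreate()

# Raw, unmodeled JSON event logs stay in their original form on disk;
# Spark infers a schema only at read time.
raw_events = spark.read.json("events/")

# A question nobody anticipated when the data was collected: how many
# distinct users appear each day? The raw events can still answer it.
daily_users = (
    raw_events
    .groupBy(F.to_date("timestamp").alias("day"))
    .agg(F.countDistinct("user_id").alias("distinct_users"))
)
daily_users.show()

The point is not the particular query, but that the same untouched data can serve questions that did not exist when it was collected.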

This novel aspect of big data is feeding demand for big data technical services up and down the spectrum, from staff data scientists at Fortune 500 companies to the services of firms like Ayasdi, Impetus, and Deloitte. As Kankariya sees it, it’s also slowing the adoption of big data analytics in the enterprise.

“People don’t know what’s possible, and even once you make them aware, they have to have the confidence to put their reputations on the line,” the Impetus CEO says. “A lot of enterprise IT executives and even business stakeholders say this is all very promising, it all looks pretty in Silicon Valley, but will it work in my environment? As opposed to making a decision, they let the experimental work simmer for longer.”

Related Items:

Data Scientists: The Myth and the Reality

Finding Your Way in the New Data Economy

10 Tips for Beginning Your Big Data Journey
