December 22, 2016

Big Data Speaks: 10 Industry Predictions for 2017


What will happen in big data and advanced analytics next year? Nobody really knows. Crystal balls are generally unreliable indicators of future events, but that has never stopped us from trying to peer into the future anyway. With that in mind, here’s our first batch of 2017 big data predictions from 10 prominent names in the field.

If one theme dominated 2016, the reemergence of artificial intelligence would have to be right up there. According to John Schroeder, executive chairman and founder of MapR Technologies, that trend will continue in 2017.

“In the 1960s, Ray Solomonoff laid the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction,” Schroeder writes. “In 1980, the First National Conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford and marked the application of theories in software. AI is now back in mainstream discussions and the umbrella buzzword for machine intelligence, machine learning, neural networks, and cognitive computing. Why is AI a rejuvenated trend? The three V’s come to mind: Velocity, Variety and Volume. Platforms can now process the three V’s with modern and traditional processing models that scale horizontally, providing 10-20X cost efficiency over traditional platforms. Google has documented how simple algorithms executed frequently against large datasets yield better results than other approaches.”

You can expect to see a fresh wave of merger and acquisition (M&A) activity among AI startups and established giants, predicts Reltio Chief Marketing Officer Ramon Chen.


“There’s no doubt that there’s a massive land grab for anything AI, machine learning or deep learning,” Chen tells Datanami. “Major players ranging from Google, Apple, Salesforce and Microsoft to AOL, Twitter and Amazon drove the acquisition trend this year. Due to the short operating history of most of the startups being acquired, these moves are as much about acquiring the limited number of AI experts on the planet as the value of what each company has produced to date. The battle lines for AI enterprise mindshare have clearly been drawn between IBM Watson, Salesforce Einstein, and Oracle’s Adaptive Intelligent Applications. What’s well understood is that AI needs a consistent foundation of reliable data upon which to operate. With a limited number of startups offering these integrated capabilities, the quest for relevant insights and ultimately recommended actions that can help with predictive and more efficient forecasting and decision-making will lead to even more aggressive M&A activity in 2017.”

You may have noticed that “things” around you seem to be getting smarter. The culprit? The growing ubiquity of machine learning, argues Balaji Thiagarajan, the group vice president of big data at Oracle.

“Machine learning is no longer the sole preserve of data scientists,” he writes. “The ability to apply machine learning to vast amounts of data is greatly increasing its importance and broadening its adoption. We can expect a huge increase in the availability of machine learning capabilities in tools for both business analysts and end users—impacting how both corporations and governments conduct their business. Machine learning will affect user interaction with everything from insurance and domestic energy to healthcare and parking meters.”

Despite the FUD regarding security problems in Hadoop and Spark, you can expect big data tech’s dynamic duo to do great things, predicts Unravel Data CEO Kunal Agarwal.

“In 2017 we will notice that Big Data is beginning to cross a chasm into mainstream notoriety as a result of the popularity of Hadoop and Spark,” he writes. “Now more than ever, companies are utilizing Big Data technology for mission-critical needs when running their data stacks. It is interesting to note that these are the same companies that would normally have issues with the security threat propaganda that has plagued Hadoop and Spark, but are now becoming comfortable with the idea of experimenting on their data stacks. What this means moving forward is that we have only touched the tip of the iceberg for what Hadoop and Spark are capable of when companies start to run their own mission-critical data stacks.”

2017 will mark the beginning of the end of data warehouses as demand for real-time insights increases, says Dr. William Bain, CEO and founder of ScaleOut Software.

“For the last several years, the big data revolution has popularized Hadoop and other technologies that capture business intelligence in the data warehouse,” Bain writes. “While there continues to be a place for business intelligence to perform ‘after-the-fact’ analysis of historic data and inform strategic decision making, businesses also need to analyze live streams of fast-changing data in order to generate immediate feedback that boosts ROI. We call this ‘operational intelligence,’ and it picks up where business intelligence leaves off. The need for operational intelligence to maximize competitiveness will drive its adoption in a wide range of industries, including e-commerce, finance, manufacturing, patient monitoring, transportation, and utilities. In 2017, we expect to see widespread integration of this exciting capability into live systems.”

With news of Russian hackers interfering with the election and Yahoo’s lost records on a billion people, it’s not a stretch to say that security will be headline material in 2017. Chris Pogue, the CISO at Nuix, thinks it’s high time for the federal government to play a bigger role.


“Organizations in the United States have to understand and adhere to up to 47 different state breach disclosure notification laws,” Pogue writes. “That’s right, forty-seven. A federal standard would go a long way toward simplifying the process for organizations that happen to be compromised, yet no federal legislation is anywhere in sight. Creating a federal standard needs to be a priority, sooner rather than later, to eliminate unnecessary confusion during what is already a difficult time for organizations.”

Streaming analytics will become a default capability in 2017 and 2018, says Anand Venugopal, head of product for StreamAnalytix at Impetus Technologies.

“The rate of adoption [of streaming analytics] will follow a hockey-stick curve and ultimately take half the time it has taken Hadoop to rise as the default big data platform over the past six years,” he writes. “Overall, enterprises leveraging the power of real-time streaming analytics will become more sensitive and agile, and will gain a better understanding of their customers’ needs and habits to provide an overall better experience. In terms of the technology stack to achieve this, there will be an acceleration in the adoption of open source streaming engines, such as Apache Spark Streaming and Apache Flink, in tight integration with the enterprise Hadoop data lake, and that will increase the demand for tools and easier approaches to leverage open source in the enterprise.”
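To make the kind of integration Venugopal describes a little more concrete, here is a minimal sketch of a streaming job written with PySpark Structured Streaming: it reads events from a Kafka topic and appends them to a Parquet table in a Hadoop data lake. The broker address, topic name, schema, and HDFS paths are hypothetical placeholders, and the job assumes the Spark Kafka connector (spark-sql-kafka) is on the classpath.

```python
# Minimal sketch: stream events from Kafka into a Parquet table in the data lake.
# All names below (broker, topic, paths, schema) are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("streaming-to-data-lake").getOrCreate()

# Expected shape of each JSON event on the topic (hypothetical).
schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "sensor-events")                # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append each micro-batch to the lake; the checkpoint makes the job restartable.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///lake/sensor_events")            # hypothetical lake path
    .option("checkpointLocation", "hdfs:///chk/sensor_events")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```

The checkpoint directory is what lets the job recover after a failure without losing or duplicating data, which is the usual price of admission when streaming pipelines become part of the enterprise data lake rather than a side experiment.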

The cloud is already a major player in storing data and running analytic workloads. And when clouds get juiced with big GPUs, it will rain insights, according to Kinetica’s Vice President of Global Solutions Engineering Eric Mizell.

“Amazon has already begun deploying GPUs, and Microsoft and Google have announced plans,” he writes. “These cloud service providers are all deploying GPUs for the same reason: to gain a competitive advantage. Given the dramatic improvements in performance offered by GPUs, other cloud service providers can also be expected to begin deploying GPUs in 2017.”
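As a rough illustration of why GPU-equipped cloud instances appeal to analytics workloads, the following is a minimal sketch using CuPy, a NumPy-compatible GPU array library. The array size and the element-wise workload are illustrative assumptions, and running it requires a CUDA-capable GPU with CuPy installed; it is not drawn from anything Mizell describes.

```python
# Minimal sketch: run the same element-wise math on CPU (NumPy) and GPU (CuPy).
import time
import numpy as np
import cupy as cp

n = 50_000_000  # illustrative size
cpu_data = np.random.random(n).astype(np.float32)
gpu_data = cp.asarray(cpu_data)  # copy the array into GPU memory

t0 = time.time()
cpu_result = np.sqrt(cpu_data).sum()
cpu_time = time.time() - t0

t0 = time.time()
gpu_result = cp.sqrt(gpu_data).sum()
cp.cuda.Stream.null.synchronize()  # wait for the GPU to finish before stopping the clock
gpu_time = time.time() - t0

print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s")
```

On suitable hardware the GPU path typically finishes this kind of data-parallel arithmetic far faster than the CPU path, which is the sort of speedup behind the cloud providers’ interest.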

Think having your data in the cloud makes it safe? Think again, says Datos IO’s co-founder and CEO, Tarun Thakur.

“People may think backup and recovery is dead, but they are sorely mistaken, and the move to the cloud actually makes backup and recovery more important than ever to safeguard data,” Thakur tells Datanami. “Relying on the cloud won’t take care of everything! The need for backup and recovery will become very real as organizations continue betting on enterprise applications. Moreover, backup and recovery will take center stage as IT Ops and others in organizations have never stopped worrying about recovery, particularly as companies aggressively move toward modernized application and data delivery and consumption architectures. The risk of not knowing how to respond, or whom to turn to, in the event of an outage is simply too great.”

It’s no stretch to say that the Internet of Things (IoT) is likely to play a bigger role next year, not only in generating data but also in consuming insights. Manufacturers would do well to jump on the IoT bandwagon sooner rather than later, advises Talend Chief Marketing Officer Ashley Stirrup.

“At least one major manufacturing company will go belly up by not utilizing IoT/big data,” he writes. “The average lifespan of an S&P 500 company has dramatically decreased over the last century, from 67 years in the 1920s to just 15 years today. The average lifespan will continue to decrease as companies ignore or lag behind changing business models ushered in by technological evolutions. It is imperative that organizations find effective ways to harness big data to remain competitive. Those that have not already begun their digital transformations, or have no clear vision for how to do so, have likely already missed the boat—meaning they will soon be a footnote in a long line of once-great S&P 500 players.”

That’s it for our first batch of big data predictions! To make sure you don’t miss the next installment, check our Datanami home page frequently, follow us on Twitter, @Datanami, or check out the Datanami Facebook page.

Related Items:

2016: A Big Data Year in Review

Is 2016 the Beginning of the End for Big Data?

