
2017 – AI, Deep Learning, and GPUs

Around the year 2017, something funny happened: People no longer talked as much about big data. Indeed, Gartner had already dropped “big data” from its Hype Cycle. The idea of collecting loads of data on everything we do had become pervasive, so much so that Deloitte told us that data had become “like air.”

Absent a centralizing idea or rallying cry, the community formerly known as big data eventually settled on something else: AI. The exact moment of the switch to AI is hard to pin down, but by 2017, the trend was firmly in place.

The forecasted growth in AI was stupendous. The market research firm Tractica (now part of Informa) predicted that AI spending would grow from $640 million in 2016 to $37 billion by 2025. Applications such as image recognition, algorithmic securities trading, and healthcare patient data management, the firm said, “had huge scale potential.” (In 2018, McKinsey told us that AI had the potential to generate $13 trillion in economic activity by 2030, but we didn’t know that back in 2017.)
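For a sense of scale, that forecast implies a compound annual growth rate of roughly 57 percent. Here is the back-of-the-envelope arithmetic (our own calculation, not a Tractica figure), sketched in Python:

# Implied compound annual growth rate of Tractica's forecast:
# $640 million (2016) to $37 billion (2025), i.e. nine years of growth.
# This is our own back-of-the-envelope check, not a Tractica figure.
cagr = (37_000 / 640) ** (1 / 9) - 1  # both figures in millions of dollars
print(f"Implied CAGR, 2016-2025: {cagr:.0%}")  # -> 57%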

Many of AI’s capabilities come together in self-driving automobiles, which were all over the news back in 2017. The ability to gather a large amount of data from radar, lidar, and visual sensors, and fuse it together in an intelligent way, was seen as the culmination of AI’s capabilities. (The fact that we still don’t have self-driving cars on the highways today in 2021 perhaps says more about the difficulty of the problem than about any lack of capability in AI.)

The core technology behind the sudden rise in prominence of AI – deep learning – was also undergoing a period of rapid transformation around the year 2017. Armed with huge data sets and increasingly powerful GPUs, neural network architectures got bigger and better, helping to close the human-machine performance gap in two key AI workloads: natural language processing (NLP) and computer vision.
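To make the GPU connection concrete, here is a minimal training-step sketch in PyTorch (the framework and the tiny network are our illustrative choices, not anything a vendor shipped). The same code runs on a CPU, but moving the model and data onto a GPU is what made training ever-larger networks practical:

# Minimal, illustrative PyTorch training step. The tiny network below
# stands in for the much larger vision and language models of the era;
# the key pattern is moving the model and data onto a GPU when present.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for real training data.
inputs = torch.randn(64, 784, device=device)
labels = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()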

In 2016, Facebook CEO Mark Zuckerberg declared that we were just five to 10 years from NLP and computer vision systems that could exceed human capabilities. Thanks to large neural network models, we have already exceeded human capabilities in some areas (although deploying these massive deep learning models in practice is not entirely a solved problem).

At the 2017 GPU Technology Conference, Nvidia CEO Jensen Huang first said that we were in the midst of a “Cambrian explosion” of deep learning technologies. Starting with AlexNet in 2012, the field of deep learning had grown quite wide, with recurrent networks, convolutional neural networks, generative adversarial networks, reinforcement learning, and neural collaborative filtering building upon each other to advance the state of AI at a rapid clip. “Neural networks are growing and evolving at an extraordinary rate, at a lightning rate,” Huang said at GTC one year later.

Nvidia stock was trading at $25 per share at the start of January 2017, the beginning of a remarkable run for the GPU vendor. Today, its stock is trading at around seven times that amount, giving the company a $520 billion market valuation as demand has skyrocketed for its GPUs, which are favored for training the deep learning models behind the most advanced AI applications.

Many chipmakers have attempted to unseat Nvidia as the king of AI chips. We recall Google’s foray into Tensor Processing Units (TPUs), which the company formally introduced into its cloud in May 2017. The TPUs were said to be 15 to 30 times faster at inference than Nvidia’s fastest GPU of the day, and the advantage was even greater when measured in performance per watt.

And then there was the Intelligence Processing Unit (IPU), which the UK startup Graphcore was developing at the time (in fact, it’s still developing them). Graphcore’s IPUs can plug into traditional x86 servers and run standard machine learning workloads developed with frameworks like TensorFlow and MXNet.
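That framework-level portability is the selling point: the workload is expressed in TensorFlow or MXNet, and the hardware underneath can change. As a minimal illustration (our own sketch in TensorFlow’s Keras API, not Graphcore-specific code), the model definition below names no device at all, so the runtime can map it to a CPU, GPU, or any accelerator that supplies a framework backend:

import tensorflow as tf

# A small, generic classifier expressed purely at the framework level.
# Nothing here names a device; the runtime maps the graph to whatever
# hardware backend is available (CPU, GPU, or an accelerator).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])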

Today, it’s hard to remember that “big data” was such a big thing, because everybody seems to be chasing the promise of AI these days. At the end of the day, what really moves the needle are the folks who have found interesting things to do with data. Whether you call that big data, AI, or cognitive computing isn’t as important as what that thing is. That, of course, is the mission of Datanami, and it’s what we’ve dedicated the past 10 years to telling you about. Thanks for reading.

Related Items:

2020 – COVID-19 — Kicking Digital Transformation Into Overdrive

2019 – DataOps: A Return to Data Engineering

2018 – GDPR and the Big Data Backlash

2016 – Clouds, Clouds Everywhere

2015 – Spark Takes the Big Data World by Storm

2014 – NoSQL Has Its Day

2013 – The Flourishing Open Source Ecosystem

2012 – SSDs and the Rise of Fast Data

2011 – The Emergence of Hadoop
