February 10, 2017

Will Hardware Drive Data Innovation Now?

The world has witnessed an unparalleled flowering of technical innovation that has impacted billions of people over the past decade, much of it driven by creative breakthroughs in software. But now tech leaders are looking to hardware to take the torch and drive the next wave of innovation on the even bigger data looming on the horizon.

We’ll have 40 zettabytes of data on the books by 2020, according to IDC and Dell, and upwards of 80% of that will be in an unstructured format, such as pictures, video, text, and audio recordings. Technology futurists see souped-up machine learning programs built on deep neural networks, or “deep learning,” as the most likely way humans will be able to keep up with the deluge.

Deep learning promises to help us unlock insights hidden in all that data, but we don’t yet have the processor architectures to let us use it effectively, says Anand Babu (AB) Periasamy, the creator of Gluster and co-founder and CEO of Minio.

“Deep learning is not a [over-hyped] trend. It’s very real. The benefits are huge,” Periasamy says. “The reason why it’s no longer hype is we’re seeing promising results, very early on. The best money in the industry is now turning and focusing on deep learning.”

GPUs are drawing intense interest for deep learning workloads

Minio just released its open source object storage system to GA last week, and is now building a deep learning engine called X that aims to make deep learning accessible to a new class of developers. With the coming crush of data from the IoT, Periasamy says new processor and hardware architectures will need to be created to process it all.

It’s “absolutely” time for hardware to take center stage, Periasamy told Datanami in a recent interview. “I’m seeing a future where deep learning processors become mainstream processors, and Xeon processors become co-processors to load the system,” he says.

Periasamy sees three promising architectures to drive deep learning workloads: GPUs, Intel Xeon Phi, and massively multi-core ARM chips.

“GPU is good at vector processing, but GPUs are still meant for gaming cards, and they’re slowly marching into a server-grade card,” he says. “Xeon Phi takes a different approach. It uses many Xeon-like cores, but keeps it low power, and each core has a deeper and wider instruction. It’s very different than vector processing. Intel’s approach is, if you can [make] general purpose code heavily multi-threaded, then that’s the way to go.”
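
To make that distinction concrete, here is a rough Python sketch (not from the article) of the two programming styles Periasamy contrasts: a single vectorized operation swept across an entire array, versus ordinary code split into chunks and run on many threads. The array size and thread count are arbitrary placeholders.

    # Illustrative only: the same arithmetic done in a data-parallel
    # (vector-processing) style and in a heavily multi-threaded style.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    data = np.random.rand(1_000_000).astype(np.float32)

    # Vector-processing style: one operation applied across the whole array,
    # the kind of data parallelism GPUs are built around.
    vectorized = np.sqrt(data) * 2.0 + 1.0

    # Multi-threaded general-purpose style: split the work into chunks and
    # run ordinary code on many threads, closer to the many-core model
    # Periasamy describes for Xeon Phi.
    def process_chunk(chunk):
        return np.sqrt(chunk) * 2.0 + 1.0

    chunks = np.array_split(data, 8)  # e.g., one chunk per core
    with ThreadPoolExecutor(max_workers=8) as pool:
        threaded = np.concatenate(list(pool.map(process_chunk, chunks)))

    assert np.allclose(vectorized, threaded)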

ARM is the last one in this game, but they haven’t started thinking about deep learning, Periasamy says. “ARM has potential to actually go into the deep learning stuff using a Xeon Phi-like architecture: multi-core, CPU instruction. That will be very ideal,” he says. “A 1,000-core ARM chip meant for deep learning would be great, but they don’t exist today.”

Intel says 3D XPoint technology will debut by summer as Optane expansion cards

Another software developer who has his eye on the evolution of hardware is Peter Wang, co-founder and chief technology officer of Continuum Analytics. According to Wang, the rise of memory-class storage, like the 3D XPoint non-volatile memory (NVM) technology unveiled by Intel and Micron in July 2015, holds the potential to revolutionize data analytics.

“It’s 1,000x faster than SSDs. 10x slower than main system memory, but 10x the density of main system memory,” Wang says. “If you have 16GB of memory on your laptop, when that stuff rolls out, you’ll have 160GB of persisted memory type stuff.”
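
Wang’s ratios lend themselves to a quick back-of-the-envelope calculation. The Python sketch below (not from the article) assumes a 100-nanosecond DRAM latency purely for illustration and derives the implied 3D XPoint and SSD figures, along with the 16GB-to-160GB capacity jump he describes.

    # Back-of-the-envelope arithmetic using only the ratios Wang cites;
    # the 100 ns DRAM latency baseline is an assumed figure for illustration.
    dram_latency_ns = 100                        # assumed DRAM access latency
    xpoint_latency_ns = dram_latency_ns * 10     # "10x slower than main system memory"
    ssd_latency_ns = xpoint_latency_ns * 1000    # "1,000x faster than SSDs" implies this

    dram_capacity_gb = 16                        # the laptop in Wang's example
    xpoint_capacity_gb = dram_capacity_gb * 10   # "10x the density" -> roughly 160GB

    print(f"Implied 3D XPoint latency: ~{xpoint_latency_ns} ns")
    print(f"Implied SSD latency: ~{ssd_latency_ns / 1e6:.0f} ms")
    print(f"Persistent memory capacity: ~{xpoint_capacity_gb} GB")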

Last month, Intel announced that the 3D XPoint technology will debut in the second quarter with its 16GB and 32GB Optane expansion cards. It will take a while for the technology to roll out and become mainstream in the IT world. “The changes won’t be felt for another two to three years,” Wang says. “But once it gets here, it changes the landscape of computing forever.”

Software developers will also need to adapt their mindsets to new hardware emerging at the periphery. The miniaturization of data collection systems will ensure that a steady stream of data flows in from the IoT for deep learning systems to work on.

Drones made a show of force in Lady Gaga’s Super Bowl LI halftime show.

The drone swarm featured in Lady Gaga’s Super Bowl LI halftime show is a testament to the rising power of edge computing. “The amount of processing power you can stick on something the size of a thumb drive is insane,” says Wang, who studied physics at Cornell and has a number of scientific computing and visualization products to his name. “You can have something the size of this coffee cup that’s a rich sensor platform, multi-spectral, with GPU-accelerated compute on there.

“You can put that on the edge, throw it off the side of a plane–you can throw it anywhere and it will have some level of long-range transmit and receive capabilities,” he continues. “The kinds of data computation you can do at the edge is significant. And that kind of topology changes how we think about deploying data science models.”

Hardware and software go hand-in-hand; you can’t really have one without the other. But they don’t necessarily evolve at the same pace. Sometimes the pace of evolution in software pushes the hardware to get better, and sometimes it takes time for developers to fully utilize advances in hardware. We’re currently seeing rapid evolution at both ends of the spectrum. But judging by what some computing experts are seeing, we’re due for some pretty big advances in hardware that will change what’s possible with software.

Related Items:

Intel Exec: Extracting Value From Big Data Remains Elusive

Exactly Once: Why It’s Such a Big Deal for Apache Kafka

Solving Storage Just the Beginning for Minio CEO Periasamy

 
