
People to Watch 2017

 

Naveen Rao
VP and GM of Artificial Intelligence Solutions
Intel

Naveen Rao received his PhD in neuroscience and was a researcher at Qualcomm before co-founding a deep learning startup called Nervana Systems, which Intel acquired for $400 million last year. Now as the head of Intel’s AI group, Naveen will be instrumental in helping the chip giant develop new architectures for emerging deep learning and AI workloads that promise to dominate the next generation of high performance computing.

Datanami: Congratulations on being named a Datanami Person to Watch in 2017! As we look forward to Intel’s Knights Crest release, what are some key advantages that will help Intel in its battle for dominance in the deep learning space, particularly against GPUs?

Naveen Rao: There are a few different things. Obviously Intel is a leader on the process technology side. One of the keys to the kind of compute that deep learning/AI applications require is high-density, low-power compute, and that’s something that Intel excels in top to bottom.

The next advantage is that we offer a full product portfolio for many different kinds of use cases, including things that work at the edge. We also have the connectivity side, 5G, as well as the datacenter. So we’re uniquely positioned to offer solutions that span entirely end-to-end. Other organizations have built pieces here and there, but not really the whole end-to-end structure, and that’s where Intel can really do well and offer a significant advantage over other companies.

The last thing, which is more about the datacenter itself, is the host processor being the key to making everything work – you can’t run an OS on these dense compute platforms, so they need a host. Intel has by far the greatest market share in host processors, and integrating AI capability into that is a huge advantage.

A huge problem today, from a technical perspective in a server, is just marshaling data around within the server itself: host processor memory, device memory, everything. We have a vision of actually unifying that and making it so that you don’t have to move data as much within the machine itself, and that will increase performance many, many-fold. It’s not just about how many FLOPS you can throw at a problem, it’s actually very much a memory problem as well. And that’s something we’re uniquely positioned for because of the ownership in the host processor space.
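To make the memory-versus-FLOPS point concrete, here is a minimal roofline-style sketch – a standard way to check whether a kernel is limited by compute or by data movement. The peak throughput, bandwidth, and matrix sizes below are illustrative assumptions for this example, not figures for any Intel product.

```python
# Roofline-style check: is a kernel compute-bound or memory-bound?
# All numbers are illustrative assumptions, not product specifications.

PEAK_FLOPS = 30e12   # assumed peak compute: 30 TFLOP/s
PEAK_BW = 500e9      # assumed memory bandwidth: 500 GB/s

def limiting_factor(flops, bytes_moved):
    """Classify a kernel under the roofline model and estimate its runtime."""
    intensity = flops / bytes_moved        # FLOPs performed per byte moved
    ridge = PEAK_FLOPS / PEAK_BW           # intensity where the limits cross
    runtime = max(flops / PEAK_FLOPS, bytes_moved / PEAK_BW)
    return ("compute-bound" if intensity > ridge else "memory-bound"), runtime

n = 4096
# Dense fp32 matrix multiply: ~2*n^3 FLOPs over ~3*n^2*4 bytes of traffic.
print(limiting_factor(2 * n**3, 3 * n * n * 4))   # high intensity: compute-bound
# Elementwise add over the same matrices: n^2 FLOPs, same traffic.
print(limiting_factor(n * n, 3 * n * n * 4))      # low intensity: memory-bound
```

Once a kernel sits on the memory-bound side of the ridge, extra FLOPS are wasted; only moving data less, or keeping it closer to the compute, shortens the runtime – which is the unification argument Rao is making.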

Datanami: What can you share with us regarding Intel’s roadmap for deep learning, especially in 2017?

We’ve disclosed the Lake Crest and Spring Crest line. So Lake Crest is really the processor that we were building at Nervana before the acquisition. We’re continuing that as part of Intel, and are building the follow-on to that called Spring Crest, which is slated for 2018. This will really be about bringing the latest process technologies to that same architecture.

Then this convergence between the host and the offload compute architectures will start happening in the follow-on to that, in the 2019-2020 timeframe. We’re not disclosing specifics of what that will look like, but we are talking about it being converged between the host processor and the offload engine.

Datanami: What trends do you see as being most important for big data as we look to 2017 and beyond?

There are a number of them. On the hardware side, moving data less is going to be really important to achieving high performance. This is a concept that has been around for a long time within high performance computing (HPC) circles, but we’re applying it at an even more granular level within the server, and even within the chip and the board, et cetera. So you really need to think of the system and rack holistically, which is something Intel is really well-poised to do – for instance, rack scale design is a big effort within Intel today. We’re integrating with all of those and trying to take a holistic approach to the hardware side of things: how to import data, process it, find useful inferences in it, and get that out into an application as fast as possible. And as data sizes grow, how well you can manage that – thinking about it from individual chips to boards, systems and racks – becomes extremely important.

On the software side, there are a lot of different trends we’re seeing. There’s a soft convergence of high performance computing ideas coming into the enterprise space, but some areas haven’t embraced the HPC approach yet. For example, datacenters in the enterprise still typically work on Ethernet, which presents challenges as far as distributing data, compute, et cetera. That’s led to Spark and Hadoop adoption, but there’s not a good fit there to make deep learning workloads go faster. I think there have to be some tweaks to those paradigms in the enterprise space that will probably borrow from the HPC space, because they’ve done it better there.

So that’s something I see as a macro trend. Beyond that I think there are a lot of different factions going off right now, and it’s not clear how it’ll all shake out on the software side.
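A rough sketch of why the fabric matters for distributed deep learning: in data-parallel training, every step ends with a gradient exchange whose cost is roughly model size over link bandwidth. The worker count, model size, and bandwidths below are assumptions chosen purely for illustration.

```python
# Estimated time to synchronize gradients each training step with a ring
# all-reduce. Bandwidths and sizes are illustrative assumptions only.

def allreduce_seconds(model_bytes, workers, link_bytes_per_sec):
    """A ring all-reduce sends ~2*(N-1)/N of the model across each link."""
    traffic = 2.0 * (workers - 1) / workers * model_bytes
    return traffic / link_bytes_per_sec

model_bytes = 100e6 * 4   # 100M fp32 parameters
workers = 16

for name, gbits in [("10 GbE (typical enterprise)", 10),
                    ("100 Gb/s HPC-class fabric", 100)]:
    t = allreduce_seconds(model_bytes, workers, gbits * 1e9 / 8)
    print(f"{name}: ~{t * 1000:.0f} ms per step just exchanging gradients")
```

On a slow fabric the exchange can dwarf the compute itself, which is the sense in which borrowing HPC interconnect ideas makes these workloads go faster.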

Datanami: We’re seeing this overlap between Big Data, HPC and advanced-scale enterprise computing come up more and more, actually, but there’s often a disconnect with enterprise users where they’ll embrace HPC approaches, but are hesitant to call it “HPC.”

It’s interesting because the needs are completely different between enterprise customers and HPC people. With HPC you have 60,000 Watts in a rack, and with enterprise you usually can’t break [20,000]. That presents a big challenge as to what you can do within that rack. Plus, HPC typically isn’t as concerned with resilience – you basically assume everything is going to work 100 percent of the time. Whereas in enterprise you think, “Well, I have a million servers that I can’t touch. I need resilience, I need redundancy, I need things that are cheap so that when they break I can replace them.” That’s the way they think – there’s no unlimited budget. So it’s a very different optimization point, and where all those things will meet is still not clear, but we do see those general trends converging in some ways.
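The power gap translates directly into density. A quick sketch, assuming 1.5 kW per dense compute node and a guessed 15 percent overhead for switches and cooling (both illustrative, not product figures):

```python
# How many dense compute nodes fit under a rack power budget?
# Node power and overhead are illustrative assumptions, not product figures.

def nodes_per_rack(rack_watts, node_watts=1500, overhead_frac=0.15):
    """Usable node count after reserving a fraction for switches, fans, etc."""
    usable = rack_watts * (1.0 - overhead_frac)
    return int(usable // node_watts)

print("HPC rack, 60 kW:       ", nodes_per_rack(60_000), "nodes")
print("Enterprise rack, 20 kW:", nodes_per_rack(20_000), "nodes")
```

Roughly a third of the density, before resilience and redundancy requirements are even considered – hence the very different optimization points.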

Datanami: Outside of the professional sphere, are there any fun facts about you that your colleagues may be surprised to learn?

I race cars. I’ll race virtually anything – I used to race bicycles years ago. Actually, this year I’ll be racing in a professional series around North America called the Ferrari Challenge. It has six races around North America and then a finals race in Italy in October. It’s a series fully supported by Ferrari, and it’s actually televised alongside other races – Formula One will be at the same event in Montreal, for instance – so I guess you could say it’s a very serious hobby, like my second job.

More about Naveen Rao: 

Naveen’s fascination with computation in synthetic and neural systems began around age 9, when he started learning about circuits that store information, along with some AI themes prevalent in sci-fi at the time. He went on to study electrical engineering and computer science at Duke, but stayed in touch with biology by modeling neuromorphic circuits as a senior project. After studying computer architecture at Stanford, Naveen spent the next 10 years designing novel processors at Sun Microsystems and Teragen, as well as specialized chips for wireless DSP at Caly Networks, video content delivery at Kealia, Inc., and video compression at W&W Comms. Armed with intimate knowledge of synthetic computation systems, Naveen decided to get a PhD in neuroscience to better understand how biological systems do computation. He studied neural computation and how it relates to neural prosthetics in the lab of John Donoghue at Brown. After a stint in finance doing algorithmic trading optimization at ITG, Naveen was part of Qualcomm’s neuromorphic research group, leading the effort on motor control and doing business development. Naveen was the founder and CEO of Nervana, which was acquired by Intel. Naveen strongly believes that it’s in Intel Nervana’s DNA to bring together engineering disciplines and neural computational paradigms to evolve the state of the art and make machines smarter.

 
