April 19, 2013

Facing the Challenges of a Parallel Future

Isaac Lopez

Big data requires big processing, but can our current pace keep up with the processing demands of the future? Perhaps not if we continue the status quo, suggested Andreas Olofsson, CEO of chipmaker Adapteva, in a keynote at the Linux Foundation Collaboration Summit in San Francisco this week.

“It’s a fact that we have a huge problem right now in terms of energy efficiency,” cautioned Olofsson in discussing the challenges the computing industry faces. “We’re going along and we have Moore’s law doubling compute every two years, and energy efficiency hasn’t kept up.”

Compounding the problem, suggests Olofsson, is that people aren’t feeling the pain right now and thus aren’t concerned. “They’re going along in their business – coding is hard, and so you figure push that off until another day, but the problem is getting worse and worse. Looking out to 2020, 2025, what are we going to do?”

It’s not just a power issue that the industry faces, says Olofsson. Beyond power consumption, a host of other constraints will shape the future of computing: the memory bottleneck, wiring, thermal density, latency walls, yield issues, Amdahl’s law, and others. Despite all these challenges, Olofsson says we can take inspiration from one of the most advanced processing machines in nature today: the human brain.
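Amdahl’s law in particular puts a hard ceiling on what extra cores can buy. A minimal sketch in Python (the 95 percent figure is illustrative and not from the talk; the 64-core count matches Adapteva’s own chip, described below):

    # Amdahl's law: the speedup on n cores, when a fraction p of the work
    # can be parallelized, is 1 / ((1 - p) + p / n).
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # With 95% of a program parallelized, 64 cores deliver roughly 15x,
    # not 64x, and adding cores barely helps once serial work dominates.
    print(amdahl_speedup(0.95, 64))    # ~15.4
    print(amdahl_speedup(0.95, 1024))  # ~19.6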

“The brain,” marveled Olofsson. “It’s parallel. It’s low power – 30 watts with billions of neurons. It’s heterogeneous, which means that different parts of the brain are specialized to do different functions. And it’s robust – if you lose a small part of your brain, the brain doesn’t shut down, which is different from most computers today – if you get one transistor in the wrong place, you’re done. A one bit error in the wrong place, you’re done – you get a crash. That’s kind of ridiculous going forward.”

Heterogeneous computing is the practical vision for today, says Olofsson, urging the audience to start using the tools available now to build multi-faceted systems that are more efficient.

But even with this prescription, Olofsson notes that parallel programming is a large challenge the industry has yet to solve. “Today, parallel programming is quite fringe,” he noted. “Very few people do it. I know some incredibly talented programmers who have done single-threaded programming for 10 to 20 years, and parallel programming and debugging for them is hard. So imagine a freshman in college, and you tell him, ‘look, if you want good performance you have to be a parallel programmer.’ What is he going to do if a guy with 10 to 20 years of experience is having a hard time?”
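The kind of bug he is alluding to is easy to write and maddening to reproduce. A minimal sketch of a lost-update race on a shared counter (illustrative, not from the talk; whether updates are actually lost depends on the interpreter, timing, and hardware):

    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            counter += 1  # read-modify-write: threads can interleave here

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 400000; an unsynchronized run may print less, and a different
    # number on each run, which is exactly what makes such bugs hard to debug.
    print(counter)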

Still, Olofsson says today’s challenge is making parallel programming as productive as Java or Python, suggesting that at this juncture there really is no choice in the matter.
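Python itself hints at the productivity bar he has in mind: for independent units of work, its standard library already hides the parallel bookkeeping behind a one-line map (a minimal sketch; the squaring workload is a stand-in for real computation):

    from multiprocessing import Pool

    def square(x):
        # Stand-in for any independent, CPU-bound unit of work.
        return x * x

    if __name__ == "__main__":
        # Pool handles process startup, work distribution, and result
        # collection; the call reads like an ordinary serial map.
        with Pool(processes=4) as pool:
            print(pool.map(square, range(10)))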

“If you look at the bigger scope of things, there is no question at all that the future of computing is parallel. How else are we going to scale? We’re on such an exponential curve in terms of performance, and we’re running out of tricks. Where are we going to get the next million X speed-up from? It can only come from parallel. And if that’s going to be in 2020, 2050, or 2100, why wait? We can go parallel right now, but it’s going to hurt before we get there.”

With that, Olofsson discussed his company’s own parallel computing project, Parallella, which aspires to be a $99 parallel computing platform. The project, which gained fame last fall as a Kickstarter rockstar, drew nearly 5,000 backers for a funding total approaching $900,000. Olofsson says the company turned to Kickstarter after it taped out a 64-core, 28-nanometer chip that burns 2 watts at 100 gigaflops and heard crickets when pitching it.

“We’ve been trying to sell parallel computing for five years, and the market wasn’t ready for it, so this is our attempt at creating a market and doing something good at the same time,” commented Olofsson, who explained that they have chosen to take their idea to the open source community – a decision he said has been scary for a hardware guy, but ultimately more rewarding.

The goal, says Olofsson, is to create a 5-watt computer that runs 100 gigaflops, and the early results are in. Olofsson reported that they received the manufactured boards earlier this month, and while they did have to break out the soldering iron for some minor tweaks, they have worked far down their QA checklist toward their goal of shipping.
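Taken at face value, those figures pin down the efficiency target (back-of-the-envelope arithmetic on the numbers reported above; nothing here goes beyond the article’s own figures):

    # Efficiency implied by the stated figures.
    chip_gflops, chip_watts = 100, 2    # the 64-core chip
    board_gflops, board_watts = 100, 5  # the full Parallella board target

    print(chip_gflops / chip_watts)    # 50.0 GFLOPS per watt, chip alone
    print(board_gflops / board_watts)  # 20.0 GFLOPS per watt, whole system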

“Hopefully soon, we’ll have something that can actually run Linux that we can use to show some really attractive demos, but the hardware is looking great.”

Olofsson says the company plans to use these boards to seed a movement toward the broad adoption of parallelism. Boasting that some of the smartest programmers on the planet are excited about the project, he adds that the company will also donate 100 of its machines to universities, hoping to inspire programmers at that level to start thinking about the possibilities.

From here, Olofsson says, the company is building a sustainable distribution model and beginning work on massive parallelism.

“The architecture we have scales very well to large array sizes. We could actually put 1,000 cores on a chip tomorrow if somebody wanted it. We’ll keep working on that, but the really good news is that we have boards working.”

Related Items:

Adapteva Launches Crowd-Source Funding for Its Floating Point Accelerator 

Podcast: Petabyte in a Flash; Coprocessors Reach a New Epiphany 

High Performance Big Data Use Cases 
