November 19, 2013

Micron Aims at Big Data With New Parallel Processing Architecture

Isaac Lopez

In 1977, IBM researcher John Backus wrote a seminal paper that asked the question: “Can programming be liberated from the von Neumann style?” This week, Micron Technology has announced a new computing architecture called the Automata Processor, which it says answers the Backus question with a definitive “yes!”

The new chip is Micron’s play to wrest relevance away from Intel, which has largely run away with the processor industry. The insurgent semiconductor company announced the development of its new Automata Processor (AP), an architecture aimed directly at parallel processing and set to capitalize on the rise of big data workloads.

Announced this week in the run-up to the Supercomputing 2013 (SC13) conference in Denver, Colorado, the Automata Processor is, in Micron’s words, “an accelerator that leverages intrinsic structural parallelism of computer memory… to provide a massive array of ‘automata’ elements, allowing massive processing parallelism with improved energy efficiency.”

According to Micron, Automata is unlike conventional CPUs in that it is “a scalable, two-dimensional computing fabric comprised of tens of thousands to millions of processing elements interconnected to create a task-specific processing engine capable of solving problems with unprecedented performance.” The company says that where conventional parallelism consists of a single instruction applied to many chunks of data, Automata focuses a vast number of instructions at a targeted problem, thus optimizing performance.

The company says that the new paradigm aims at workloads that involve large data structures, unstructured data, random access, and real-time data analysis – your basic big data fare, where in-memory computing has become a rising phenomenon. Micron says that while Automata is not a memory device, it is memory based, leveraging the intrinsic parallelism of DRAM to answer questions about data as it is streamed across the chip.
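To make that idea concrete, here is a rough, purely illustrative Python sketch of the concept: many small pattern-matching "automata" watching the same byte stream at once, with every pattern checked against every byte in a single pass. The class, patterns, and interface below are invented for illustration and are not Micron's SDK or hardware programming model.

# Toy software model of the concept only -- not Micron's API.
class Automaton:
    """One small matcher: states 0..len(pattern), advanced one symbol at a time."""
    def __init__(self, pattern):
        self.pattern = pattern
        self.active = {0}                      # states currently "lit up"

    def step(self, symbol):
        """Consume one input symbol; return True if the full pattern just completed."""
        next_active = {0}                      # state 0 restarts matching at every position
        for state in self.active:
            if state < len(self.pattern) and symbol == self.pattern[state]:
                next_active.add(state + 1)
        self.active = next_active
        return len(self.pattern) in self.active

def stream(data, automata):
    """Feed each symbol of the stream to every automaton 'in parallel'."""
    for pos, symbol in enumerate(data):
        for a in automata:                     # conceptually simultaneous; sequential in software
            if a.step(symbol):
                print(f"'{a.pattern}' matched ending at position {pos}")

# Three patterns watched simultaneously over one pass of the data.
stream("xxvirusyyywormzz", [Automaton("virus"), Automaton("worm"), Automaton("rootkit")])

On the actual hardware, the claim is that the elements really do operate in parallel as the data streams past, which is what a serial software loop like this cannot do.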

The new paradigm flies in the face of the traditional von Neumann architecture, in which instructions are analyzed sequentially, data is crunched, and then the next set of instructions is taken up for crunching. This paradigm has given rise to the concept of the von Neumann bottleneck, which can (admittedly imperfectly) be visualized as a toll road, where only so many cars (instructions) can get through the checkpoints (processors) at any given time. Throughput is limited by the number of processing stations available, no matter how high the speed limit is.

While many workarounds to the von Neumann problem have been devised over the years, including caching, prefetching, multi-threading, and more, these methods add complexity that can be difficult (and expensive) to scale in an era when data volumes are ballooning. Micron hopes to change all that.
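A rough back-of-the-envelope comparison (our illustration, not Micron's figures) shows the scaling pressure. A naive byte-by-byte scan does roughly one comparison per byte per pattern, so its work grows with the product of stream size and pattern count, while a fabric that keeps every pattern active in parallel touches each byte only once. Real CPU-side matchers such as Aho-Corasick narrow this gap considerably, so treat the numbers below strictly as an illustration of the trend.

# Back-of-the-envelope only; assumed sizes, not benchmark data.
data_bytes = 1_000_000_000                      # a 1 GB input stream
for n_patterns in (10, 1_000, 100_000):
    naive_sequential = data_bytes * n_patterns  # one comparison per byte per pattern
    single_pass = data_bytes                    # every byte examined once, all patterns active
    print(f"{n_patterns:>7} patterns: naive scan ~{naive_sequential:.0e} ops, "
          f"single pass ~{single_pass:.0e} byte examinations")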

“What we have done with the Automata processor is made a processing device that is essentially a zero instruction processing device,” Paul Dlugosch, Micron’s director of Automata processing technology, told Datanami in an interview this week. “Rather than programmers having to worry about how to sequence instructions, how to manage large chunks of data, we give them the tools to program the automata processor fabric – tens of thousands of tiny processing elements – [and] we give them the tools to harness that power and connect those elements in a way that exactly solves their problem.”

“No longer are they concerned about high level languages,” he continued. “No longer are they concerned about computer assembly language instructions. Those concepts do not exist in the Automata processor. It’s liberating for computer scientists to be able to deal directly with the data structures that… they create in this new programming paradigm.”

If it works as advertised, the whole thing could be rather disruptive. Should Intel be worried? It’s hard to say, but one bellwether we’ll be watching is how Facebook interacts with this new technology through its Open Compute Project (we recently covered the opening of Facebook’s cold storage facility, based on its Open Compute standards). Earlier this year, Facebook’s Open Compute Project launched a server specification built around Calxeda ARM processors of the kind previously used in smartphones, which showed that the Facebook computing megalith is willing to displace Intel chips if it can get productivity or power-saving gains, even with (what might be thought of as) unconventional technology. If Open Compute starts to incorporate Automata into its designs, it could be a sign that Micron is off to the races.

In the meantime, Micron has indicated that it’s aiming its push at areas such as bioinformatics, video/image analytics, and network security, which it sees as low-hanging competitive fruit due to the challenge that the workloads in these fields pose for conventional processor architectures.
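Bioinformatics is a good example of why: motif and signature searches are essentially large regular-expression workloads, and regular expressions map naturally onto automata. The tiny sketch below, with an invented motif and sequence, shows the flavor of the pattern matching involved; on a CPU this is typically handled by regex engines or multi-pattern matchers, which is the kind of work Micron wants to move onto the chip.

import re

# Invented motif and sequence, for illustration only.
# The '.' wildcards stand in for positions where any base is allowed,
# and '[AG]' accepts either A or G at that position.
motif = re.compile("TATA[AG]A..A")
sequence = "GGCTATAGAGGATTTATAAATCAC"

for match in motif.finditer(sequence):
    print(f"motif found at offset {match.start()}: {match.group()}")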

Micron says it has cut first silicon for the Automata Processor and has prototypes in-house at its Boise, Idaho facility, with samples expected to be available in 2014. The company says that it will have a software development kit available soon, giving programmers the opportunity to design, compile, test, and deploy their applications with Automata.
