November 8, 2011

IBM Investing Billions in the Big Data Frontier

Nicole Hemsoth

If you try to make a quick mental list of companies that have weathered the drastic societal, technical and economic changes of the last century, chances are you could count them on one hand.

Of those you were able to conjure in a moment’s time, chances are also quite good that none outside of IBM had their roots in computing, and even the companies that latched onto computation-driven technology 50 or 60 years ago are few and far between.

According to John Kelly III, IBM’s Senior Vice President and Director of Research, this is because of his company’s emphasis on ongoing, aggressive research and development programs.

During Big Blue’s centennial “Frontiers in IT” colloquium, Kelly said his company is pushing another $6 billion into innovation, arguing that IBM will survive, prosper or fail based on its ability to meet bleeding-edge goals that power or create revolutionary technologies.

Kelly pointed to four distinct areas of technology that encapsulate IBM’s current and future research and development focus: big data and the associated deep analytics, learning or cognitive systems, the era of exascale, and nanotechnology.

While we will get to the last two on the list in a moment, it is worth noting that Kelly continually pointed to Watson as a shining example of what is possible in terms of deep analytics and, at the system level, the efficiency and performance required to make those types of analytical capabilities a more pervasive reality.  

The Watson connection is simple to make on the deep analytics and cognitive system level, but there are connections to the other two areas in terms of balancing performance and efficiency. For instance, Kelly says we are rapidly migrating away from hard, leaky silicon semiconductors toward a carbon-based future that marries performance and power-awareness. These technologies will in turn power exascale machines, which will fuel the kinds of powerful parallel programs that feed new innovations for analytics at unprecedented scale and levels of detail.

In terms of data, Kelly pointed to the unprecedented growth of data sizes and stated that the real challenge isn’t the size; it’s that this data is coming at us faster than ever before, with demands to respond to it many times faster than we can now. He said that in order to intercept big, fast data, we need to move from response times of hours or minutes to milliseconds or microseconds.

Using Watson as an example, Kelly said that Watson needed to hear a question as spoken naturally, process the meaning of that data to generate an answer, and decide whether to answer or stay mum, all within 2.5 seconds. The key is that to do this, Watson had to pull exclusively from local memory; going forward, Kelly said, this is how architectures will have to evolve to meet big data demands.

Further complicating the big data problem is the fact that the data coming in from an ever-growing array of devices and sensors is noisy and dirty. Kelly said the key innovation area will revolve around finding ways to quickly clean and analyze that data, and that these efforts will comprise an entirely new dimension of the big data problem.
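To make that idea concrete, here is a minimal sketch of what cleaning a noisy sensor stream on the fly might look like. This is purely illustrative and not IBM’s method; the record format, valid range, and deduplication window are all hypothetical assumptions.

```python
# Illustrative sketch only: filtering a noisy sensor stream in flight.
# The record format, valid range, and dedup window are assumptions,
# not anything IBM described.

from collections import deque

VALID_RANGE = (-40.0, 125.0)  # plausible bounds for a temperature sensor


def clean_stream(readings, dedup_window=100):
    """Yield readings that pass basic sanity checks, dropping
    missing values, out-of-range values, and recent duplicates."""
    recent = deque(maxlen=dedup_window)
    for reading in readings:
        value = reading.get("value")
        if value is None:  # drop missing values
            continue
        if not VALID_RANGE[0] <= value <= VALID_RANGE[1]:
            continue  # drop physically implausible values
        key = (reading.get("sensor_id"), value)
        if key in recent:  # drop repeated transmissions
            continue
        recent.append(key)
        yield reading


# Toy usage:
stream = [
    {"sensor_id": "s1", "value": 21.5},
    {"sensor_id": "s1", "value": None},   # dropped: missing
    {"sensor_id": "s2", "value": 999.0},  # dropped: out of range
    {"sensor_id": "s1", "value": 21.5},   # dropped: duplicate
]
print(list(clean_stream(stream)))  # -> [{'sensor_id': 's1', 'value': 21.5}]
```

Even a toy filter like this hints at the dimension Kelly described: the cleaning has to happen in-stream, record by record, rather than as a batch job after the data has landed.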

Growing outward, deep analytical systems that mimic biological ones are an imminent focus for IBM. Kelly pointed to the desire to move from the era of programmable machines to the dawn of the age of cognitive ones. Beyond the massive parallel programming required to reach that cognitive level of computing, this also means making such systems incredibly efficient, which links back to the carbon-based innovations he discussed in his tangent on nanotechnology as replacements for silicon.

Speaking of the nanotech angle, Kelly pointed to specific examples of how IBM is reshaping nanotechnology, including research and development projects like the DNA transistor, which allows for on-the-spot personalized medicine. This “transistor” device, which the company is developing with drug company Roche, feeds DNA strands directly into a “nanopore” so an electric sensor can read the genetic information. Another example at IBM is the ongoing development of antibacterial nanoparticles that can detect and destroy antibiotic-resistant bacteria (and are even biodegradable).

It’s hard not to sit back and admire any company that has been able to negotiate dramatic changes in technology and in the way people interact with new innovations. If Kelly is correct and the secret sauce is pure R&D, IBM will be one of the few technically geared American companies to weather past the 150-year mark.
