October 24, 2011

Oracle Extols Exadata Evolution

Datanami Staff

This week, Oracle's VP of Product Management for Data Management, George Lumpkin, talked in depth about how his company views the architectural and practical challenges of big data for enterprise customers, as well as the role of its Exadata platform in managing the data onslaught.

There are a number of technical hurdles in the way of building platforms that are big data ready, Lumpkin says. He points to some of the stickiest challenges: scaling, contending with a range of data formats and sources, the rate at which data flows in, and performance-related problems such as latency. Even in the face of these, he says the company's Exadata platform is rising to meet such demands.

In addition to the challenge of contending with growing volumes of semi-structured and unstructured data, the other challenge lies in the limitations of analytics. He says companies want to perform deep analysis on data, but at terabyte or petabyte scale. He claims Oracle's database platform has capabilities for working with graph problems and deep analytics on large datasets, "and half of them have been used in a widespread manner in other types of predictive analytics." Without highly parallelized infrastructure, however, he says these problems will not scale to meet the demands of such data volumes.
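As a rough illustration of that parallelism point (and not of Oracle's own in-database implementation), the minimal Python sketch below splits an aggregation across worker processes and combines the partial results; the data and the four-way chunking scheme are purely hypothetical.

```python
# Illustrative only: divide a large aggregation across workers, then combine.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker aggregates its own slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    values = list(range(1_000_000))            # stand-in for a large fact column
    chunks = [values[i::4] for i in range(4)]  # partition the work four ways
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same result as sum(values), computed in parallel
```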

According to Lumpkin, database platform and data warehousing development have been priorities at Oracle, all of which feed into the company's Exadata platform. Among the newer features that make Exadata ready for enterprise data demands, he says, is the incorporation of flash-based storage alongside the standard disk offering. He also says, "The Exadata platform delivers database processing in the storage tier, providing a whole new processing tier for doing database optimizations at the storage level and also at the server level, where database processing has always occurred in the past."
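To make the idea of storage-tier processing concrete, here is a conceptual Python sketch (not Oracle's API, and with hypothetical data) of why evaluating a filter where the data lives reduces what must be shipped back to the database servers.

```python
# Conceptual sketch of filter pushdown into a storage tier. Illustrative only.

def storage_scan(blocks, predicate):
    """Storage-side scan: evaluate the filter next to the data and
    return only matching rows, instead of returning whole blocks."""
    for block in blocks:
        for row in block:
            if predicate(row):
                yield row

# Hypothetical data: each block is a list of (order_id, amount) rows.
blocks = [
    [(1, 120.0), (2, 15.5), (3, 980.0)],
    [(4, 42.0), (5, 1310.0), (6, 7.25)],
]

# Without pushdown, the server receives all six rows and filters them itself;
# with pushdown, only the two qualifying rows cross the interconnect.
big_orders = list(storage_scan(blocks, lambda row: row[1] > 500))
print(big_orders)  # [(3, 980.0), (5, 1310.0)]
```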

In addition to the storage-side developments and general performance work, Lumpkin says Oracle uses columnar storage, which means that instead of storing data row by row, values from the same column are stored together on disk. He adds that Oracle is working on in-memory database optimizations and that Exadata's InfiniBand backbone allows a fast, efficient network between the database and storage tiers.
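The difference between the two layouts can be sketched in a few lines of Python; this is illustrative only, not how Exadata's columnar format is actually encoded on disk, and the sample records are hypothetical.

```python
# Row-oriented vs column-oriented layout, in miniature.

rows = [
    {"id": 1, "region": "EMEA", "amount": 120.0},
    {"id": 2, "region": "APAC", "amount": 15.5},
    {"id": 3, "region": "EMEA", "amount": 980.0},
]

# Row layout: each record's fields sit together, so scanning one column
# still reads every field of every row.
row_store = rows

# Column layout: all values of a column sit together, so an aggregate over
# "amount" touches only that column (and similar adjacent values compress well).
column_store = {
    "id": [r["id"] for r in rows],
    "region": [r["region"] for r in rows],
    "amount": [r["amount"] for r in rows],
}

print(sum(column_store["amount"]))  # 1115.5, read from a single column
```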

Lumpkin claims that the "big data" trend reflects nearly every other movement in IT: enterprises continue to want more data, speed, insight and sophistication. He says the trend is not just about sheer data size; more importantly, it's about a fundamental shift in how businesses want to make use of data, and how they expect to change their infrastructure to tap the new influx of analytics and data capabilities.

Lumpkin also says there are a few missing elements in the big data puzzle, at least from the enterprise perspective. First, there has been a movement toward data that doesn't fit the traditional row and column format. He points to semi-structured data from machine-generated sources and sensors, such as RFID tags and smart meters, noting that these sources not only create data at an unprecedented pace, but that enterprises are scrambling to find ways to fit that data into their existing architectures.

For more, check out Ron Powell’s in-depth interview with the Oracle VP of Product Management here.
