November 5, 2013

IBM Takes BLU In-Memory Database to the Cloud

Alex Woodie

IBM this week rolled out a cloud-based version of its BLU in-memory database that’s designed to provide users with a low-cost option for data warehousing and BI. The company also unveiled a host of other big data analytics and cognitive computing offerings, and made some big Hadoop performance claims, at its annual Information OnDemand shindig in Las Vegas.

The BLU Acceleration technology, if you’re not familiar, is a tweak of IBM’s DB2 for LUW database that adds column-oriented tables designed to speed reads (such as SQL queries) by up to a factor of 25. The software also includes intelligent compression algorithms that allow it to work on data while it’s still compressed, and a memory paging architecture that gives it the flexibility to spill data to disk when it won’t all fit in memory.
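
The data skipping piece of that list is easy to illustrate in the abstract. The sketch below (a conceptual toy in Python, not IBM’s actual implementation) keeps a tiny min/max “synopsis” for each block of column values, so a scan can skip entire blocks that couldn’t possibly satisfy a predicate:

```python
# Conceptual sketch of data skipping: keep min/max metadata (a "synopsis")
# per block of column values, and skip whole blocks that cannot match the
# query predicate. Illustrative only; not IBM's BLU implementation.

BLOCK_SIZE = 1024

def build_synopsis(column):
    """Record the min and max value for each fixed-size block."""
    synopsis = []
    for start in range(0, len(column), BLOCK_SIZE):
        block = column[start:start + BLOCK_SIZE]
        synopsis.append((start, min(block), max(block)))
    return synopsis

def scan_greater_than(column, synopsis, threshold):
    """Return values > threshold, skipping blocks the synopsis rules out."""
    results = []
    for start, lo, hi in synopsis:
        if hi <= threshold:
            continue  # no value in this block can pass the predicate
        block = column[start:start + BLOCK_SIZE]
        results.extend(v for v in block if v > threshold)
    return results
```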

BLU has a bunch of other fancy bits in it, like parallel vector processing, core-friendly parallelism, scan-friendly caching, and data skipping. Suffice it to say, Big Blue’s geniuses have done the hard work with the aim of making it very easy for users and applications to get at large amounts of data quickly. The two-step process of preparing a BLU database (create your table, then load your data) is ridiculously simple compared to the standard relational database prep work of creating schemas, partitions, and indexes and then optimizing and tuning them all to work as needed.
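
For the curious, here’s roughly what that two-step prep looks like from Python using IBM’s ibm_db driver. The connection string and table are hypothetical placeholders; ORGANIZE BY COLUMN is the DB2 10.5 clause that creates a BLU column-organized table:

```python
# A minimal sketch of the two-step BLU prep via IBM's ibm_db Python driver.
# The DSN, credentials, and table below are placeholders, not a real system.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=testdb;HOSTNAME=example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=user;PWD=secret;", "", "")

# Step 1: create the table. No indexes, partitions, or tuning required;
# ORGANIZE BY COLUMN makes it a BLU column-organized table.
ibm_db.exec_immediate(conn, """
    CREATE TABLE sales (
        sale_date DATE,
        region    VARCHAR(32),
        amount    DECIMAL(12, 2)
    ) ORGANIZE BY COLUMN
""")

# Step 2: load the data.
ibm_db.exec_immediate(
    conn, "INSERT INTO sales VALUES ('2013-11-05', 'WEST', 199.99)")
```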

BLU Acceleration for Cloud combines the core BLU in-memory technology with Cognos BI tools, enabling users to query, visualize, and slice and dice big data sets stored in the cloud. IBM says a user can get up and running with BLU Acceleration for Cloud in less than an hour, at a cost of “less than a cup of coffee.” It’s currently available only as a technology preview. To apply for access, check out bluforcloud.com.

IBM says the BLU Acceleration for Cloud can be used by either regular business users or IT folks. But its new SmartCloud Analytics–Predictive Insights offering is geared specifically toward helping IT departments use big data technologies.

According to IBM, SmartCloud Analytics–Predictive Insights uses “cognitive” capabilities that allow it to read millions of log files per day and update configuration settings on the fly in response to changing business conditions. It’s a curious name for a service, especially considering that IBM announced the same day that it’s killing the SmartCloud in favor of SoftLayer. In its announcement, IBM said that SmartCloud Analytics–Predictive Insights will, in fact, run on the SoftLayer cloud, not the SmartCloud cloud.

Also trickling out of Big Blue’s big data product pipeline is SmartCloud Virtual Storage Center (which, again, won’t live on the SmartCloud but on the SoftLayer cloud instead). This new offering is aimed at automating storage tiering decisions in a cloud environment. Instead of requiring human minds to decide which types of data should be stored onsite versus in the cloud, the SCVSC does it for them automatically by “learning” data usage patterns over time.
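
The general idea behind usage-based tiering is straightforward. As a rough illustration (a toy heuristic, not SCVSC’s actual logic, with arbitrary thresholds), a tracker might count recent accesses per object and recommend fast onsite storage for hot objects and cheaper cloud storage for cold ones:

```python
# Toy sketch of usage-based storage tiering. Objects accessed often in a
# recent window stay on fast local storage; cold objects move to the cloud.
# The thresholds are arbitrary; this is not SCVSC's actual algorithm.
import time
from collections import defaultdict

class TieringTracker:
    def __init__(self, hot_window=7 * 24 * 3600, hot_hits=10):
        self.hot_window = hot_window   # look-back window, in seconds
        self.hot_hits = hot_hits       # accesses needed to count as "hot"
        self.accesses = defaultdict(list)

    def record_access(self, obj_id):
        self.accesses[obj_id].append(time.time())

    def recommend_tier(self, obj_id):
        cutoff = time.time() - self.hot_window
        recent = [t for t in self.accesses[obj_id] if t >= cutoff]
        return "onsite-ssd" if len(recent) >= self.hot_hits else "cloud"
```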

Several other big data products came out of the show, including InfoSphere Data Explorer version 9, the latest release of IBM’s search-based data exploration tool. IBM says the new version allows users to access data no matter where it resides, whether in on-premises systems or in the cloud. It also includes the BigIndex for scaling deployments across a cluster, a new application framework, and new change data capture (CDC) capabilities.

Big Blue also made a big performance claim for its Hadoop distribution, called InfoSphere BigInsights, running on its PureData all-in-one converged system. “In an audited benchmark,” IBM says, “InfoSphere BigInsights for Hadoop has been found to deliver an approximate 4X performance advantage on average over open source Hadoop.” The benchmark was conducted by the Securities Technology Analysis Center, a Warrenville, Illinois, company that does big data performance testing for a variety of vendors.

On the Hadoop and NoSQL fronts, IBM bolstered its InfoSphere Data Privacy offering to let users monitor how sensitive data is accessed from these systems and to mask that data for better security. It also unveiled a new release of its Information Governance Dashboard, which displays confidence levels, and announced a new, smaller configuration of the IBM PureData System for Transactions for continuously available transactional databases.
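
Data masking itself is a simple idea. As a toy illustration (not IBM’s InfoSphere masking engine), the snippet below redacts everything but the last four digits of anything shaped like a US Social Security number:

```python
# Toy data-masking example: redact all but the last four digits of values
# that look like US Social Security numbers. Illustrative only; this is
# not how IBM's InfoSphere Data Privacy tooling works internally.
import re

SSN = re.compile(r"\b(\d{3})-(\d{2})-(\d{4})\b")

def mask_ssns(text):
    return SSN.sub(r"***-**-\3", text)

print(mask_ssns("Customer SSN: 123-45-6789"))  # Customer SSN: ***-**-6789
```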

While IBM may lack the panache (and the intuitive product naming skills) of smaller and nimbler big data startups, it’s nevertheless a giant when it comes to big data. Thanks to its sheer breadth and depth of products and services, IBM is consistently ranked as one of the bigger players in the big data space. In fact, Wikibon analyst Jeff Kelly ranked IBM as the biggest of the big data vendors, with $1.3 billion in big data-related revenue this year.

Related Items:

The Big Data Market By the Numbers

IBM Ships Hadoop Appliance for the Big-Skills Challenged

IBM Announces “BLU Acceleration” and PureData System for Hadoop
