
February 17, 2014

In-Memory Computing Is the Key to Real-Time Analytics

Dr. William L. Bain, CEO, ScaleOut Software, Inc.

Real-time analytics offers enterprises the ability to examine “live,” fast-changing data within operational systems and obtain feedback in milliseconds to seconds. For example, a hedge fund in a financial services organization can track the effect of market fluctuations on its portfolios (“strategies”) of long and short equity positions in various market areas (high tech, real estate, etc.) and immediately identify strategies requiring rebalancing. An e-commerce company can reconcile orders and inventory in real time to avoid a shortfall in inventory and ensure that orders are accurately filled.

Use Data-Parallel Computing to Avoid Data Motion

The key to real-time performance, especially for growing workloads, is memory-based, data-parallel computing. Built on fundamentally the same parallel computing architecture as the supercomputers used in scientific applications, in-memory data grids (IMDGs) run on a cluster of servers that hold and analyze memory-based data. Because the grid scales out simply by adding servers, IMDGs keep access times constant, which is exactly the characteristic needed by applications that must handle growing workloads. More significantly, some IMDGs can host data-parallel applications that update and analyze the data stored on the grid’s servers. This is the key to their ability to perform real-time analytics.
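
To make the data-parallel pattern concrete, here is a minimal sketch in plain Java of the shape such a computation takes: each simulated grid server analyzes only the objects it holds, and the per-server partial results are merged into a single answer. The Partition type and analyzeLocal method are illustrative stand-ins, not the API of any particular IMDG product.

import java.util.*;

// Minimal sketch of the data-parallel pattern an IMDG uses: each "server"
// analyzes only its locally stored objects, and the per-server partial
// results are then merged into a single global result.
public class DataParallelSketch {

    // A partition stands in for the objects one grid server holds in memory.
    record Partition(String server, List<double[]> localObjects) {}

    // Analyze one partition entirely in place -- no data leaves the server.
    static double analyzeLocal(Partition p) {
        return p.localObjects().stream()
                .mapToDouble(values -> Arrays.stream(values).sum())
                .sum();
    }

    public static void main(String[] args) {
        List<Partition> grid = List.of(
            new Partition("server-1", List.of(new double[]{1, 2}, new double[]{3})),
            new Partition("server-2", List.of(new double[]{4, 5}))
        );

        // Run the local analyses in parallel, then merge the partial results.
        double globalResult = grid.parallelStream()
                .mapToDouble(DataParallelSketch::analyzeLocal)
                .sum();

        System.out.println("Merged result: " + globalResult); // prints 15.0
    }
}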

The performance benefits of the data-parallel approach are dramatic. To illustrate this, consider some performance measurements for a risk-analysis computation in financial services, modeled using a technique called “back testing.” This analysis compares a variety of stock trading algorithms using recorded price histories for a collection of equities. Each price history was stored in a single object within the IMDG, and the servers were assigned equities to analyze. (Note that the IMDG’s in-memory storage also could dynamically update the price histories from a ticker feed to enable real-time feedback to a trading system.)
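
As a rough illustration of the back-testing computation, the following sketch (plain Java) scores a few toy trading algorithms against the price histories one server holds locally, in parallel; the PriceHistory type and the algorithms themselves are hypothetical, and in a real grid the per-server results would then be combined across the cluster.

import java.util.*;
import java.util.function.ToDoubleFunction;

// Sketch of the back-testing computation: each equity's price history is one
// in-memory object, and each trading algorithm is scored against the
// histories this "server" holds locally, in parallel.
public class BacktestSketch {

    record PriceHistory(String symbol, double[] closes) {}

    // A trading algorithm maps a price history to a hypothetical return.
    static final Map<String, ToDoubleFunction<PriceHistory>> ALGORITHMS = Map.of(
        "buy-and-hold",   h -> h.closes()[h.closes().length - 1] / h.closes()[0] - 1.0,
        "first-day-exit", h -> h.closes()[1] / h.closes()[0] - 1.0
    );

    public static void main(String[] args) {
        // The price-history objects stored on this server.
        List<PriceHistory> local = List.of(
            new PriceHistory("AAAA", new double[]{10, 11, 12, 13}),
            new PriceHistory("BBBB", new double[]{20, 19, 21, 22})
        );

        // Score every algorithm against every local history in parallel;
        // in a real grid, each server's averages would then be combined.
        ALGORITHMS.forEach((name, algo) -> {
            double avgReturn = local.parallelStream()
                    .mapToDouble(algo)
                    .average()
                    .orElse(0.0);
            System.out.printf("%s: average return %.2f%%%n", name, avgReturn * 100);
        });
    }
}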

The following chart compares the conventional “task-parallel” technique, in which the servers analyze a randomly assigned set of equities, with the data-parallel technique, in which the servers examine only the equities stored locally. Note how the data-parallel approach (red line) maintains linear performance scaling as the workload increases and IMDG servers are added to the cluster. In contrast, the task-parallel approach (blue line) fails to scale because it must fetch objects from remote servers, which creates substantial networking overhead.

[Chart: performance scaling of the data-parallel (red) and task-parallel (blue) approaches as the workload grows and IMDG servers are added]

By avoiding data motion, the data-parallel approach delivers much higher performance. All data is analyzed in place, without the need to send it over the network to another server for analysis. IMDGs that perform data-parallel analysis take full advantage of this linear speedup to deliver results with the lowest possible latency. This enables them to run real-time analytics on fast-changing data held in the IMDG, combining the IMDG’s in-memory storage with scalable computation to implement complex applications.

An Example in Financial Services

Consider the example of a hedge fund tracking its trading strategies for the market sectors it follows. The data for these strategies can be stored within an IMDG as a collection of objects, each of which represents a market sector, such as high tech or real estate, and holds the equity positions and trading rules for that sector. Because the IMDG automatically distributes the objects in a collection across all grid servers, it ensures that data-parallel analysis will be load-balanced across the cluster.
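
A minimal sketch of what such a per-sector strategy object might look like follows; the Strategy and Position types and the exposure-limit rule are illustrative assumptions, not a product schema.

import java.util.*;

// Sketch of the kind of per-sector object a hedge fund might store in the
// grid: the sector's equity positions plus a simple rebalancing rule.
public class StrategySketch {

    record Position(String symbol, long shares, double lastPrice) {}

    record Strategy(String sector, List<Position> positions, double exposureLimit) {

        double marketValue() {
            return positions().stream()
                    .mapToDouble(p -> p.shares() * p.lastPrice())
                    .sum();
        }

        // A toy rule: flag the strategy when its value drifts past the limit.
        boolean needsRebalancing() {
            return marketValue() > exposureLimit();
        }
    }

    public static void main(String[] args) {
        Strategy highTech = new Strategy(
            "high tech",
            List.of(new Position("AAAA", 1_000, 52.0),
                    new Position("BBBB", -500, 18.0)),   // a short position
            60_000.0
        );
        System.out.println(highTech.sector() + " needs rebalancing: "
                + highTech.needsRebalancing());
    }
}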

The IMDG continuously runs a data-parallel computation that both updates each strategy object with a snapshot of market price changes from an incoming market feed and evaluates the strategy to determine whether stock trades are needed. By performing this analysis in parallel across all strategies, the IMDG generates results in milliseconds instead of the several minutes needed by conventional disk-based, sequential analysis. No data motion is needed to perform the data-parallel analysis, so maximum performance is achieved.
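
The per-strategy step of that computation could look something like the following sketch: apply the latest price snapshot to the strategy's state in place, then evaluate its rule and emit an alert if rebalancing is needed. The applySnapshot and evaluate methods and the exposure-limit rule are again illustrative assumptions.

import java.util.*;

// Sketch of the per-strategy step the data-parallel computation runs in
// place: apply the latest price snapshot to the strategy's state, then
// evaluate its rule to decide whether a rebalancing alert is needed.
public class UpdateAndEvaluateSketch {

    static class Strategy {
        final String sector;
        final Map<String, Long> shares;                   // symbol -> signed share count
        final Map<String, Double> prices = new HashMap<>();
        final double exposureLimit;

        Strategy(String sector, Map<String, Long> shares, double exposureLimit) {
            this.sector = sector;
            this.shares = shares;
            this.exposureLimit = exposureLimit;
        }

        // Step 1: fold the market snapshot into the strategy's local state.
        void applySnapshot(Map<String, Double> snapshot) {
            snapshot.forEach((symbol, price) -> {
                if (shares.containsKey(symbol)) prices.put(symbol, price);
            });
        }

        // Step 2: evaluate the strategy's rule; return an alert if it fires.
        Optional<String> evaluate() {
            double exposure = shares.entrySet().stream()
                    .mapToDouble(e -> e.getValue() * prices.getOrDefault(e.getKey(), 0.0))
                    .sum();
            return exposure > exposureLimit
                    ? Optional.of("Rebalance " + sector + " (exposure " + exposure + ")")
                    : Optional.empty();
        }
    }

    public static void main(String[] args) {
        Strategy realEstate = new Strategy("real estate",
                Map.of("RRRR", 2_000L, "SSSS", -300L), 50_000.0);
        realEstate.applySnapshot(Map.of("RRRR", 30.0, "SSSS", 12.0));
        realEstate.evaluate().ifPresent(System.out::println);   // alert: exposure 56400.0
    }
}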

The following diagram illustrates how the IMDG hosts a set of strategies and performs this analysis while updating them with a live market feed containing snapshots of price changes. The analysis produces a stream of alerts to the trader (or to an automated trading system) for strategies that need rebalancing. The diagram shows the data-parallel analysis being performed by a technique called parallel method invocation (PMI), which executes the analysis code in parallel on all objects and then globally combines the results for delivery to the trader.
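
The same eval/merge shape can be sketched in plain Java, with parallel streams standing in for the grid's invocation engine; the eval and merge methods and the Strategy record below are illustrative assumptions, not the actual PMI API.

import java.util.*;

// Sketch of the eval/merge shape of a parallel method invocation: an eval
// method runs against every strategy object in parallel, each producing zero
// or more alerts, and a merge step combines the per-object results into the
// single alert stream delivered to the trader.
public class PmiSketch {

    record Strategy(String sector, double exposure, double limit) {}

    // eval: examine one strategy in place and report an alert if needed.
    static List<String> eval(Strategy s) {
        return s.exposure() > s.limit()
                ? List.of("Rebalance " + s.sector())
                : List.of();
    }

    // merge: combine two partial alert lists into one.
    static List<String> merge(List<String> a, List<String> b) {
        List<String> combined = new ArrayList<>(a);
        combined.addAll(b);
        return combined;
    }

    public static void main(String[] args) {
        List<Strategy> strategies = List.of(
            new Strategy("high tech",   72_000, 60_000),
            new Strategy("real estate", 41_000, 50_000),
            new Strategy("energy",      95_000, 80_000)
        );

        // Invoke eval on every object in parallel, then merge the results.
        List<String> alerts = strategies.parallelStream()
                .map(PmiSketch::eval)
                .reduce(List.of(), PmiSketch::merge);

        alerts.forEach(System.out::println);   // alerts for high tech and energy
    }
}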

The net effect is that the hedge fund can now update its strategies and obtain alerts in real time to rebalance its portfolios based on current market conditions. A proof-of-concept implementation using 2K strategies tracking a total of 40K positions on a cluster of four servers delivered alerts within about 330 milliseconds. This was measured to be more than 40X faster than running the same analysis on the Apache Hadoop platform and demonstrates the power of IMDGs to perform real-time analytics.

Summing Up

In-memory data grids offer a powerful yet easy-to-use platform for hosting fast-changing, in-memory data and running highly scalable, data-parallel computations. This allows IMDGs to be seamlessly integrated into operational systems to perform real-time analytics on “live” data, opening up many new opportunities to add value to these systems.
