April 26, 2016

Redis, Samsung Team on In-Memory Boost

Seeking a “generational change” in the economics of in-memory computing, database specialist Redis Labs Inc. has collaborated with South Korean memory chip powerhouse Samsung Electronics to accelerate the processing and analysis of bulging datasets using next-generation memory technology designed to significantly cut memory costs.

Redis Labs, Mountain View, Calif., said this week its flash-based platform running on standard x86 servers is available now as part of the company’s enterprise cluster. Along with standard SSD instances available on public clouds, a souped-up version that integrates SSDs based on NVMe (Non-Volatile Memory Express) such as Samsung’s PM1725 drive also was unveiled.

The upgrades address limits in operational processing and analysis of very large datasets where in-memory processing is often constrained by the cost of DRAM. Redis claims the combination of flash and DRAM promises to reduce latency while boosting throughput and reducing datacenter operating costs.

Meanwhile, Samsung’s (KRX: 005930) NVMe persistent memory technology is touted as accelerating processing and analytics performance without the additional costs associated with standard flash memory. Redis said it joined forces with Samsung to demonstrate throughput of 2 million operations per second with sub-millisecond latency, along with 1 Gb/s of disk bandwidth, running on a standard Dell Xeon-based server. Eighty percent of the dataset was served from the NVMe SSDs while the remaining 20 percent was held in standard DRAM.
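The 80/20 split described above, with most of the dataset on flash and only the hottest slice in DRAM, can be sketched as a toy two-tier store in Python. This is purely an illustration of the tiering idea; the `TieredStore` name and its eviction policy are hypothetical, and Redis on Flash's actual implementation is far more sophisticated:

```python
class TieredStore:
    """Toy two-tier key-value store: a small 'DRAM' tier for the hottest
    values, backed by a larger 'flash' tier for everything else.
    An illustration of the tiering concept, not Redis Labs' implementation."""

    def __init__(self, ram_capacity):
        self.ram_capacity = ram_capacity
        self.ram = {}    # hot tier (fast, capacity-limited)
        self.flash = {}  # cold tier (larger; slower in a real system)

    def set(self, key, value):
        # New writes land in RAM; overflow spills to the flash tier.
        self.ram[key] = value
        while len(self.ram) > self.ram_capacity:
            # Evict the oldest-inserted (FIFO) entry to flash.
            cold_key = next(iter(self.ram))
            self.flash[cold_key] = self.ram.pop(cold_key)

    def get(self, key):
        if key in self.ram:      # RAM hit: cheapest path
            return self.ram[key]
        if key in self.flash:    # flash hit: promote the value back to RAM
            self.set(key, self.flash.pop(key))
            return self.ram[key]
        return None


store = TieredStore(ram_capacity=20)
for i in range(100):
    store.set(f"key:{i}", i)

# With 100 keys and room for 20 in "RAM", 80 percent of the dataset
# ends up in the flash tier, mirroring the 80/20 split described above.
print(len(store.flash), len(store.ram))  # 80 20
```

In the real system the win comes from NVMe flash being close enough to DRAM speed that serving most reads from the cold tier still meets sub-millisecond latency targets, at a fraction of DRAM's cost per gigabyte.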

The combination of Samsung’s next-generation NVMe-based drives and Redis flash-based processing is targeted at leveraging HPC performance for crunching extremely large datasets, the partners added. In benchmark tests, the partners claimed the Samsung drives delivered a 40-fold increase in throughput. At the same time, they said the cost of the new SSDs “is likely to be only incremental compared to [SATA]-based SSDs.”

Among the use cases for advanced in-memory databases are on-demand social networks built on minimal-latency architectures, in which the transition from, for example, viewing a web page to joining an interactive forum must be nearly instantaneous. Redis Labs’ database customers typically handle between 400,000 and 1 million user requests a day, and those requests to and from third-party websites have to be served at sub-millisecond latencies. As datasets explode, meeting those latency requirements with in-memory databases using DRAM alone has proven unrealistic, hence Redis Labs’ partnership with Samsung.

Along with standard x86 servers and SSD-backed cloud instances, the Redis flash configuration also runs on Power8 platforms. The company said it is also making the flash platform available to its cloud customers running on dedicated virtual private clouds.

The alliance with Samsung reflects ongoing attempts by the NoSQL leader to improve its in-memory processing performance. For example, it released a Spark connector package earlier this year that includes a library for writing to and reading from a Redis cluster. The connector package is intended to forge closer alignment between Spark and Redis clusters, thereby reducing network overhead while improving processing performance.
