
Tag: shared memory

Big Data – Scale Up or Scale Out or Both

The term “Big Data” is generally used to describe datasets that are too large or complex to be analyzed with standard database management systems. When a dataset qualifies as “Big Data” is a moving target, since the amount of data created each year grows, as do the software tools and hardware (speed and capacity) available to make sense of the information. Many describe “Big Data” in terms of volume (amount of data), velocity (speed of data in and out) and variety of data. Read more…

Bar Set for Data-Intensive Supercomputing

This week the San Diego Supercomputer Center introduced the flash-based, shared memory Gordon supercomputer. Built by Appro and capable of roughly 36 million IOPS, the system prompted the center's director to declare, without exaggeration, that a new era of data-intensive science has begun. Read more…

Putting Shared Memory to the Test

A new pilot project in Europe seeks to show the value of shared memory systems (this time from an IBM, Numascale and Gridcore partnership) as national initiatives call for massive systems to handle massive data. Read more…

Interview: Cray CEO Sees Big Future in Big Data

During this year's annual Supercomputing Conference (SC11) in Seattle, Cray's home turf, we caught up with the company's CEO, Peter Ungaro, to talk about the increasing convergence of big data and traditional supercomputing, and what this blend could portend for Cray going forward. Read more…
