February 15, 2012

Hollywood Sharpens Focus on Storage

Robert Gelber

Movie studios have come a long way since silent films, but their advances in technology have introduced significant challenges in data storage and accessibility.

Data storage walls are nothing new to the entertainment industry; any feature-length film created by Pixar, for example, carries large storage requirements. The problem now is that newer technologies depend on storage more heavily than ever.

An uncompressed 1080p recording can consume roughly 1 terabyte per hour, while an hour of uncompressed 4K footage (4096 × 2160) can require nearly 3.5 terabytes, more than most single hard drives can hold. With data at that scale, the consumer-style approach to storage and retrieval is cumbersome at best and unproductive at worst.
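
As a rough sanity check on those figures, the arithmetic works out if one assumes 10-bit RGB (30 bits per pixel) at 30 frames per second; actual bit depths, chroma formats and frame rates vary. A minimal Python sketch:

    # Back-of-the-envelope raw video storage, assuming 30 bits per pixel
    # (10-bit RGB) and 30 frames per second; real camera formats vary.
    def raw_terabytes_per_hour(width, height, bits_per_pixel=30, fps=30):
        bytes_per_frame = width * height * bits_per_pixel / 8
        return bytes_per_frame * fps * 3600 / 1e12  # decimal terabytes

    print(raw_terabytes_per_hour(1920, 1080))  # ~0.84 TB/hour for 1080p
    print(raw_terabytes_per_hour(4096, 2160))  # ~3.58 TB/hour for 4K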

As part of its pitch to address the growing data needs, and possible new revenue streams, of the media and entertainment industry, storage vendor Amplidata has announced a panel discussion with members of Warner Bros. Entertainment (and other guests yet to be named) at the Creatasphere Digital Asset Management conference in Los Angeles.

Topics on tap include the growth of storage requirements from terabytes into petabytes, the industry's current data challenges, and emerging approaches to storage and retrieval such as object-based storage.

Amplidata claims that its product is thousands of times more reliable than RAID technologies and promises 50 to 70 percent reductions in power requirements and a 90 percent saving in cost of ownership. Any one of those claims alone is worth raising an eyebrow over.

According to the company, an object is stored as an array of “equations”, checkblocks computed by its BitSpread erasure-coding software (the codec). In principle this functions somewhat like HDFS, Hadoop’s distributed file system, but with a key twist: no full copy of a file is stored on any single device, just a large series of checkblocks, with far less overhead than Hadoop’s triple-copy approach.

In practice, this means the number of checkblocks stored creates enough durability that if a failure occurs, the data remains fully accessible and its integrity intact. Many parallel agents “pick up the slack” and automatically regenerate new checkblocks without user intervention. Retrieval works the same way: the required checkblocks are sent from the cluster to the decoder, which recomputes the data essentially by “solving” a subset of these equations. This approach makes data highly durable and eliminates the long rebuild times suffered by RAID systems, especially on large (multi-terabyte) disk drives.
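
Amplidata has not published BitSpread’s internals, so the following is only a rough illustration of the general principle rather than the company’s actual codec: a toy (n, k) erasure code in Python, Reed-Solomon-style over GF(256), in which data is cut into k fragments, n − k extra checkblocks are computed, and the original can be rebuilt from any k of the n pieces. The parameters here (k = 4, n = 7, the field arithmetic) are illustrative assumptions.

    # Toy (n, k) erasure code over GF(2^8), for illustration only.
    # Data is split into k fragments; n - k checkblocks are added; any k
    # of the n pieces are enough to rebuild the original.

    GF_EXP = [0] * 512              # antilog table, field polynomial 0x11d
    GF_LOG = [0] * 256
    _x = 1
    for _i in range(255):
        GF_EXP[_i] = _x
        GF_LOG[_x] = _i
        _x <<= 1
        if _x & 0x100:
            _x ^= 0x11D
    for _i in range(255, 512):
        GF_EXP[_i] = GF_EXP[_i - 255]

    def gf_mul(a, b):
        if a == 0 or b == 0:
            return 0
        return GF_EXP[GF_LOG[a] + GF_LOG[b]]

    def gf_div(a, b):
        if a == 0:
            return 0
        return GF_EXP[(GF_LOG[a] - GF_LOG[b]) % 255]

    def lagrange_eval(points, x0):
        # Evaluate the unique polynomial through `points` at x0 (GF(2^8)).
        result = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj == xi:
                    continue
                num = gf_mul(num, x0 ^ xj)   # subtraction is XOR in GF(2^8)
                den = gf_mul(den, xi ^ xj)
            result ^= gf_mul(yi, gf_div(num, den))
        return result

    def encode(data, k, n):
        # Split `data` into k data blocks, then append n - k checkblocks.
        if len(data) % k:
            data += bytes(k - len(data) % k)          # pad to a multiple of k
        size = len(data) // k
        blocks = [bytearray(data[i * size:(i + 1) * size]) for i in range(k)]
        for x in range(k + 1, n + 1):                 # checkblocks live at x = k+1..n
            parity = bytearray(size)
            for pos in range(size):
                pts = [(i + 1, blocks[i][pos]) for i in range(k)]
                parity[pos] = lagrange_eval(pts, x)
            blocks.append(parity)
        return blocks                                 # block i corresponds to x = i + 1

    def reconstruct(survivors, k, size):
        # Rebuild the k data blocks from any k surviving (index, block) pairs.
        out = [bytearray(size) for _ in range(k)]
        for pos in range(size):
            pts = [(idx + 1, blk[pos]) for idx, blk in survivors[:k]]
            for i in range(k):
                out[i][pos] = lagrange_eval(pts, i + 1)
        return b"".join(out)

    original = b"uncompressed camera negative, reel 42"
    k, n = 4, 7                                       # survives the loss of any 3 pieces
    blocks = encode(original, k, n)
    survivors = [(i, blocks[i]) for i in (0, 2, 5, 6)]   # blocks 1, 3 and 4 are lost
    recovered = reconstruct(survivors, k, len(blocks[0]))
    assert recovered.rstrip(b"\x00") == original

The point of the exercise: with k = 4 and n = 7, any three devices can fail and the data is still recoverable, at an overhead of 75 percent rather than the 200 percent of three full copies.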

Keeping with the company’s entertainment focus, last October Amplidata announced that it was building an archive for the venerable Montreux Jazz Festival to store its 10,000-plus hours of musical performances.

Based out of Belgium and Redwood City, Amplidata was founded in 2008. The founding trio each found success through sales of their previous companies to Symantec, Terremark and Sun Microsystems. Amplidata was part of Incubaid, a group of companies focused on advancements in data center technologies.

Related Stories

Live from SC11: The Storage Cost of Rendering

Big Data and The SSD Mystique

Fusion-io Flashes the Future of Storage
