February 07, 2013

Gridstore Faces Down Rip and Replace


The traditional model of jettisoning your old system and migrating to a new one in order to scale with your data is no longer necessary, according to storage startup Gridstore. 

The company claims that its grid-based approach to software defined storage allows systems to scale as the data scales, eliminating the need for rip-and-replace migrations when a system hits the ceiling on storage capacity.   

In a recent technical overview, CEO Kelly Murphy claims that by virtualizing both the processors and the entire storage stack, and distributing them across a grid architecture, systems can achieve “unlimited” scalability in bandwidth, capacity, processing, and protection. According to Murphy, storage and processing capability are increased by adding new building blocks (nodes) of storage to a standard Ethernet network.

Murphy asserts that a chief benefit of this model is that the system grows more powerful in capacity and parallel network bandwidth with each storage node added. With every node pooled, each job is distributed across the network, lightening the load per node. As the pool expands, each node does less work than it did previously, so jobs complete in less time. The nodes themselves become very simple devices, taking packets off the network and writing them to disk.
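
As a rough illustration of that scaling argument, the sketch below models a fixed job spread over a growing pool. The numbers and the one-gigabit-per-node bandwidth figure are hypothetical, not Gridstore specifications.

# Illustrative only: per-node load for a fixed workload as nodes are added.
# All figures are made up for the example, not Gridstore measurements.

job_size_gb = 100             # total data a single job must move
node_bandwidth_gbps = 1       # assumed per-node network bandwidth

for nodes in (3, 6, 12, 24):
    per_node_gb = job_size_gb / nodes               # each node handles a smaller slice
    aggregate_gbps = nodes * node_bandwidth_gbps    # parallel bandwidth grows with the pool
    print(f"{nodes:>2} nodes: {per_node_gb:5.1f} GB per node, "
          f"{aggregate_gbps} Gbps aggregate bandwidth")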

Through Gridstore’s software defined storage model, users establish as many parallel virtualized controllers as needed to eliminate the processing bottlenecks common with a central controller. By distributing data across a virtualized pool of storage, the system aims to provide an “unlimited” amount of parallel processing power balanced to the demands of the storage system, with the assumption that the storage and processing pool can grow without limit as the system requires.
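
The description suggests controllers that stripe data across whatever nodes are in the pool. The following is a minimal, hypothetical sketch of that kind of round-robin placement; the block size, function, and node names are invented for illustration and do not reflect Gridstore's implementation.

# Hypothetical sketch: a virtual controller spreading a file's blocks
# across a pool of storage nodes so reads and writes run in parallel.

BLOCK_SIZE = 4096  # bytes per block (illustrative)

def stripe_blocks(data: bytes, nodes: list) -> dict:
    """Assign consecutive blocks to nodes round-robin."""
    placement = {node: [] for node in nodes}
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for index, block in enumerate(blocks):
        placement[nodes[index % len(nodes)]].append((index, block))
    return placement

# Example: ten blocks spread over a four-node pool.
layout = stripe_blocks(b"x" * (BLOCK_SIZE * 10),
                       ["node-1", "node-2", "node-3", "node-4"])
for node, blocks in layout.items():
    print(node, [i for i, _ in blocks])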

Murphy explains that with this approach, you can start with the capacity you need today and grow to any size you need in the future. “Effectively, there is no endpoint on your storage system anymore – if you need to double your capacity tomorrow, now you can,” says Murphy.

So what happens if a storage node fails? Murphy says that the Gridstore solution includes GridProtect, a process in which the virtual controllers write data encoded in such a way that if you lose any number of storage nodes, the data can still be put back together without loss or disruption. Murphy claims that GridProtect goes well beyond the capabilities of RAID, protecting against any combination of network failures, node failures, disk failures, and even silent bit rot.
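
Gridstore has not published GridProtect's encoding here, but the behavior described, rebuilding data after node failures from what remains, is the general idea behind erasure coding. The sketch below shows the simplest possible form, single-parity XOR across three data fragments; a scheme protecting against multiple simultaneous failures would use a more general code.

# Generic single-parity example of the erasure-coding idea, not Gridstore's code.
# Data fragments live on separate nodes; a parity fragment lets any one lost
# fragment be rebuilt from the survivors.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data_nodes = [b"\x01\x02\x03", b"\x10\x20\x30", b"\x0a\x0b\x0c"]

# The parity node stores the XOR of all data fragments.
parity = data_nodes[0]
for fragment in data_nodes[1:]:
    parity = xor_bytes(parity, fragment)

# Simulate losing node 1 and rebuilding it from the survivors plus parity.
recovered = parity
for i, fragment in enumerate(data_nodes):
    if i != 1:
        recovered = xor_bytes(recovered, fragment)

assert recovered == data_nodes[1]  # the lost fragment is reconstructed exactly
print("rebuilt fragment:", recovered.hex())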

Theoretically, the result is a system that scales by adding storage nodes as the data grows, eliminating the need to migrate to a new system once the current one reaches capacity.

Related Articles:

ScaleBase, 451 Question Database Rip and Replace

Objectifying Big Data Storage

Versant Throws Magic Cube at Big Data
