January 28, 2013

CTO Sees Virtualized Big Data as Next Challenge

Nicole Hemsoth

It’s no secret that the big data phenomenon is reshaping overall approaches to enterprise IT, but it’s just one part of a grander restructuring that has been led by other trends, including wider-scale adoption of public cloud resources and virtualization.

Not long ago, virtualization buff and Silver Peak CTO David Hughes argued that the convergence of trends like virtualization and increasingly complex data volumes that need to be moved quickly marks another shift for IT leaders.

As Hughes told Datanami, the biggest changes have to do with the networking department and moving big data over distance. "With virtualization, we will find more storage and application owners wanting to solve the data mobility challenge without involving the networking department. Things like software-defined acceleration are making it possible for storage and server administrators to move and accelerate their workloads with point-and-click simplicity…all from the virtualization management console."

We discussed these issues in greater detail with the virtualization CTO in the interview below, which is capped by a video at the bottom of this article exploring the broader virtualization shift for IT organizations.

Datanami: When it comes to conversations about “big data” what is it about software-defined acceleration that is overlooked?

Hughes: When it comes to “big data” and software-defined acceleration, it’s less about what’s being overlooked and more about the fact that it hasn’t been done yet. As data volumes increase and organizations begin pulling in data from multiple sources across a wide area network (WAN), they must contend with the adverse effects of distance, network quality and capacity, all of which can slow down the transfer and accessibility of that data. Software-defined acceleration provides a simpler and more accessible model for moving larger volumes of data more quickly over longer distances.
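To make concrete why distance and network quality, and not just raw link capacity, throttle these transfers, here is a rough back-of-envelope sketch (an illustration, not Silver Peak's method) using the well-known Mathis approximation, which bounds loss-limited TCP throughput at roughly MSS / (RTT × √loss):

```python
# Rough illustration (not Silver Peak's implementation): why distance and
# network quality, rather than link capacity alone, limit long-haul transfers.
# Uses the Mathis et al. approximation for loss-limited TCP throughput:
#   throughput <= MSS / (RTT * sqrt(loss))

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Approximate ceiling for a single TCP flow, in megabits per second."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = mss_bytes / (rtt_s * loss_rate ** 0.5)
    return bytes_per_s * 8 / 1e6

# Same 0.1% packet loss, same 1460-byte segments -- only the distance changes.
for label, rtt_ms in [("metro (5 ms RTT)", 5),
                      ("cross-country (70 ms RTT)", 70),
                      ("intercontinental (150 ms RTT)", 150)]:
    print(f"{label}: ~{tcp_throughput_mbps(1460, rtt_ms, 0.001):.0f} Mbps per flow")
```

At the same 0.1 percent loss rate, stretching the round-trip time from 5 ms to 150 ms cuts a single flow from roughly 74 Mbps to under 3 Mbps, no matter how fat the underlying pipe is; that gap between purchased capacity and delivered throughput is what acceleration products aim to close.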

As well as having to move vast amounts of big data, it is also crucial that this data be protected and kept secure, both for regulatory and compliance reasons and simply to maintain customer trust. If you look at recent natural disasters, including Hurricane Sandy or the Japan tsunami and the resulting Fukushima Daiichi nuclear disaster, we find that it is no longer sufficient to replicate data across town or even in the same state. You can no longer replicate from New York to New Jersey; you need to replicate data over greater distances.

IDC estimates that 35 percent of the information in the digital universe needs protection, but that only 19 percent actually is protected. A recent study conducted by Forrester Consulting on behalf of Silver Peak also found that a large majority, 72 percent, agree or strongly agree that they would like to replicate more of their application data than they do currently, and 62 percent would like to replicate more frequently. Big data is more than a storage and server challenge; it is a challenge for the network, and software-defined acceleration stands to play a critical role in big data in the future.

Datanami: What seem to be the hottest industries for what you and your competitors provide, and what is it about their workloads that makes them optimal?

Hughes: The industry for data acceleration over distance is fairly horizontal.  As data volumes increase, disaster recovery requirements become more ubiquitous and applications move to the cloud, IT professionals across a variety of vertical markets need to move more data quickly over longer distances.  Silver Peak data acceleration software overcomes the distance, quality and capacity challenges that are inherent in today’s wide area networks. 

Industries where we do see larger volumes of data and the need for higher-performance, higher-capacity technologies are high-tech and oil/gas. With high-tech in particular, if you look at the Googles, Amazons and Facebooks of the world, they are dealing with lots and lots of data being transmitted on a global scale. But it’s more than just providing accessibility to that data on a global scale; it’s also about protecting that data to ensure the availability and sustainability of their business. These requirements place a huge dependency on the wide area network.

Datanami: What is the next generation of data movement and management challenges we’ll face in 2013?

Hughes: The next generation of data movement and management challenges will be focused on data replication and the movement of data over greater distances. As data volumes grow and data replication requirements increase, more strain is being placed on wide area networking infrastructure.

A lot of people assume that because bandwidth is getting cheaper and bandwidth rates are going up, the WAN bandwidth bottleneck is going away or will go away. What’s interesting is that the growth of data continues to exceed the rate at which new services, new technologies and bandwidth upgrades are being deployed within carrier networks. The increase in traffic, whether it be the amount of storage data, the uptake of replication data, or the overall growth of traffic on the Internet, far outpaces the level of innovation and the price drops you’re seeing in enterprise WAN services. This translates into there being a worse WAN bottleneck today than there was 10 years ago.
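A simple transfer-time calculation illustrates the squeeze Hughes describes (the figures below are illustrative assumptions, not numbers from the interview): even on an ideal, fully utilized link with no protocol overhead, a nightly replication set quickly outgrows common WAN speeds.

```python
# Back-of-envelope sketch with illustrative numbers (not from the interview):
# hours needed to push a replication set across WAN links of various sizes,
# assuming full utilization and ignoring latency, loss and protocol overhead.

def hours_to_transfer(data_tb: float, link_mbps: float) -> float:
    bits = data_tb * 1e12 * 8            # decimal terabytes -> bits
    seconds = bits / (link_mbps * 1e6)   # ideal, contention-free transfer
    return seconds / 3600

for link_mbps in (100, 622, 1000, 10000):
    print(f"{link_mbps:>5} Mbps link: "
          f"{hours_to_transfer(10, link_mbps):5.1f} hours to move 10 TB")
```

On paper, a 1 Gbps link barely fits 10 TB into a 24-hour window; factor in the latency and loss effects sketched earlier and the usable window shrinks further, while the data set itself keeps growing year over year.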
