November 29, 2013

Big Data Taxonomy: The Value in Big Data

Scott Pearson

Even considering the Vs of Volume, Velocity, Variety, and Veracity, understanding Big Data can be vexing and vague. One cannot attend a technology or business event these days without being buried in Big Data buzz. The very term Big Data is ubiquitous and pervasive, yet unclear and misunderstood. It can be construed as merely hype or a marketing trend, or, more often these days, invoked to alarm consumers over privacy concerns. However, the most objective definition of Big Data is “a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.”

Yet the operative V-word for Big Data is Value. 

Or, to quote my wife, “What good is Big Data unless it makes you money?” The truth is, Big Data is a disruptive agent of change impacting not only my marriage but also technology, society, and business. The much-debated marketing trend (or even hype) is actually a crucial and integral phase of Big Data’s transition from a nebulous niche to mainstream enterprise and economic viability. Within a company’s innovation cycle, the marketing-trend phase provides a common language for people to define a problem and a path for finding solutions. Of course Silicon Valley is a branding machine; the entire world knows of Google, Facebook, and Twitter. Yet in many instances upstart companies may have innovative and compelling technologies to address Big Data needs, but cannot communicate to clients how adopting those technologies creates value, sustainable economic models, and viable business endeavors.

To discover the value in Big Data, one must first understand the primary business environment where Big Data currently resides. At the foundation of the Big Data architecture are open source software offerings like HDFS (Hadoop), OpenStack, and NoSQL, which in most cases are venture-capital funded, as are myriad other data analytics start-ups. In addition, many compute technologies originating in High Performance Computing (HPC) have historically been situated in the scientific and academic community, funded by public entities in the name of research and national security. Furthermore, the commoditization of servers and networking hardware creates further complications, as evidenced by the Open Compute Project vis-à-vis Facebook. While this vast landscape is certainly exciting, no one yet knows who the true winners and losers will be in terms of generating revenue and a profitable business model.

Traditionally, enterprise computing was a back-office function supporting departmental operations and IT. That back-office function is quickly transitioning off-site to Cloud Service Providers (CSPs). Data analytics and business intelligence, coupled with the emergence of mobility, social media, virtualization, and cloud, mean that enterprise computing is now critical to Global 2000 companies’ core business models and product offerings. “We’re witnessing the rise of the Big Data driven enterprise,” said Mark Schreiber, Cloudwick General Manager. “Leading Fortune 100 companies are investing heavily to reengineer their people, processes, and technology around Big Data to gain competitive advantage.”

As Intel states, “to compete, you must compute.” For example, consider how Wall Street now applies econometric computing to predictive behavioral analysis, or how the retail sector uses trend analysis to create personalized marketing. In any enterprise, Big Data is about deriving top-line value from data. It is not just Hadoop; it is new storage reshaping the data warehouse, along with innovative applications and technologies to compute, transport, manage, store, analyze, and utilize data. This is creating a dynamic technology and economic ecosystem that includes telecommunications companies, datacenters, networking via Software Defined Networking (SDN), applications, virtualization, and CSPs.

With Big Data and a convergence of HPC and virtualization, CSPs will become the new OEMs of enterprise computing, says Ben Woo, Managing Director of Neuralytix, a global IT market research and consulting firm. CSPs will be delivering Data-as-a-Service and Information-as-a-Service, as well as hosting Big Data-as-a-Service. These OEMs will include companies such as Amazon Web Services, Verizon, AT&T, Google, Rackspace, InfoChimps, and others providing technologies, services, and products.

The challenge with this emerging Big Data technology economy is identifying who will be the winners and losers, and with whom to partner. This will entail not only technical adeptness, but also marketplace ingenuity. Merely promoting products and features through old-school sales practices simply will not suffice. Adapting to the ecosystem, understanding your value within it, and creating alliances and partnerships wisely are essential. I have created and utilized an inventive sales model called “Encirclement” to reach and support key partners in the new economy. Ultimately, as in anything, capturing revenue is what will produce the winners in this Big Data game, and strategic partnering creates true value for all concerned.


Scott Pearson is currently a CLDS Industry Fellow at the San Diego Supercomputer Center, and is formerly the Director of Big Data Solutions at Brocade. He has been involved in the open source community for over 15 years. Before Brocade, Pearson served as a Director of Federal Sales at Linux Networx where, in collaboration with Sandia National Laboratories, he delivered the first-ever InfiniBand cluster, along with several other significant Top500 HPC systems.

