October 18, 2012

This Week’s Big Data Big Five

Datanami Staff

We’re gearing up for an exciting week just around the bend as Strata + Hadoop World kicks off on Tuesday. We’ll be live on site with a wide range of interviews featuring nearly every company and thought leader you’ve seen in our weekly top five stories over the last year.

Some vendors are getting a leg up on the competition by announcing their updates and roll-outs ahead of the crowd, so let’s dive in with Hortonworks, which actually has two items on the list this week. We also touch on some recent analyst estimates about the ecosystem around big data and take a look inside the developments at an analytics startup we’re keeping a close eye on.

Panasas Partners with Hortonworks

Panasas, Inc. announced a technology partnership with Hortonworks. As part of the agreement, Panasas will become a member of the Hortonworks Technology Partner Program, which was formed to accelerate the growth of a vibrant Apache Hadoop ecosystem through technology collaboration, technical support, joint testing and evangelism.

“Panasas is eager to work with Hortonworks to develop technologies that will allow the rapid adoption of Apache Hadoop in technical computing markets,” said Barbara Murphy, chief marketing officer at Panasas.

The two companies will work together to ensure compatibility between the Hortonworks Data Platform and Panasas parallel storage products, and to advance future technologies for moving data between dedicated Hadoop clusters and a storage appliance like Panasas ActiveStor.
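
To make the data-movement idea concrete, here is a minimal Java sketch (ours, not from the announcement) that copies a results directory from HDFS to an appliance volume mounted on the local file system, using Hadoop’s standard FileSystem API. The namenode address and the /mnt/activestor mount point are hypothetical.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class HdfsToAppliance {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Source: the cluster's HDFS (namenode address is a placeholder)
            FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
            // Destination: an appliance volume mounted on the local file system
            // (the /mnt/activestor mount point is hypothetical)
            FileSystem local = FileSystem.getLocal(conf);
            // Copy a results directory off the cluster; 'false' keeps the source in HDFS
            FileUtil.copy(hdfs, new Path("/results/output"),
                          local, new Path("/mnt/activestor/archive/output"),
                          false, conf);
        }
    }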

“The rapid adoption of Hadoop in technical computing environments is driving the need for greater interoperability between various storage platforms, file systems and Hadoop applications,” said Mitch Ferguson, vice president of business development at Hortonworks.


Big Data to Drive $28 Billion in IT Spending in 2012

Big data will drive $28 billion of worldwide IT spending in 2012, according to Gartner, Inc. In 2013, big data is forecast to drive $34 billion of IT spending.

Most of the current spending goes toward adapting traditional solutions to big data demands (machine data, social data, widely varied data, unpredictable velocity, and so on); only $4.3 billion in software sales will be driven directly by demands for new big data functionality in 2012.

Big data currently has the most significant impact in social network analysis and content analytics, with 45 percent of new spending each year. In traditional IT supplier markets, application infrastructure and middleware is most affected (10 percent of new spending each year is influenced by big data in some way) when compared with storage software, database management systems, data integration/quality, business intelligence or supply chain management (SCM).

“Despite the hype, big data is not a distinct, stand-alone market, but it represents an industrywide market force which must be addressed in products, practices and solution delivery,” said Mark Beyer, research vice president at Gartner. “In 2011, big data formed a new driver in almost every category of IT spending. However, through 2018, big data requirements will gradually evolve from differentiation to ‘table stakes’ in information management practices and technology.”

Big data opportunities emerged when several advances in different IT categories aligned in a short period at the end of the last decade, creating a dramatic increase in computing technology capacity. This new capacity, coupled with latent demands for analysis of “dark data,” social networks data and operational technology (or machine data), created an environment highly conducive to rapid innovation.

Starting near the end of 2015, Gartner expects leading organizations to begin using their big data experience in an almost embedded form in their architectures and practices. Beginning in 2018, big data solutions will offer progressively less of a distinct advantage over traditional solutions that have incorporated new features and functions to support greater agility in addressing volume, variety and velocity. However, the skills, practices and tools currently viewed as big data solutions will persist, as leading organizations will have incorporated the design principles and acquired the skills necessary to address big data concerns as a matter of routine.

“Big data will once again become ‘just data’ by 2020, and architectural approaches, infrastructure and hardware/software that do not adapt to this ‘new normal’ will be retired,” said Beyer. “Organizations resisting this change will suffer severe economic impacts.”


Alpine Data Labs Develops Hadoop-Powered Predictive Analytics

Alpine Data Labs announced the release of Alpine 2.8. With this release, users can perform analytics on combined data from Hadoop and relational databases directly from a web browser, with no additional investment in hardware or infrastructure.

Alpine makes the work of exploring and preparing data more iterative, and makes the product as easy to use as opening a web browser. Besides making predictive analytics more accessible, Alpine increases collaboration, enabling everyone from data engineers to business analysts to share workflows and analyses.

“What’s truly exciting about this release is not just that we’ve figured out how to leverage a framework that until now has remained too difficult to master, but that we’re delivering it in a manner that turns a web browser into an analytics playground,” said Steven Hillion, chief product officer, Alpine Data Labs.

Alpine 2.8 gives users the power of end-to-end analytics without typing a single line of code. 

“What Alpine has done for big databases like Greenplum and Netezza, they’ve now done for Hadoop. They’ve put big data predictive analytics in the hands of business users, empowering them to uncover competitive insights without the need for specialized statistical and coding skills,” said Hugo Evans, Chief Information Officer for A.T. Kearney Procurement & Analytic Solutions. 


SAP Blasts HANA at Cloud Scale

This week SAP rolled out enhancements to its HANA platform, including the introduction of the SAP HANA One platform, a deployment of SAP HANA certified for production use on the Amazon Web Services (AWS) Cloud and immediately available on AWS Marketplace.

SAP also announced one of the world’s largest in-memory database systems with the ability to process 1 petabyte of raw uncompressed data. Additionally, the company has embedded application server capabilities in SAP HANA for developers and launched the “SAP HANA Academy” to enable self-learning at scale.

SAP is introducing SAP HANA One, a deployment option for the SAP HANA platform available for production use on the elastic AWS Cloud. SAP HANA One is provisioned by AWS on advanced hardware with up to 60 GB of RAM per instance.

Developers can go directly to AWS Marketplace, the online store for software that runs on the AWS Cloud, to find, buy, provision and instantly begin using SAP HANA One. Pricing is US$0.99 per hour for the SAP software. SAP plans to make additional applications available on AWS Marketplace.
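
For rough context (our arithmetic, not a SAP figure): at US$0.99 per hour, the SAP software fee for a continuously running instance works out to roughly US$713 per month (0.99 × 24 hours × 30 days ≈ 712.80), not counting the underlying AWS instance charges.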


Cirro Partners with Hortonworks

This week Cirro announced a technology partnership with Hortonworks. Apache Hadoop has become a widely recognized platform for storing, managing, processing and analyzing large volumes of data.

The partnership will focus on collaborative engineering and go-to-market activities that further promote the Hortonworks Data Platform for enterprise-class Hadoop data processing and Cirro’s solutions for on-the-fly access and integration of Hadoop with other enterprise data sources.

“Customers are overwhelmed with the onslaught of Big Data and agree a fundamentally new approach to data access and exploration is required as early market attempts for big data analytics have been met with a multitude of challenges, including complexity and enterprise scalability,” said Mark Theissen, CEO, Cirro.

Now more than ever, organizations need easy access to the volume and variety of data that inevitably exists in their distributed data silos yet generally remains unavailable for analysis. For Hadoop to be a mainstream, enterprise-class solution, customers must be able to integrate data in Hadoop or HBase easily with other data sources (e.g., Teradata, SQL Server, Oracle, Greenplum) for self-service mash-ups and analysis. Furthermore, customers need to be able to do this with the business intelligence and data visualization tools they are already using today.
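
To make the integration point concrete, here is a minimal hand-rolled sketch of the kind of cross-source mash-up Cirro aims to automate. This is not Cirro’s API; it is plain JDBC against the standard Hive and SQL Server drivers, and the hosts, credentials and table names are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CrossSourceMashup {
        public static void main(String[] args) throws Exception {
            // Load the JDBC drivers (assumed to be on the classpath)
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");

            // Hadoop side: Hive over JDBC (host and credentials are hypothetical)
            Connection hive = DriverManager.getConnection(
                "jdbc:hive2://hadoop-edge:10000/default", "analyst", "");
            // Warehouse side: SQL Server over JDBC (host and credentials are hypothetical)
            Connection dw = DriverManager.getConnection(
                "jdbc:sqlserver://dw-host:1433;databaseName=sales", "analyst", "secret");

            // Aggregate clickstream events in Hadoop...
            ResultSet clicks = hive.createStatement().executeQuery(
                "SELECT customer_id, COUNT(*) AS clicks FROM weblogs GROUP BY customer_id");

            // ...then enrich each row with customer records from the warehouse.
            PreparedStatement lookup = dw.prepareStatement(
                "SELECT name, region FROM customers WHERE id = ?");
            while (clicks.next()) {
                lookup.setLong(1, clicks.getLong("customer_id"));
                ResultSet cust = lookup.executeQuery();
                if (cust.next()) {
                    System.out.printf("%s (%s): %d clicks%n",
                        cust.getString("name"), cust.getString("region"),
                        clicks.getLong("clicks"));
                }
                cust.close();
            }
            hive.close();
            dw.close();
        }
    }

The pitch behind a federation layer like Cirro’s is that it plans and executes this kind of cross-source join on the fly, so analysts never write the plumbing by hand.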

With Cirro, customers with no Hadoop expertise can use Microsoft Excel 2010 or their favorite business intelligence or data visualization tools to access any type of data, on any platform, in any environment.

“Cirro’s value proposition of simplifying data exploration using the Hortonworks Data Platform, as well as providing integration to other enterprise data sources, is extremely beneficial for organizations looking to achieve success with their Big Data initiatives,” said Mitch Ferguson, vice president at Hortonworks.
