November 11, 2013

Everybody’s a Software Company Now, Pivotal Says

Alex Woodie

The big data revolution is forcing companies of all stripes to rethink how they operate, according to executives with Hadoop distributor Pivotal. Even if you’re a lowly widget maker, you should be aware that the competitive bar just rose another notch.

“In order to compete in the 21st century, everybody needs to be a software company,” Pivotal’s vice president of data platform product management Josh Klahr said at the recent Strata + Hadoop World conference. “Not just the Googles and Facebooks and Yahoos of the world. Not just the enterprise software companies. But even if you’re manufacturing widgets, if you’re creating Frito-Lays, if you’re Wal-Mart–to really win, you have to be a software company. It’s a fundamental premise to drive success in the 21st century.”

Klahr acknowledged he was borrowing a phrase and an idea from Marc Andreessen, who co-founded Netscape and later the venture capital firm Andreessen Horowitz. But it bears repeating.

“What’s happening that’s enabling this?” Klahr asked, before paraphrasing Paul Maritz, who in April was named CEO of Pivotal, the Hadoop venture funded by General Electric, EMC, and VMware: “History teaches us that when the data fabrics change, just about everything else in our industry changes.”

Pivotal’s vice president of data platform product management Josh Klahr

The data fabric changed 50 years ago, when the IBM mainframe took process automation out of the manual world. “It created an entire industry. IBM benefited greatly,” he said. The next great ruffling of the fabric occurred in the early 1980s with the emergence of the relational database management system, which led to the enterprise applications, such as ERP, that are so prevalent today.

“We’re now experiencing that next change in the data fabric,” Klahr said. “Hadoop is a new emerging data fabric that’s at the center of a whole bunch of capabilities that enable companies like GE to build data-driven applications.”

Pivotal is locked onto that concept of a “data-driven company” and a “predictive enterprise,” which is a twist on Cloudera’s vision of turning Hadoop into an “Enterprise Data Hub.” The two companies are espousing essentially the same thing, but Pivotal expresses it in slightly different ways, with a heavier focus on the technical nuts and bolts needed to surface Hadoop’s data and algorithms in applications that actually help run the business.

Annika Jimenez, Pivotal’s global head of data science services, expressed a similar view during her conversation with O’Reilly Media’s vice president Mike Hendrickson at Strata + Hadoop World. “If you stand back and look at the state of the big data industry, we’re the ones out there saying it’s not enough to just do data–to look at data and build the platform to process and consume data. You actually have to do something with the insights that come out of that,” she said.

“In essence we’ve been spending the past eight months reconciling a lot of disparate technologies coming together toward the vision that drove the creation of Pivotal to begin with, which was really bringing together cloud technologies, data technologies, and app technologies,” she said. “Those technologies are coming in from the VMware side of EMC. They’re things like Spring, GemFire, RabbitMQ: a whole slew of things that are all about application enablement.”

During the show, both Pivotal and Cloudera announced that their Hadoop distributions have been certified to work with Spring XD. The goal with Spring is to provide an abstraction layer that presents all the various Hadoop distributions and projects (such as Hive and Pig) to Java programmers in a consistent manner. The GemFire complex event processing technology is also critical to Pivotal’s strategy to help organizations become predictive.
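To make the abstraction idea concrete, here is a minimal sketch of the template-style API the Spring Hadoop family offers to Java programmers, assuming Spring for Apache Hadoop’s HiveTemplate (a sibling project of Spring XD); the context file name, table, and query are hypothetical, and the point is only that application code sees one consistent entry point regardless of which certified distribution sits underneath.

    import java.util.List;

    import org.springframework.context.support.ClassPathXmlApplicationContext;
    import org.springframework.data.hadoop.hive.HiveTemplate;

    public class HiveAbstractionSketch {
        public static void main(String[] args) {
            // Hypothetical Spring XML context that wires up the Hadoop
            // configuration and a HiveTemplate bean for the cluster in use.
            ClassPathXmlApplicationContext ctx =
                    new ClassPathXmlApplicationContext("hadoop-context.xml");
            try {
                // The template hides which distribution and Hive client sit
                // underneath; calling code works against one consistent API.
                HiveTemplate hive = ctx.getBean(HiveTemplate.class);

                // Hypothetical HiveQL query; query() returns each result row
                // as a String.
                List<String> rows = hive.query(
                        "SELECT widget_id, defect_rate FROM widget_telemetry LIMIT 10");
                for (String row : rows) {
                    System.out.println(row);
                }
            } finally {
                ctx.close();
            }
        }
    }

In a setup like this, pointing the application at a different certified distribution is, in principle, a change to the XML configuration rather than to the Java code, which is the kind of consistency the certification announcements are aimed at.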

The lowly widget makers are actually in a strong position in the “everybody’s a software company” world, Jimenez argued. “A lot of the more old school sectors are almost in a position to leapfrog into the Pivotal vision,” she said. “You have these very traditional manufacturing companies that create widgets and machinery, and now they suddenly can instrument these machines, and you can get data capture off the physical object in a very interesting and exciting way … that stands to immediately enable them to leapfrog into predictive capabilities.”

While big data gives the lowly widget makers a potentially game-changing technology, they face as much of an uphill climb as organizations in other industries. Most organizations, according to Jimenez, are simply paying lip service to the idea of being a “data-driven” organization. “It’s not just about putting your data” in Hadoop, she said. “It’s actually about what you do after you put your data in Hadoop.”

Getting Hadoop installed is the easy part, but actually becoming a data-driven organization requires a lot of hard work. “It’s not said lightly,” she said. “This challenge for companies to become predictive enterprises is all about asking them to look internally and say, ‘Are you organizationally ready to be a predictive enterprise? Are you figuring out where your data miners are, your statisticians, and are you elevating them so they have much more visibility into your work? Are you connecting them to your IT data platform owners, and are you making their work impactful?’

“And if that’s not happening–if there’s this feeling of residual angst about the value coming off of their analytics efforts, then something needs to be done about it. And it’s usually not just a technology play. There’s usually a lot more to it,” she said.

Related Items:

OLTP Clearly in Hadoop’s Future, Cutting Says

GE Sees Low Hanging Fruit in Big Data for Utilities

Pivotal Launches With $105m Investment From GE
