Today supercomputer maker Cray expanded on its previous assertions that big data would soon become an even more prominent part of its business model, introducing a new division dedicated exclusively to it.
While the name might not be easy on the ears when spoken aloud, Cray expects that this new subset, called (ahem) YarcData, will be music to those in traditional HPC verticals (life sciences, government and financial services) in need of rapid handling of massive datasets.
They also expect that the eventual fruits of the YarcData labor will draw new markets into the HPC fray, but as of now it’s hard to gauge which verticals would be most attracted to the supercomputing scent.
What’s most important about the company’s big data focus, Cray suggests, is that the world recognizes the Seattle company is not a one-trick pony when it comes to big systems. While many in the HPC space are well aware of that, and recognize that Cray is behind many of the top “off the record Top500” systems in the world, the fact is that to the outside world, the name Cray still conjures images of monolithic, out-of-reach supercomputers.
As the company’s head and the new chief in charge of running the YarcData mission noted during a conversation with us this morning, they are focusing on creating purpose-built solutions with high-end hardware and the software to back them. This goes well beyond HPC, and it puts Cray, arguably for the first time in its extensive history, within slightly closer reach of the mainstream enterprise computing crowd. In theory, at least.
Cray CEO Peter Ungaro and the key man in today's YarcData news, Arvind Parthasarathi, a fresh convert to the HPC side of the fence from Informatica, provided us with a pretty detailed sense of just how big of a deal this big data focus is for Cray.
Prior to joining Cray, Parthasarathi helped oversee the Informatica business unit that focuses on many verticals familiar to HPC vendors. In his role as a VP within the Master Data Management group, he addressed the big data needs of enterprise life sciences, financial services, retail, manufacturing, healthcare and government organizations; he also has some Oracle and general data management and integration credentials under his belt.
During our chat this morning, Parthasarathi made it clear that moving into a hardware-focused company is a big change for him, but Ungaro quickly picked up on this point, noting that there are some significant developments coming in software—some that go far beyond mere Hadoop or commodity cluster solutions that come pre-packaged with a one-size-fits-all big data stack.
With Parthasarathi at the helm, Ungaro says this will help the supercomputing company push off from its position in ultra high-end gear, propelling it in new directions on both the hardware and software fronts. This goes far beyond slapping Hadoop on a cluster and carting it out for mass consumption, he said (presumably as an answer to how Oracle and SGI are doing the HPC/big data dance).
The big question we wanted to get to the heart of was more specific: just which of those supercomputing technologies were going to be wrapped into the big data push—or was there something entirely new and unseen happening, say, up in Chippewa Falls, WI, where there is little to do from October until May but keep warm with some extensive R&D?
While Ungaro kept mum about the actual products that will roll out of the YarcData camp, there will be news around the corner in the first half of this year. Both execs continually stressed that for now, the idea of products for big data is a concept tied to customized, tailored solutions for particular verticals or, presumably, applications. As Parthasarathi said, adoption of any coming products “has to be tied to the solutions; we don’t want to come up with a simple ‘box’ and send that out to everyone who needs this solution the same—we don’t want the customers to find out how that neat box fits into their needs, it needs to be the other way around.”
It’s worth noting that Ungaro calls this a “very big” part of Cray’s future as he sees it. The focus on big data analytics, set to begin with the life sciences vertical in particular, will be the subject of “extensive” funding—not just in R&D, but in sales, marketing and all tech-tangential parts of the $145 million company.
“Cray is best known for building supercomputers that can run massive scientific and engineering simulations, and from that work we have developed unique technologies and amassed significant experience working with some of the largest data-intensive environments in the world,” said Ungaro. “This makes our entry into the big data market a natural evolution. I am extremely excited to have a proven executive like Arvind helping us bring our leading supercomputing technology to enterprise customers that are trying to gain insight and harness value from the explosion of data happening in their businesses today.”
To get the blood boiling, we asked Ungaro and Parthasarathi whether it was out of the question to expect a big announcement in 2012 featuring a giant web-based retailer using supercomputing technology to handle big customer data. Would that be the real proof, for the still-sizable majority that believes otherwise, that HPC is not permanently relegated to the “untouchable” technology camp—reserved only for governments, big banks and universities? This is not an entirely off-center question, since the Cray mothership is just around the bend from retail giants like Amazon, among others.
While that query got some optimistic laughter, just how far off might such a use case be? Are we nearing the age of built-to-order supers powering “run of the mill” commercial sites, and if so, would we still feel comfortable calling them “supercomputers,” or would it require a more general moniker?
And on that note, it had to be asked: why not just carry the Cray name? After all, few HPC companies have that kind of brand legacy and, well, let’s face it, iconic status. Why not wield it like the weapon it could be? While spelling Cray backwards and putting a “Data” on the bumper will work, it seems that Cray might not be maximizing its legacy as a high-end technology vendor, choosing instead to take the branding approach of a no-name startup.
Although Ungaro and Parthasarathi wouldn’t say anything beyond the fact that YarcData is its own division in need of its own demarcation, this is a rather notable approach—akin to Kraft, the company every red-blooded American equates with the finest orange-sauced pasta, making a new, more wonderful type of macaroni and cheese but putting it out under a brand name no one has heard of. Why do it when you’ve got brand power few dream of? All mac parallels aside, it will be interesting to see how this internal division develops as a mini company—and to see if anyone ever calls it just YarcData without the obligatory “a division of Cray.”
Hopefully we’ll catch up again for a video interview with Ungaro and his team soon—perhaps at ISC in Germany in June. On that topic, we wanted to point to a video interview I did at SC11, where Ungaro hints at the big data play the company had in the works well before this announcement.