May 23, 2014

Datanami’s Leverage Big Data Summit Wraps Up

Dialog and networking were on the Datanami agenda this week as we kicked off our inaugural Leverage Big Data summit at the Park Hyatt Aviara Resort in Carlsbad, California.

While concerns last week centered on the wildfires raging in the surrounding area, concerns at this week’s summit were aimed at the data and infrastructure challenges that IT heads face as they transition their organizations toward the new data reality.

Kicking off the program was keynote speaker Jack Collins, Director of the Advanced Biomedical Computing Center at the Frederick National Laboratory for Cancer Research, who discussed the potential for the big data trend to revolutionize medicine (among other fields). Collins set the tone of the event by discussing the challenges that bioscientists are up against, highlighting some of the successes they’ve had with big data (including BioDBnet, which combines Hadoop and graph analytics to create an uber-biological database from various scattered resources).

Jack Collins, Director of the Advanced Biomedical Computing Center at the Frederick National Laboratory for Cancer Research

Collins also warned of another impending data flood that bioscientists face, noting that the rise of new imaging technologies and their various applications will create a virtual morass for bio-IT professionals to wade through in the near future. One of the examples he cited was Eulerian-based image processing, which he used to demonstrate how such technologies could detect heart rates in premature infants.
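
The Eulerian approach works by bandpass-filtering each pixel’s intensity over time at the frequencies where a pulse would appear and then reading (or amplifying) that band. A minimal sketch of the idea is below; the array shapes, frame rate, frequency band, and synthetic test clip are illustrative assumptions, not details from Collins’s demo.

```python
# Minimal sketch of Eulerian-style temporal filtering for pulse detection.
# Assumptions (not from the talk): a fixed camera, a clip loaded as a NumPy
# array of shape (frames, height, width), and a 30 fps capture rate.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(video, fps=30.0, low_hz=1.0, high_hz=3.0):
    """Bandpass every pixel's time series around heart-rate frequencies,
    then read the dominant frequency of the spatially averaged signal.
    (Full Eulerian video magnification also amplifies this band and adds
    it back to the frames so the pulse becomes visible.)"""
    nyquist = fps / 2.0
    b, a = butter(2, [low_hz / nyquist, high_hz / nyquist], btype="band")
    pulse_band = filtfilt(b, a, video, axis=0)   # filter along the time axis
    signal = pulse_band.mean(axis=(1, 2))        # average over all pixels
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    return freqs[np.argmax(spectrum)] * 60.0     # beats per minute

# Toy check: a faint 2.2 Hz (132 bpm) flicker buried in camera noise.
if __name__ == "__main__":
    fps = 30.0
    t = np.arange(300) / fps                     # 10 seconds of frames
    frames = 100 + 0.2 * np.sin(2 * np.pi * 2.2 * t)[:, None, None]
    video = frames + np.random.normal(0, 0.05, (300, 32, 32))
    print("Estimated heart rate: %.1f bpm" % estimate_heart_rate(video, fps))
```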

One of the themes emerging from the discussions on stage was frustration with the term “big data” as a catch-all for the disparate technologies that make up the trend. In a summit hall filled with professionals spanning the spectrum of industries and disciplines, there was much discussion over the term and what it means, and to whom.

“I serve private sectors varying from manufacturing to life sciences, IT, agriculture, and others – these sectors today disagree on what the definition is for big data,” explained Merle Giles, Director for Private Sector Programs and Economic Impact at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. “The most recent (definition) I heard, which I like actually, is that if the data is too big to move, that’s big data.”

Professor Kirk Borne of George Mason University

During his keynote, titled “From Astrophysics to Airlines – The Big Picture of Massive Data,” astrophysicist and Professor Kirk Borne of George Mason University unveiled his most recent definition of big data. “My definition of big data is just these four words: everything quantified and tracked,” explained Borne. “We have data that’s more difficult to handle than we did before. We now have this whole-population analytics – the end of demographics.”

“I don’t care about ‘big data,’” shot back Jack Levis, Senior Director of Process Management at United Parcel Service (UPS), during a panel discussion. “I really don’t. I care about the insight, I care about the impact. I talk to lots of organizations, and I see a lot of people chasing big data, but you got to think like the dog who is chasing the car, what are you going to do once you catch it? You’ve got to figure out how you’re going to use it – that’s where analytics comes in, that’s where operations research comes in, and that’s where process comes in… I care about insights, I care about ROI, and I care about results.”

“As people try to tackle the ‘hyperscale analytics’ problems – I hate calling it ‘big data,’ so I’m not going to – software engineers start thinking software and ‘what algorithm is going to do this,’” explained Ari Berman, Director of Government Services with BioTeam. “The hardware geeks in the datacenter are focused on how they are going to attack it with hardware. What needs to happen is the convergence of those two.”

Other themes discussed during the event included the talent gap that makes it difficult for organizations to leverage the data they’re collecting, as well as challenges with tools and processes that still need further research and development.

Nicole Hemsoth, HPCWire managing editor, moderates a panel at this week’s Leverage Big Data conference.

During the breakout boardroom sessions, discussions focused largely on the infrastructure challenges (both hardware and software) that individual attendees faced as they worked to build out complete analytic solutions for their own organizations. Attendees got the chance to examine and discuss the various reinventions and innovations happening at the infrastructure layer to address the data and processing tide that organizations are facing.

The show closed with an informal poll revealing that the audience largely felt the trend was advancing from the “peak of inflated expectations” into the “trough of disillusionment” on Gartner’s hype cycle.

View pictures from the event.

Datanami will have more stories coming from this inaugural event. In the meantime, if you are interested in participating in or attending the 2015 event, contact us by submitting an attendee interest form at the Leverage Big Data website.
