December 11, 2012

Uber-Cloud Project Floats Massive Data

Wolfgang Gentzsch

The aim of the Uber-Cloud Experiment is to explore the end-to-end process of accessing remote resources in Data Centers (CILEA, FCSCL, FutureGrid, HSR, and SDSC) and in HPC Clouds (Amazon, Bull, Nimbix, Penguin, SGI, and TotalCAE), and to study and overcome the potential roadblocks.

The Experiment kicked off in July 2012 and brought together four categories of participants: the industry end-users with their applications, the software providers, the computing and storage resource providers, and the experts.

After a fast-paced three months, Round 1 of the Uber-Cloud Experiment concluded last month with more than 160 participating organizations and individuals from 25 countries, working together in 25 international teams. In this summary, Wolfgang Gentzsch and Burak Yenier present the main findings, challenges, and lessons learned. Round 2 is now open to new participants in the areas of HPC, CAE, Bio, and Big Data (the latter could be Hadoop-based, for example).

End users can achieve many benefits by gaining access to compute resources beyond their current internal ones (e.g., workstations); arguably the most important are:

•    the agility gained by speeding up product design cycles through shorter simulation run times;
•    the superior quality achieved by simulating more sophisticated geometries or physics, or by running many more iterations in search of the best product design.

During the three months of the experiment, we built 25 teams, each with a project proposed by an end user. The teams included: Team Anchor Bolt, Team Resonance, Team Radiofrequency, Team Supersonic, Team Liquid-Gas, Team Wing-Flow, Team Ship-Hull, Team Cement-Flows, Team Sprinkler, Team Space Capsule, Team Car Acoustics, Team Dosimetry, Team Weathermen, Team Wind Turbine, Team Combustion, Team Blood Flow, Team Turbo-Machinery, Team Gas Bubbles, Team Side Impact, Team ColombiaBio, and Team Cellphone. The final report, available to all registered participants, contains use cases from many of the teams, offering valuable insight in their own words. We look forward to future rounds of the experiment, where this accumulating knowledge will yield ever more successful projects.

Roadblocks and lessons learned

Our teams reported the following main roadblocks and described how they resolved them (or did not):

-    security and privacy: guarding the raw data, processing models, and results;
-    unpredictable costs, which complicate securing a budget for a given project;
-    lack of easy, intuitive self-service registration and administration;
-    incompatible software licensing models that hinder adoption of Computing-as-a-Service;
-    high expectations that can lead to disappointing results;
-    lack of reliability and availability of resources, which can cause long delays;
-    and more.

Recommendations from the teams for circumventing these roadblocks include the following. The end user should start by clearly documenting security and privacy requirements at the beginning of the project.

Automated, policy-driven monitoring of usage and billing is essential to keep costs under control. To speed up resource allocation, we recommend that resource providers consider setting up queues tailored to specific end-user needs and assigning the queue during the registration process.
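To make the cost-monitoring recommendation concrete, here is a minimal sketch of a policy check that a provider or end user might run against accumulated usage. The function name, rate, and thresholds are illustrative assumptions, not part of any real provider's API.

```python
# Hypothetical policy-driven budget check: names, rates, and thresholds
# are illustrative assumptions, not drawn from any real provider API.

def check_budget(used_core_hours, rate_per_core_hour, budget,
                 warn_fraction=0.8):
    """Return 'ok', 'warn', or 'stop' based on spend against budget."""
    spend = used_core_hours * rate_per_core_hour
    if spend >= budget:
        return "stop"   # policy: halt new job submissions
    if spend >= warn_fraction * budget:
        return "warn"   # policy: alert the end user before an overrun
    return "ok"

# Example: 900 core-hours at $0.10/hour against a $100 budget (90% used)
print(check_budget(900, 0.10, 100))  # warn
```

In practice such a check would run automatically against the provider's metering data, so that a project approaching its budget triggers an alert rather than a surprise invoice.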

Resource providers could also develop self-service knowledge-base tools that increase the efficiency of their support processes. As for incompatible software licensing, some forward-looking ISVs already offer successful on-demand licensing models from which, we believe, others can learn.

To set the right level of expectations, define goals that are incrementally better than the current capabilities of your organization, technology infrastructure, and processes. Finally, selecting a reliable resource provider with adequate available resources is paramount. The final report explains each of these recommendations in detail.

We hope that our participants will extract value from the Experiment and the final report; they certainly deserve to, in return for their generous contributions, support, and participation. We now look forward to Round 2 of the Experiment, which already has over 250 participants, and to the learning it will produce. To participate in Round 2 or simply monitor it closely, and to receive the final Round 1 report, register at http://hpcexperiment.com

Datanami