August 19, 2013

Rebuilding the Data Center One Block At A Time


With the advent of mobile, social business, and big data and analytics workloads, there has been a seismic shift in usage patterns and in the way they are handled. These emerging workloads are being optimized, self-managed and automated, to a certain degree, through new methods of cloud service delivery. Local computers no longer have to do all the heavy lifting; the network of computers that makes up the cloud handles workloads as well. But while cloud successfully breaks the tight connection between physical hardware and software, it does not completely address how workloads are managed and deployed.

Management of changing workload usage patterns needs to be supported by a next-generation, smarter IT infrastructure, one where systems are able to deploy, manage, monitor, scale and repair workloads automatically. However, most organizations are not ready to optimize a diverse set of workloads. According to the 2012 IBM Data Center Operational Efficiency study, only 1 in 5 clients have highly efficient IT infrastructures and are able to allocate more than 50% of their IT budget to new projects. That leaves considerable room to better align business needs, workload management and infrastructure design.

The hardware-driven, traditional data center as we know it is facing challenges. What is springing up in its place is a software-driven IT environment where infrastructure is delivered as a service and control of the data center is automated by software. While inevitable, the transformation to a software defined environment (SDE) will not happen overnight. To make data centers dynamic, intelligent and analytics-driven, organizations will have to walk the last mile of virtualization and make their data centers software-defined.

To develop a software defined environment, organizations will have to break down traditional data centers in a number of ways:

No More Silos in the Data Center

Most IT organizations still manage their data center resources in three silos: compute, storage and network. Going forward, data centers will have to optimize the entire computing infrastructure as one – compute, storage and network resources – so that it can quickly adapt to the type of work required. Workloads that are typically assigned to resources manually in a siloed environment will instead be delivered automatically to the best available resource, which could be an under-utilized server or a closer compute node.
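
A minimal sketch of what "best available resource" selection could look like when compute, storage and network are treated as one pool. The resource attributes and scoring weights below are illustrative assumptions, not any particular vendor's scheduler.

```python
# Hypothetical placement sketch: pick the best available resource for a workload
# by scoring spare capacity and data proximity across a single, unified pool.

resources = [
    {"name": "server-a", "cpu_free": 0.70, "mem_free": 0.60, "hops_to_data": 3},
    {"name": "server-b", "cpu_free": 0.20, "mem_free": 0.30, "hops_to_data": 1},
    {"name": "server-c", "cpu_free": 0.55, "mem_free": 0.50, "hops_to_data": 1},
]

def placement_score(resource, locality_weight=0.4):
    """Favor under-utilized servers, but reward closeness to the workload's data."""
    spare = (resource["cpu_free"] + resource["mem_free"]) / 2
    locality = 1.0 / (1 + resource["hops_to_data"])
    return (1 - locality_weight) * spare + locality_weight * locality

best = max(resources, key=placement_score)
print(f"Place workload on {best['name']}")  # server-c balances spare capacity and proximity
```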

Workloads are deployed based on “patterns of expertise,” that is, pre-defined configurations of applications. The centralized control system automatically defines virtual resources with the required configuration and capacity (no guessing involved), software maps those virtual resources to the workload, and the workload is deployed. Continuous, dynamic optimization and reconfiguration then tune the infrastructure to respond to changing demand. Underlying all of this are policy-based checks and updates to ensure security and compliance.
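
The sketch below illustrates deployment from a “pattern of expertise”: a declarative description of the application from which virtual resources are derived, gated by a policy check before anything is provisioned. The pattern format and field names are assumptions for illustration, not IBM's actual pattern definition.

```python
# Hypothetical "pattern of expertise": a declarative description of an application,
# from which the control system derives virtual resources -- no manual sizing.

web_app_pattern = {
    "name": "three-tier-web",
    "tiers": [
        {"role": "web", "instances": 4, "vcpus": 2, "mem_gb": 4},
        {"role": "app", "instances": 2, "vcpus": 4, "mem_gb": 8},
        {"role": "db",  "instances": 1, "vcpus": 8, "mem_gb": 32, "storage_gb": 500},
    ],
    "policies": {"encrypt_storage": True, "isolate_network": True},
}

def compliant(pattern):
    """Policy-based check before deployment (security/compliance gate)."""
    return pattern["policies"].get("encrypt_storage", False)

def to_virtual_resources(pattern):
    """Map the pattern to concrete virtual-resource requests."""
    for tier in pattern["tiers"]:
        for i in range(tier["instances"]):
            yield {"vm": f"{pattern['name']}-{tier['role']}-{i}",
                   "vcpus": tier["vcpus"], "mem_gb": tier["mem_gb"]}

if compliant(web_app_pattern):
    for request in to_virtual_resources(web_app_pattern):
        print("provision", request)  # each request is handed to the placement logic above
```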

To be most effective, an organization has to be workload aware, meaning it uses its application expertise to take into account the unique requirements of different application types, ranging from Web 2.0 services, to Big Data Hadoop workloads, to traditional 3-tier applications.
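
One way to picture workload awareness is a table of per-application-type requirements that the placement logic consults. The workload categories follow the article; the specific attributes and hints are illustrative assumptions.

```python
# Illustrative requirements per workload type: the scheduler reads these instead of
# treating every workload as a generic VM request.

workload_profiles = {
    "web20_service":  {"latency_sensitive": True,  "scale_out": True,  "data_locality": False},
    "hadoop_batch":   {"latency_sensitive": False, "scale_out": True,  "data_locality": True},
    "three_tier_app": {"latency_sensitive": True,  "scale_out": False, "data_locality": False},
}

def placement_hints(workload_type):
    profile = workload_profiles[workload_type]
    hints = []
    if profile["data_locality"]:
        hints.append("co-locate compute with the data blocks")
    if profile["latency_sensitive"]:
        hints.append("prefer lightly loaded hosts and short network paths")
    if profile["scale_out"]:
        hints.append("spread instances across failure domains")
    return hints

print(placement_hints("hadoop_batch"))
```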

Open Architecture

According to the 2012 IBM CEO study, ninety percent of CIOs view cloud as critical to their plans. Organizations will increasingly adopt hybrid cloud environments and will demand a data center architecture where public and private clouds can be seamlessly integrated. IT infrastructure will need to be built on open ecosystems to enable management interoperability across different vendor solutions via API extensions. This is why IBM is building its software defined solution on OpenStack and will support heterogeneous compute (KVM, Hyper-V, ESX, PowerVM), network (OpenFlow, Cisco, Juniper) and storage (IBM, NetApp, EMC) environments.
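
To make the interoperability point concrete, here is a small adapter sketch in which each hypervisor sits behind a common interface, so the control layer does not change when the vendor does; network and storage back ends would follow the same shape. The class and method names are assumptions for illustration, not OpenStack's actual driver API.

```python
# Hedged sketch of management interoperability: one control-plane interface,
# multiple vendor-specific adapters behind it.

from abc import ABC, abstractmethod

class ComputeDriver(ABC):
    @abstractmethod
    def start_vm(self, name: str, vcpus: int, mem_gb: int) -> str: ...

class KvmDriver(ComputeDriver):
    def start_vm(self, name, vcpus, mem_gb):
        return f"[kvm] started {name} ({vcpus} vCPU, {mem_gb} GB)"

class HyperVDriver(ComputeDriver):
    def start_vm(self, name, vcpus, mem_gb):
        return f"[hyper-v] started {name} ({vcpus} vCPU, {mem_gb} GB)"

class EsxDriver(ComputeDriver):
    def start_vm(self, name, vcpus, mem_gb):
        return f"[esx] started {name} ({vcpus} vCPU, {mem_gb} GB)"

def deploy(driver: ComputeDriver, name, vcpus, mem_gb):
    # The control plane never needs to know which vendor it is talking to.
    print(driver.start_vm(name, vcpus, mem_gb))

for driver in (KvmDriver(), HyperVDriver(), EsxDriver()):
    deploy(driver, "web-01", vcpus=2, mem_gb=4)
```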

Integration of Skill Sets

A software defined environment will require not only a technological shift, but also a cultural shift within organizations. It will bring down the wall that exists between developers and IT operations, which will reduce the time lag between application design and production. SDE will allow application developers to have more freedom in managing the application life-cycle and understanding workloads. DevOps teams that are more aware of business needs will control data centers and bring about more efficiency and agility within businesses.

Bring Service Discipline to Data Centers

Development and management of software defined environments require a services discipline: standard processes, procedures, roles and responsibilities. A software defined infrastructure is built top-down, block by block, based on a layered architecture. Business needs and workloads are captured in software patterns, which in turn are mapped to infrastructure patterns that then define the hardware stack. The entire IT environment is also dynamically configured by analytics-based optimization, security and service-level policies. Maintaining and managing such a layered infrastructure will require discipline, and key operational and process activities will have to be standardized.
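
A compressed, hypothetical view of that top-down layering, with each layer derived from the one above. The names and mappings are illustrative assumptions rather than a prescribed stack.

```python
# Illustrative top-down layering: business need -> software pattern ->
# infrastructure pattern -> hardware stack.

business_need = {"workload": "online retail", "peak_users": 50_000, "availability": "99.99%"}

def software_pattern(need):
    return {"pattern": "three-tier-web", "autoscale": need["peak_users"] > 10_000}

def infrastructure_pattern(sw):
    return {"compute": "scale-out x86 pool" if sw["autoscale"] else "fixed cluster",
            "storage": "replicated block storage",
            "network": "software-defined overlay"}

def hardware_stack(infra):
    return [infra["compute"], infra["storage"], infra["network"]]

print(hardware_stack(infrastructure_pattern(software_pattern(business_need))))
```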

Software defined environments, if built on the right resources with the right business input, can have tremendous business impact. A workload-driven SDE can manage massive scale, unpredictable transactions and increasing complexity without disrupting the whole operation. This can make businesses more agile and able to take advantage of unanticipated market demand, giving them a considerable edge over the competition.

About the Author

Scott Firth is Director, Marketing - Software Defined Environment in IBM’s Systems and Technology Group. In this role, Mr. Firth leads a team responsible for demonstrating how IBM’s IT infrastructure software and solutions help clients create and deliver business value. He has been a recognized expert in virtualization since starting with IBM as an engineer designing virtualized systems over thirty years ago. Since then he has held a wide variety of IBM sales, services, marketing and technical leadership positions across the USA and Europe.

