August 19, 2013

Rebuilding the Data Center One Block At A Time

Scott Firth

With the advent of mobile, social business, and big data and analytics workloads, there has been a seismic shift in usage patterns and in the way those workloads are handled. These emerging workloads are being optimized, self-managed and automated to a degree through new methods of cloud service delivery. Local computers no longer have to do all the heavy lifting; the network of computers that makes up the cloud handles workloads as well. But while cloud successfully breaks the tight coupling between physical hardware and software, it does not completely address how workloads are managed and deployed.

Managing these changing workload usage patterns requires a next-generation, smarter IT infrastructure, one in which systems can deploy, manage, monitor, scale and repair workloads automatically. Most organizations, however, are not yet ready to optimize a diverse set of workloads. According to the 2012 IBM Data Center Operational Efficiency study, only one in five clients has a highly efficient IT infrastructure and is able to allocate more than 50 percent of its IT budget to new projects. That leaves considerable room to better align business needs, workload management and infrastructure design.

The hardware-driven, traditional data center as we know it is facing challenges. Springing up in its place is a software-driven IT environment in which infrastructure is delivered as a service and control of the data center is automated by software. While inevitable, the transformation to a software defined environment (SDE) will not happen overnight. To make data centers dynamic, intelligent and analytics-driven, organizations will have to walk the last mile in virtualization and make their data centers software-defined.

To develop a software defined environment, organizations will have to break down traditional data centers in a number of ways:

No More Silos in the Data Center

Most IT organizations still manage their data center resources in three silos: compute, storage and network. Going forward, data centers will have to optimize the entire computing infrastructure as one – compute, storage and network resources – so that it can quickly adapt to the type of work required. Workloads that are typically assigned to resources manually in a siloed environment will instead be delivered automatically to the best available resource, which could be an under-utilized server or a closer compute node.
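
As a rough illustration, that placement decision can be thought of as a simple ranking over a unified pool of resources. The Python sketch below is hypothetical (the node names, utilization figures and tie-breaking rule are invented for the example, not IBM's actual scheduler):

    from dataclasses import dataclass

    @dataclass
    class ComputeNode:
        name: str
        cpu_utilization: float      # 0.0 (idle) to 1.0 (saturated)
        network_hops_to_data: int   # proximity to the workload's data

    def place_workload(nodes):
        """Prefer under-utilized nodes; break ties by proximity to the data."""
        return min(nodes, key=lambda n: (n.cpu_utilization, n.network_hops_to_data))

    nodes = [
        ComputeNode("rack1-node3", cpu_utilization=0.85, network_hops_to_data=1),
        ComputeNode("rack4-node7", cpu_utilization=0.20, network_hops_to_data=3),
    ]
    print(place_workload(nodes).name)  # -> rack4-node7, the under-utilized server

In a siloed environment this choice would be made by hand, once per silo; here a single controller ranks compute, storage and network together.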

Workloads are deployed based on “patterns of expertise,” or pre-defined configurations of applications. The centralized control system automatically defines virtual resources with the required configuration and capacity (no guesswork involved), software maps those virtual resources to the workload and deploys it, and continuous, dynamic optimization and reconfiguration tune the infrastructure as demand changes. Underlying all of this are policy-based checks and updates to ensure security and compliance.
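
A minimal sketch of what such a pattern might look like, expressed as plain Python (the pattern format, field names and policy check are hypothetical, not an actual IBM pattern definition):

    # A hypothetical "pattern of expertise": a pre-defined configuration
    # that the control system turns into virtual resources.
    web_app_pattern = {
        "name": "three-tier-web-app",
        "tiers": [
            {"role": "web", "vcpus": 2, "memory_gb": 4,  "replicas": 3},
            {"role": "app", "vcpus": 4, "memory_gb": 8,  "replicas": 2},
            {"role": "db",  "vcpus": 8, "memory_gb": 32, "replicas": 1},
        ],
        "policies": {"encrypt_storage": True},
    }

    def deploy(pattern):
        # Policy-based check before any resources are allocated.
        if not pattern["policies"].get("encrypt_storage"):
            raise ValueError("compliance policy requires encrypted storage")
        for tier in pattern["tiers"]:
            for i in range(tier["replicas"]):
                # A real SDE would provision and map a VM here;
                # this sketch only prints what would be requested.
                print(f"provision {tier['role']}-{i}: "
                      f"{tier['vcpus']} vCPUs, {tier['memory_gb']} GB")

    deploy(web_app_pattern)

The point of the pattern is that sizing and compliance decisions are captured once, by experts, rather than re-derived for every deployment.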

To be most effective, an organization has to be workload-aware, meaning it must use its application expertise to take into account the unique requirements of different application types, ranging from Web 2.0 services, to big data Hadoop workloads, to traditional three-tier applications.

Open Architecture

Ninety percent of CIOs view cloud as critical to their plans, according to the 2012 IBM CEO study. Organizations will increasingly adopt hybrid cloud environments and will demand a data center architecture in which public and private clouds can be seamlessly integrated. IT infrastructure will need to be built on open ecosystems that enable management interoperability across different vendor solutions via API extensions. This is why IBM is building its software defined solution on OpenStack and will support heterogeneous compute (KVM, Hyper-V, ESX, PowerVM), network (OpenFlow, Cisco, Juniper) and storage (IBM, NetApp, EMC) environments.
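
The value of that kind of open, driver-based architecture is that orchestration code is written once against a common interface. The Python sketch below is purely illustrative (the driver classes are invented for the example; a real deployment would go through OpenStack's own drivers and APIs):

    from abc import ABC, abstractmethod

    class HypervisorDriver(ABC):
        """One interface fronting heterogeneous back ends."""
        @abstractmethod
        def boot(self, name: str, vcpus: int) -> str: ...

    class KvmDriver(HypervisorDriver):
        def boot(self, name, vcpus):
            return f"kvm: booted {name} with {vcpus} vCPUs"

    class EsxDriver(HypervisorDriver):
        def boot(self, name, vcpus):
            return f"esx: booted {name} with {vcpus} vCPUs"

    # The orchestration layer is written once, against the interface,
    # so KVM, Hyper-V, ESX or PowerVM hosts are interchangeable underneath.
    def provision(driver: HypervisorDriver):
        print(driver.boot("web-01", vcpus=2))

    provision(KvmDriver())
    provision(EsxDriver())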

Integration of Skill Sets

A software defined environment will require not only a technological shift, but also a cultural shift within organizations. It will bring down the wall that exists between developers and IT operations, which will reduce the time lag between application design and production. SDE will allow application developers to have more freedom in managing the application life-cycle and understanding workloads. DevOps teams that are more aware of business needs will control data centers and bring about more efficiency and agility within businesses.

Bring Service Discipline to Data Centers

Developing and managing a software defined environment requires a services discipline: standard processes, procedures, roles and responsibilities. A software defined infrastructure is built top-down, block by block, on a layered architecture. Business needs and workloads are captured in software patterns, which in turn are mapped to infrastructure patterns that then define the hardware stack, as sketched below. The entire IT environment is also dynamically configured by analytics-based optimization, security and service-level policies. Maintaining and managing such a layered infrastructure will require discipline, and key operational and process activities will have to be standardized.
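
As an illustration of that top-down mapping, the hypothetical Python sketch below resolves a business workload to a software pattern and then to an infrastructure pattern (all names are invented for the example):

    # business workload -> software pattern -> infrastructure pattern
    software_patterns = {
        "online-retail": "three-tier-web-app",
        "clickstream-analytics": "hadoop-cluster",
    }

    infrastructure_patterns = {
        "three-tier-web-app": {"compute": "x86-vm-pool",
                               "storage": "block-ssd", "network": "vlan"},
        "hadoop-cluster":     {"compute": "bare-metal-pool",
                               "storage": "local-disk", "network": "10gbe-flat"},
    }

    def resolve(business_workload):
        """Walk the layers from business need down to the hardware stack."""
        sw = software_patterns[business_workload]
        return sw, infrastructure_patterns[sw]

    print(resolve("clickstream-analytics"))
    # -> ('hadoop-cluster', {'compute': 'bare-metal-pool', ...})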

Software defined environments, if built on the right resources with the right business input, can have tremendous business impact. A workload-driven SDE can manage massive scale, unpredictable transactions and increasing complexity without disrupting the whole operation. That can make businesses more agile and better able to take advantage of unanticipated market demand, giving them a considerable edge over the competition.

About the Author

Scott Firth is Director, Marketing – Software Defined Environment in IBM’s Systems and Technology Group. In this role, Mr. Firth leads a team responsible for demonstrating how IBM’s IT infrastructure software and solutions help clients create and deliver business value. He has been a recognized expert in virtualization since joining IBM as an engineer designing virtualized systems over thirty years ago. He has since held a wide variety of IBM sales, services, marketing and technical leadership positions across the USA and Europe.

Related items:

Facebook Advances Giraph With Major Code Injection 

Watson Moves Into the Call Center 

Big Transaction Data Creating Big Value 
