February 21, 2012

Making the Business Case for Big Data

Bryan Clark

Big data analytics is the latest must-have from the technology and consultancy vendors, but what is it for? Is it relevant to your business, and who should be paying attention to it?

You may have heard something like: “Analytics will monetize your business data.” A big claim, indeed, but let’s see, by way of example, how this can sometimes be true.

As is often the case, it is “computer people” who are mainly making the big claims for big data. So perhaps enterprise IT is a good example to start with in establishing whether this hidden value can be revealed.

Consider your budget strategy this year: revenues look soft, the balance sheet is in reasonable shape, but nobody seems to feel very confident. The consensus is that it is a good time to play things safe and hang on to the cash. At the IT budget review meeting, the opinions from a range of perspectives might be something like:

  • CEO (overview): We need to innovate: spend less on running the shop and a bigger percentage on new capabilities to make us more efficient and win new customers.
  • CFO (top-down): Costs need to come down, there is no appetite for capital investment, and if things continue like this we'll be reviewing our costs again at the end of the first quarter.
  • COO (end-to-end): Services must be excellent, with no possibility of failure, and if business picks up we need to scale up quickly. In the meantime, IT needs to take a smaller percentage of the operating budget.
  • CIO (bottom-up): I wonder how I am going to do that?

Add into the mix that over the past three years: costs have been very tightly managed; suppliers have been squeezed; growth in demand for IT has absorbed most of the savings coming from technical improvements; and a series of business transformation projects have re-directed monies that would otherwise have been spent on renewal of base capability.

The challenge this year is compounded by the past three years. On top of that, there is an increased expectation of improvements in both flexibility and service levels, based on the consumer experience of IT. It is getting harder to make savings.

Let's look at how analytics might help, and by how much. For example, in the financial services industry this year you would be about average if you spent 30% of your IT budget on the data centre and data network, which in general might equate to around $7.5k per employee per annum. If you could find a way to reduce that cost by 15%, then, for a 10,000-employee business, that is over $10m of savings from an overall budget (opex + capex) of $250m. Obviously these are hypothetical numbers, though in many cases they will be realistic. And while these are not astonishing savings, they are most certainly worth having, if not exceeding.
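To make the arithmetic concrete, here is a minimal sketch of that calculation in Python, using the hypothetical figures above:

```python
# Hypothetical figures from the example above.
employees = 10_000
it_budget = 250e6            # overall IT budget (opex + capex), in $
dc_network_share = 0.30      # share spent on data centre + data network
target_reduction = 0.15      # proposed cost reduction

dc_network_cost = it_budget * dc_network_share    # $75M
per_employee = dc_network_cost / employees        # $7.5k per employee per annum
savings = dc_network_cost * target_reduction      # $11.25M, i.e. "over $10m"

print(f"DC + network cost: ${dc_network_cost / 1e6:.1f}M "
      f"(${per_employee:,.0f} per employee)")
print(f"Savings at {target_reduction:.0%}: ${savings / 1e6:.2f}M")
```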

Making a list of ways to get at these putative savings throws up issues for the CIO, for example:

  • Cancel all data centre CAPEX. Issue: what will run out of capacity first? When will the lights go out?
  • Put it in the cloud. Issue: how big a cloud? (Or, if you are already there: have I got the right size cloud?)
  • Turn stuff off. Issue: will our business still run?
  • Reduce service levels a bit (but not too much). Issue: what will happen to services if we spend less?

The ability to answer these kinds of questions will determine whether a business can achieve the savings it would like. One reason to be cheerful is that the basic data necessary to answer them are already owned by the IT team.

Most systems keep copious records of everything they do, but these logs are rarely examined and hardly make for light reading. Furthermore, each log is an island unto itself: it does not relate easily to the other logs. What is needed is a tool to bring these data together, join the dots and make clear how all this technology maps to business throughput. Then we can create the insight necessary to improve the business's bottom line. That is big data analytics.

Let's zoom right in to a single system and show how this could be done. Consider a simple multi-tiered e-commerce application: say, web servers in front of application servers in front of a database.

Logs of almost everything that happens are available for each of the main components of the system. If we can join these logs together, we can trace each customer transaction through the overall system and produce an analysis showing the end-to-end throughput of the system in business terms, i.e. what was delivered, what it took to do it and how quickly it happened. Having done this, we might, as a first step, plot some graphs to understand broadly what was happening in our business and our computer systems.
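As an illustration, here is a minimal sketch of the joining step using pandas; the file names, column names and the shared txn_id key are assumptions made for the example, not any particular product's log format:

```python
import pandas as pd

# Illustrative per-component logs, each keyed by a shared transaction ID.
web = pd.read_csv("web_access_log.csv")   # txn_id, ts_start, url
app = pd.read_csv("app_server_log.csv")   # txn_id, cpu_ms
db  = pd.read_csv("db_log.csv")           # txn_id, rows_read, ts_end

# Join the separate "islands" into one end-to-end record per transaction.
txns = web.merge(app, on="txn_id").merge(db, on="txn_id")

# End-to-end response time: how quickly each transaction happened.
txns["elapsed_s"] = (pd.to_datetime(txns["ts_end"])
                     - pd.to_datetime(txns["ts_start"])).dt.total_seconds()

# Roll up per hour: what was delivered, what it took, how fast it was.
txns["hour"] = pd.to_datetime(txns["ts_start"]).dt.floor("h")
hourly = txns.groupby("hour").agg(
    transactions=("txn_id", "count"),
    cpu_ms=("cpu_ms", "sum"),
    median_elapsed_s=("elapsed_s", "median"),
)
print(hourly.head())
```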

Interesting (if you like that sort of thing) – but not enough.

With a little bit of statistical analysis on the data we have collected, we can go further: we can establish the relationship between business demand and the computer resources used by each system component, and plot how each component scales against business need.

This gives us a predictive model of what happens to each system component when business demand changes. If we have the tools, we can do this for every system component. Getting there.
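As a sketch of that modelling step, here is an ordinary least-squares fit with scikit-learn; the data are synthetic stand-ins for the hourly roll-up built above, with made-up coefficients:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for four weeks of the hourly roll-up: transactions
# per hour, and the CPU one component consumed in each of those hours.
rng = np.random.default_rng(42)
transactions = rng.integers(5_000, 50_000, size=24 * 28).astype(float)
cpu_ms = 12.0 * transactions + 40_000 + rng.normal(0, 20_000, transactions.size)

# Fit resource use as a function of business demand.
model = LinearRegression().fit(transactions.reshape(-1, 1), cpu_ms)
print(f"CPU per transaction: {model.coef_[0]:.1f} ms, "
      f"fixed overhead: {model.intercept_ / 1000:.0f} s/hour")

# The fitted line is the predictive model: resources needed at any demand.
double_peak = np.array([[2 * transactions.max()]])
print(f"CPU needed at twice peak demand: {model.predict(double_peak)[0]:,.0f} ms/hour")
```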

The final technical step in this simple example is to use the predictive models we have built to show how much headroom we have in each component. In this example, we can see immediately that the current infrastructure provisioning is wildly unbalanced.
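Here is a sketch of that headroom calculation; the per-component coefficients, overheads and capacities are invented purely to illustrate the kind of imbalance discussed next:

```python
# Per-component predictive models from the regression step (illustrative
# figures): CPU per transaction, fixed overhead, and installed capacity.
components = {
    # name            ms/txn  overhead ms/h  capacity ms/h
    "web servers":   (12.0,        40_000,    40_000_000),
    "app servers":   ( 4.0,        20_000,    80_000_000),
    "database":      ( 9.0,        60_000,    95_000_000),
}

peak_demand = 1_500_000  # transactions per hour at today's peak

for name, (ms_per_txn, overhead, capacity) in components.items():
    used = ms_per_txn * peak_demand + overhead
    max_txns = (capacity - overhead) / ms_per_txn
    print(f"{name:11s}: {used / capacity:5.1%} used at peak, "
          f"headroom to {max_txns / peak_demand:.1f}x current demand")
```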

So, who ordered those web-servers? Let's not go there. Rather, the GOOD NEWS is that we could more than double the number of transactions processed. The other components, however, are way oversized, which in a topsy-turvy sort of way is also GOOD NEWS, because it gives us options either to re-purpose the over-capacity or simply to downsize. If this analysis is done prior to migrating to a cloud-based service, the savings achieved by re-sizing as part of the migration will be considerable. It seems clear that the 15% savings target can easily be achieved in this system: we can say exactly what needs to be done to manage our costs, and how much that is worth is easy to calculate. The above example is not unusual in typical enterprise systems.

Now let's zoom right back out and look at how this could be extended over the whole of the IT estate. Analysed in the same way, a large IT infrastructure provides the basis for a consolidation programme designed to reduce cost, deliver predictable service levels based upon business metrics, and provide known headroom for growth.

We might decide to spread the consolidation plan over a period of time. If we compare this with our baseline costs we can see a positive NPV over our planning horizon:

                                 Year 1   Year 2   Year 3

Current Costs ($M)
  Data Centre                        48       44       37
  Users                              39       39       39
  Networks                           41       39       35
  Applications                       86       86       86
  Overhead                           36       36       36
  Total                             250      244      234

Savings (%)
  Data Centre                       7.5     15.0     15.0
  Networks                          5.0     10.0     10.0

Savings ($M)
  Data Centre                       3.6      6.6      5.6
  Networks                          2.1      3.9      3.5

Total Savings ($M)                  5.6     10.5      9.1
Implementation Cost ($M)           (1.0)    (0.5)    (0.5)
Net Savings ($M)                    4.6     10.0      8.6

NPV ($M @ 10% Discount Rate): 19.0
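The NPV line is easy to reproduce; a minimal sketch discounting each year's net savings at 10%, on an end-of-year convention:

```python
net_savings = [4.6, 10.0, 8.6]   # $M, years 1-3 from the table above
rate = 0.10

npv = sum(s / (1 + rate) ** year
          for year, s in enumerate(net_savings, start=1))
print(f"NPV: ${npv:.1f}M")       # ~= $19M to the nearest $M
```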

The savvy CIO who has been through their budget review with something like the above may well be tempted to start thinking about how the same approach could be used elsewhere in the business. A good starting point might be a series of "did-you-knows", such as:

  • If we reduced the transaction times on our forex trading by 50%, we would quadruple our revenue. Shall we build a business case for the investment?
  • We will not achieve our planned 30% savings in Finance & HR unless we size our offshore SSC correctly. Have we got the right capacity to deliver our business processes reliably?
  • Low-cost IT cloud provisioning is quick and cost-effective when a business unit needs immediate capability for a new product launch, as long as it is de-commissioned afterwards. Have we remembered to do that?

And so on; I am sure you get the idea. Analytics is not just for IT: it can deliver business benefit throughout the organization. Our example here is just that, an example, but one designed to show that big data really do contain valuable information. As times get harder, it gets harder to find ways to improve your business, and companies that take the time to mine their data for that value will do better.

Big data analytics won't fix everything, but in many cases it will certainly help. It is about understanding complex operating relationships on a large scale, and it is certainly relevant to large businesses that want to lower costs without compromising their ability to deliver. You should definitely pay some attention to it. For starters, you might set yourself the target of reducing your IT cost per head by $1,000.

Bryan Clark is CEO of Sumerian, an established IT analytics business. Prior to that he was a Partner and CIO for KPMG ELLP. He has over thirty years' experience working in IT and is passionate about deriving business value from technology.
