How can you tell what value IT is adding to your organisation?

Sir Andrew Likierman gives some practical measurement advice.

Are you worried about insistent calls for IT to show how it is adding value? Or annoyed by uninformed assumptions that everything should be outsourced?

It’s tempting to be impatient or irritated, but both reactions need to be resisted - you need to look again at your performance measures.

Good measures are essential to make sure that IT can decide between priorities and plan its activities. They are also crucial to making a convincing case in the annual budgeting round. Yet a Deloitte survey of 300 chief information officers last year showed that cost and productivity measures were most frequently used. Neither is up to the job.

Cost ratios (say cost of IT relative to turnover or IT cost per unit of output) have value as a reality check but are hopeless as performance indicators. They give no indication of how well the money is being spent - it might be better to spend more money, not less. Ratios are also difficult to compare when some spending is decentralised.
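As a rough illustration of why these ratios can only ever be a reality check, here is a minimal sketch in Python; all names and figures are hypothetical, not drawn from the survey:

```python
# A minimal sketch with hypothetical figures: why cost ratios are a
# reality check, not a performance indicator.

def it_cost_ratio(it_cost: float, turnover: float) -> float:
    """IT cost as a percentage of turnover."""
    return 100 * it_cost / turnover

def cost_per_unit(it_cost: float, units_of_output: int) -> float:
    """IT cost per unit of output."""
    return it_cost / units_of_output

print(f"{it_cost_ratio(2_000_000, 100_000_000):.1f}% of turnover")   # 2.0% of turnover
print(f"{cost_per_unit(2_000_000, 50_000):,.0f} per unit of output")  # 40 per unit of output
# Neither figure says whether the 2m is well spent: a rival spending
# 3% of turnover might be buying reliability that repays the
# difference many times over.
```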

Productivity is worth comparing against budget if used together with other measures, but it has big limitations. Are more customer visits good? Or are they there to remedy faults that should not have existed in the first place? Fewer or quicker jobs are great for cost cutting (why not get rid of IT altogether and save loads of money?), but performance isn't about saving time.

Measures carrying far more weight are those related to results. A system that is always down is certainly not performing. And rave reviews from users show that something is right. Yet even these have to be handled with care.

In fixing target downtime, have the trade-offs been well-judged? And who will make the judgement? Similarly the users may be enthusiastic, but are they getting quality at a disproportionate cost?

In most organisations neither the senior management nor the users are sufficiently well-informed to know how well the function has handled key trade-offs. These problems can be addressed. Set out below are four ways to do so:

  1. Connect the performance clearly to the objectives of the organisation as a whole.
    The CIO needs to be in the lead, not only in translating objectives into IT requirements, but also in giving guidance on what needs to be done at budget time.

    Both mean providing the appropriate level of sophistication for what users need and can use, not meeting the CIO's personal fantasy wish-list.

    It means agreement with finance about cost, speed and reliability trade-offs. Colleagues will have to rely on IT's judgement and expertise here, and it is crucial to build trust.

    Target levels must also take account of agreed risk levels. Avoiding all risk (absolutely minimal downtime for IT or guaranteed immediate availability of technical advice) could be very expensive.

    Senior management must help by specifying what risk they are prepared to tolerate.

  2. Improve the sophistication of measures and the way they are used.
    The move from activity and cost measures to something better starts with considered, agreed targets.

    For example, the percentage of target hours an application is fully operational should reflect understanding by IT and users of the balance between service requirements, cost and resources (a short availability sketch follows this list).

    More detail is another way to improve sophistication. Response times need to reflect types of call; utilisation targets should reflect different skills; jobs should distinguish the essential from the optional.

    An IT scorecard may also be worth considering as a means of gathering measures into a single framework. But don't give all measures equal weight - use a few of high quality (a weighted scorecard sketch also follows this list).

    The common danger of scorecards is ending up with too many measures to handle, with resulting confusion about priorities.

    Useful comparisons are essential to achieve more sophistication, especially for organisations too small to have their own CIO.

    Comparisons with other IT functions are obviously desirable if relevant and available, say with other divisions within a large multinational or through a benchmarking club.

    But in most cases comparisons of the whole function will not be relevant (for example because of very different development or security profiles) or the data may not be publicly available. So it is often best to compare elements of the function.

    Dealing with worms or phishing, incident or problem management, the level of desktop support, handling of projects and take-up of new techniques are just some possible areas.

    Useful comparisons can also be made with parts of other service functions inside your own organisation.

    How about talking to finance about approaches to dealing with complaints, to procurement about response times or to the research people about how to quantify costs and benefits?

  3. Improve the quality of feedback to and by IT.
    Senior management don't know what they don't know. Asking whether IT gives a 'good' service is not enough if they wouldn't recognise a poor one.

    Questionnaires are valuable, but key customers need to be identified and then asked for the right information.

    Questions should cover both process ('How prompt was the response to your requests for help?') and outcome ('Does the service fully meet your requirements?').

    There should also be a question at the end asking whether the person has recent experience of IT systems elsewhere. This puts the answers in context.

    Questionnaires must be supplemented by regular informal discussions with key customers and a sample of other users.

    These will help give a better feel for how the function is seen than answers on forms. They also give the chance to brief on what is going on, choices faced by IT and how the organisation can use IT better. Such discussions provide other opportunities.

    One is to educate users about why a service level has been chosen. It is no use deciding on a four-hour turnaround when an instant response is expected.

    Another opportunity is for discussion on how to improve specification and handling of projects - which is all too often IT's Achilles heel.

    A good measure of confidence in the function is Board reaction when new proposals come up.

  4. Recognise the limitations of measures.
    Where possible, mitigate them. Some measurement problems can be overcome - the quality of data, the relevance of comparisons and the usefulness of feedback.

    But some can only be mitigated, as with the estimates of non-monetary benefits from new equipment. Some improvements may be possible but not cost-effective.

    Mitigating the problems should be done through a first-class commentary to accompany the figures. First-class means lucid, focused, balanced, concise and jargon-free.

    Where measurement is not possible, such as success in reducing attacks, proxies may be inevitable - say, installation of the latest patches and the network configuration as proxies for a state of readiness. But proxies will always be second best, and the links to what is being measured will need to be properly explained.

    Any savings claims must also be robust to challenge by finance, not least because a claim that is successfully challenged undermines the credibility of the whole measurement process.
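
To make the availability measure in point 2 concrete, here is a minimal sketch of the calculation, assuming a hypothetical service window agreed with users; the target hours and outage minutes are illustrative only:

```python
# Minimal sketch: percentage of target hours an application was fully
# operational. The service window and outage figure are hypothetical.

TARGET_HOURS_PER_MONTH = 22 * 12  # e.g. 22 days x 12 hours agreed with users

def availability_pct(outage_minutes: float,
                     target_hours: float = TARGET_HOURS_PER_MONTH) -> float:
    """Share of the agreed service window the application was up."""
    target_minutes = target_hours * 60
    return 100 * (target_minutes - outage_minutes) / target_minutes

print(f"{availability_pct(95):.2f}%")  # 99.40% of target hours
# The target itself is where the judgement lies: the agreed window
# should encode the trade-off between service, cost and risk, not
# simply default to 'as close to 100% as possible'.
```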
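
And to illustrate the point about weighting, here is a sketch of a small IT scorecard with a few measures and explicit, unequal weights; the measures, weights and scores are again hypothetical:

```python
# Minimal sketch of a small, explicitly weighted IT scorecard.
# The design point: few measures, high quality, unequal weight.

scorecard = {
    # measure: (weight, score out of 100)
    "availability vs agreed target": (0.40, 92),
    "user satisfaction (key customers)": (0.30, 78),
    "projects delivered to spec and budget": (0.20, 65),
    "cost vs agreed budget": (0.10, 88),
}

# Weights should sum to 1 so the overall score stays interpretable.
assert abs(sum(w for w, _ in scorecard.values()) - 1.0) < 1e-9

overall = sum(weight * score for weight, score in scorecard.values())
print(f"Overall: {overall:.1f}/100")  # Overall: 82.0/100
for name, (weight, score) in sorted(scorecard.items(),
                                    key=lambda kv: -kv[1][0]):
    print(f"  {name:40s} weight {weight:.0%} score {score}")
```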

In conclusion

Better performance measurement will not only hugely improve IT management; it will also raise the quality of debate about strategic and operational choices through greater understanding and credibility.

IT functions must be active in generating good measures, not the passive victims of poor ones.

Sir Andrew Likierman is professor of management practice at the London Business School where he is working on ways to improve the quality of measurement in management. Email: alikierman@london.edu

In a nutshell

  • Good measures are essential to make sure that IT can decide between priorities and plan its activities.
  • Measures carrying far more weight are those related to results.
  • Colleagues will have to rely on IT's judgement and expertise - it is crucial to build trust.
  • An IT scorecard may also be worth considering as a means of gathering measures into a single framework.
  • Questionnaires are valuable, but key customers need to be identified and then asked for the right information.
  • Any savings claims must be robust enough to answer a challenge from finance.