Change is all around us, whether it be in the political sphere, the value of our personal investments or the joy of seeing our children grow up. Improvement is that rare beast - change for the better. In an ideal world we would wish that all change is for the better. But how would we know?
Clearly a comparison is made between a previous state and the current state, and some scale is used to determine whether things are worse, the same or better. In the author's experience the scale used is invariably qualitative. Things are said to be 'better' or 'easier' than before.
There can even be some qualification of this: 'there's not as much disruption as before,' or 'we don't have as many emergency restarts these days.' But should you ask how much disruption was there before, or by how much is it now reduced, the answer will be a qualitative 'some'.
'Our managers have all gone through the frustration of debating how high are "high" costs, how poor is "poor" quality. They know that once we set up units of measure, the debate shifts from the meaning of adjectives to doing the job at hand, which is as it should be.'1
Joseph Juran wrote these words in 1964, so the importance of measurement has clearly been recognized for some time. However, the fact that 75 per cent or more of improvement programmes cannot show a measurable improvement testifies that the practice is more difficult than simply recognizing its importance.
The ability to say that the process improvement programme has led to annual savings of £1.2 million, dwarfing its £145,000 cost by over eight times in the first year alone, is so powerful that it is amazing that all investment committees do not demand this level of quantitative assurance.
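The arithmetic behind such a claim is worth making explicit; a minimal sketch using the figures quoted above:

```python
# First-year return on the improvement programme, using the
# figures quoted in the text (illustrative, not universal).
annual_savings = 1_200_000  # £
programme_cost = 145_000    # £

roi_multiple = annual_savings / programme_cost
net_benefit = annual_savings - programme_cost

print(f"Savings are {roi_multiple:.1f}x the cost")     # ~8.3x
print(f"Net first-year benefit: £{net_benefit:,}")     # £1,055,000
```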
It is clear that without measurement there can be no measurable improvement, but the use of measurement as part of an improvement programme confers much broader benefits.
Many studies have been published on the factors critical to the success of improvement programmes. The author's synthesis of this wealth of material is:
- clarity of purpose;
- clear and measurable goals;
- trained resources;
- relevant reporting.
The rest of this article looks at how including measurement as part of an improvement programme enhances these factors and introduces some basic contents for an organisation's measurement toolkit.
Clarity of purpose
The purpose of measuring is to provide data that enables better management decisions to be taken. The information management needs is that which supports its strategic goals and business objectives. If these needs are not clear, the rationale for measurement will be equally unclear. Indeed, a measurement programme often exposes a lack of clarity in management objectives.
The Balanced Scorecard2 was developed specifically to give top management teams information on how their organisation was performing against their strategic goals, something purely financial reporting could never do. Kaplan and Norton report that companies are using the scorecard, inter alia, to:
- clarify and update strategy;
- communicate strategy throughout the company;
- align unit and individual goals with the strategy;
- link strategic objectives to long-term targets and annual budgets.
Using the balanced scorecard to identify how the organisation would know if the improvement programme had met its objectives clarifies the objectives of the programme in a way that can be easily communicated to those involved.
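One way to picture a scorecard is as a mapping from each of the four Kaplan and Norton perspectives to an objective, a measure and a target; the entries below are hypothetical illustrations, not prescribed content:

```python
# A minimal Balanced Scorecard sketch: the four Kaplan-Norton
# perspectives, each with a hypothetical objective, measure and target.
scorecard = {
    "Financial": {
        "objective": "Reduce cost of rework",
        "measure": "Monthly rework cost (£)",
        "target": 50_000,
    },
    "Customer": {
        "objective": "Improve delivery reliability",
        "measure": "% releases delivered on agreed date",
        "target": 95,
    },
    "Internal Process": {
        "objective": "Reduce emergency restarts",
        "measure": "Emergency restarts per month",
        "target": 2,
    },
    "Learning and Growth": {
        "objective": "Train staff in the improved process",
        "measure": "% staff trained",
        "target": 100,
    },
}

for perspective, entry in scorecard.items():
    print(f"{perspective}: {entry['measure']} -> target {entry['target']}")
```

Laying the scorecard out this way makes it easy to ask, perspective by perspective, how the organisation would know the improvement programme had met its objectives.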
Clear and measurable goals
Specific Goal 1 of the Capability Maturity Model Integrated (CMMI)3 Measurement and Analysis (M&A) process area requires alignment of measurement and analysis activities with business objectives: 'Measurement objectives and activities are aligned with identified information needs and objectives.'
Once the strategic intent is clear and aligned, the organisation can move on to specifying clear and measurable goals. The use of the Goal - Question - Measure (GQM)4 approach enables an organisation to start from the strategic goals on the Balanced Scorecard to identify measures that will show performance against those goals. This provides the alignment required, but not the detail.
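The GQM breakdown can be sketched as a small tree; the goal, questions and measures below are hypothetical examples of the kind of decomposition the approach produces:

```python
# Goal-Question-Measure sketch: one strategic goal broken down
# into questions, each answered by candidate measures
# (all entries are hypothetical illustrations).
gqm = {
    "goal": "Improve budget performance of projects",
    "questions": [
        {
            "question": "How far do projects deviate from budget?",
            "measures": ["planned cost", "actual cost", "cost variance %"],
        },
        {
            "question": "Is budget performance improving over time?",
            "measures": ["monthly trend of cost variance %"],
        },
    ],
}

# Flatten the tree to list every measure the programme must then define.
measures = [m for q in gqm["questions"] for m in q["measures"]]
print(measures)
```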
Specification of measures is a nontrivial activity. We can be clear that we wish to measure the budget performance of projects, but what data do we actually need to gather? Measurement Constructs5 are an excellent way of defining the detail we need for the measures identified using GQM.
But they take time to complete. Even with a good template and experience, it is worth allowing one month per measure to ensure that the definition is right and the organisation has time to absorb it.
Measurement Constructs help us with the detail. We may be clear on what we're measuring, but it can still be unclear how to measure it. When does the week end? Is it 5.00pm Friday or 8.59am Monday? Do we measure in days or hours? These are not difficult things to decide, but there are many and they need to be defined so that the data gathered is coherent across the organisation.
Specific practice 1.2 of the CMMI M&A process area requires exactly this clarity in the definition of measures: 'Specify measures to address the measurement objectives.' The author has used GQM and Measurement Constructs together to provide a powerful and clear definition of the goals to be achieved.
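The level of detail a Measurement Construct pins down can be sketched as a simple record; the fields and values below are illustrative, not the actual template from Practical Software Measurement:

```python
from dataclasses import dataclass

@dataclass
class MeasureDefinition:
    """Illustrative subset of the detail a Measurement Construct captures."""
    name: str
    unit: str               # e.g. hours, not days - decided once, organisation-wide
    collection_cutoff: str  # resolves questions like "when does the week end?"
    data_source: str
    analysis: str

# A hypothetical definition for a project-effort measure.
effort = MeasureDefinition(
    name="Project effort",
    unit="hours",
    collection_cutoff="17:00 Friday",
    data_source="weekly timesheets",
    analysis="compare actual against planned effort per project",
)
print(effort.unit, effort.collection_cutoff)
```

Writing the cutoff and unit into the definition itself is what keeps the data coherent across the organisation.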
Trained resources
Specific practices 1.3 and 1.4 of the CMMI M&A process area are 'Specify data collection and storage procedures' and 'Specify analysis procedures' respectively. These provide a sound basis for training everyone affected in the organisation on their measurement and analysis responsibilities. Training people on the existence and content of the procedures provides the measurement programme with the following benefits:
- everyone involved is clear on what they are expected to do;
- any problems or lack of understanding are identified and can be fixed early.
This makes an enormous contribution to the success of the programme.
Relevant reporting
Even where people are clear on the need for improvement, on the specific improvements defined and on their part in them, they can become disconnected from the programme as time goes by. Their day job takes over and the excitement of being part of the improvement can fade.
In the author's experience the monthly production of the measures defined, maybe as part of some scorecard or dashboard, keeps levels of engagement and enthusiasm high. It also provides a monthly opportunity to evaluate the progress of the improvement programme, and forecast whether it is likely to meet its targets or not. This in itself raises the chances of success.
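Such a monthly forecast can be as simple as extrapolating the savings run rate to year end; a minimal sketch with hypothetical figures:

```python
# Forecast whether the programme will hit its annual savings target
# by extrapolating the monthly run rate (all figures hypothetical).
monthly_savings = [60_000, 75_000, 90_000, 95_000]  # £, months 1-4
annual_target = 1_200_000                            # £

months_elapsed = len(monthly_savings)
run_rate = sum(monthly_savings) / months_elapsed
forecast_year_end = sum(monthly_savings) + run_rate * (12 - months_elapsed)

on_track = forecast_year_end >= annual_target
print(f"Forecast: £{forecast_year_end:,.0f} - "
      f"{'on track' if on_track else 'behind target'}")
```

A forecast that shows the programme falling short, as this one does, is exactly the early warning that lets corrective action be taken while there is still time.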
Specific Goal 2 of the CMMI M&A process area, 'Provide measurement results,' supports this critical success factor.
Made to measure
The discussion above shows clearly that measurement is critical to any successful improvement programme and provides us with a basic toolkit. Yet still measurement programmes are the exception rather than the rule. Why is this? Typical barriers are:
- Don't want to be measured (fear leading to resistance).
- Why are we measuring? (doubt about whether we should be doing this at all).
- What are we measuring? (uncertainty).
- How do we measure it? (more uncertainty).
An improvement programme designed to incorporate the critical factors above eliminates or at least minimizes these barriers. The table below shows the impact (high or medium) of the critical success factors on the barriers.
The author has combined all these elements in an approach that maximizes the success of a measurement programme:
- The Balanced Scorecard clearly identifies and defines what the management information needs are.
- The completed scorecard communicates the intent to the organisation; allow plenty of time for people to question it, and invite their involvement as much as possible.
- GQM helps to carefully identify the measures needed.
- Measurement Constructs define the detail.
In all of this, the practices of the CMMI M&A process area act as useful amplification to the Balanced Scorecard and GQM. Such an approach can help overcome the natural barriers that exist to measurement and greatly support the successful execution of improvement programmes.
1. Juran JM (1964) Managerial Breakthrough, p219. McGraw-Hill.
2. Kaplan RS, Norton DP (1996) Using the Balanced Scorecard as a Strategic Management System. Harvard Business Review, Boston.
3. Capability Maturity Model Integration (CMMI) version 1.1, Software Engineering Institute, Carnegie Mellon University.
4. Basili VR, Weiss DM (1984) A Methodology for Collecting Valid Software Engineering Data. IEEE Transactions on Software Engineering, SE-10(6), 728–738.
5. McGarry J et al (2001) Practical Software Measurement, p19. Addison-Wesley.