There - I’ve finally said it in public.
My reasoning behind this is twofold: firstly, there is the problem of complexity; secondly, there is the issue of off-plan activity that is essential but which plans do not capture.
Taking these in reverse sequence, it is clear, especially for data migration projects, that a lot of what we do comes under the heading of unplanned activity. How can this be? Well, I always say that data migration planning is easy - we are asked to plan the move of data from undocumented and unknown systems, of unknown data quality, to an undefined target. No planning problems there then!
So how do we manage this?
Well, using PDMv2 we can have a period of landscape analysis that aims to map out the answers to the first half of the problem. We then have a slice of gap analysis and mapping that closes in on the second, before we drop into migration design and execution, by which time we should be in a position of knowledge that allows us to plan down to the lowest level of detail.
However, lay that out on a timeline and you see that those first two stages are going to consume 80 per cent of the elapsed time. So 80 per cent is either unplanned, or we make a show of planning. This show of planning is like the early medieval maps of the world, where whole continents were missing and, of those that were present, the interiors of Africa and the Americas were blank.
Of course, in PDM we expend probably more than half our effort in the DQR (Data Quality Rules) process, managing all the data quality and data preparation activities in a joint venture with our business colleagues. All of this is reactive to what we find in the source and what is required by the target.
I realise that data migration is something of an oddity, but on the bigger programme of which it is a part there is a mirror in the risks and issues management process and, of course, in the planning process itself - none of which is to be seen on published plans.
Now let’s return to the question of whether a large programme is plannable at all, even in theory. I have been on projects where the plans run into tens of thousands of lines and a dozen or more full-time equivalents are employed to maintain them. Recently I have also started taking an interest in chaos theory.
Under chaos theory it is demonstrable that a system governed by relatively simple, deterministic equations that are sensitive to their starting values can have an outcome that is in principle unpredictable. Weather systems are the classic example of this. We can predict the weather 12 hours ahead with a degree of accuracy, and our three-day forecasts are quite good, but beyond that the slightest change in starting values can have a huge impact on outcomes, so long-range weather forecasts of even the most general kind are just not reliable.
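To make that sensitivity concrete, here is a minimal sketch - my own illustration, not anything from a forecasting system - using the logistic map, the textbook example of chaos. The starting values, the one-in-a-billion nudge and the step counts are arbitrary choices of mine.

```python
# The logistic map x -> r * x * (1 - x) is chaotic at r = 4.0:
# two starting values differing by one part in a billion soon
# produce completely unrelated trajectories.

R = 4.0  # growth parameter; r = 4.0 puts the map in its chaotic regime

def trajectory(x0: float, steps: int) -> list[float]:
    """Iterate the logistic map from the starting value x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(R * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000000, 50)
b = trajectory(0.400000001, 50)  # starting value nudged by 1e-9

for step in (1, 10, 25, 50):
    print(f"step {step:2d}: divergence = {abs(a[step] - b[step]):.9f}")
# The first few steps agree to many decimal places (the 12-hour
# forecast); within a few dozen steps the two runs bear no relation
# to each other (the long-range forecast).
```

The rule could hardly be simpler, yet the long-run outcome cannot be predicted from any finitely precise starting value - which is the whole point.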
This of course has a parallel in our planning efforts. Plans are made of cascading activities that are, in effect, quite simple calculations with a limited number of starting values: start date, duration, effort, number of resources and so on. (I am aware that Microsoft, in their desire to enhance the perceived value of their product, have added a load more to Project, but that only reinforces my argument.)
The result of this is one we are all aware of. You turn on auto-scheduling at your peril. Most of us know what the end date is; we bung in a load of activities, then we go through the various views to level the plan. When that doesn't work we go back and fiddle the values until we get the result we want - the one we intuitively know is right (or the one our masters demand of us). Turn on auto-scheduling and the plan explodes, going from 12 months to 36 months of elapsed time in an instant.
As part of my summer reading I’m going to be taking away some stuff on chaos theory, so for now I’ll leave this as an interesting parallel - I can’t say for certain whether it is really applicable. But it is also true that, once in flight, any small change in one part of the leviathan plan can have huge impacts elsewhere. The butterfly flapping its wings indeed!
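To see that ripple mechanically, here is a toy forward-pass scheduler - entirely my own sketch, with invented task names and durations, and nothing like the full machinery of a real planning tool. Each activity starts when its latest predecessor finishes, so a plan really is just a cascade of simple calculations from a handful of starting values.

```python
# A toy forward-pass schedule: each task is (duration_in_days,
# list_of_predecessors). All names and numbers are invented.

def end_date(tasks: dict[str, tuple[int, list[str]]]) -> int:
    """Return the overall finish time in working days from day 0."""
    finish: dict[str, int] = {}

    def finish_of(name: str) -> int:
        if name not in finish:
            duration, preds = tasks[name]
            start = max((finish_of(p) for p in preds), default=0)
            finish[name] = start + duration
        return finish[name]

    return max(finish_of(t) for t in tasks)

plan = {
    "landscape_analysis": (40, []),
    "gap_analysis":       (30, ["landscape_analysis"]),
    "mapping":            (25, ["gap_analysis"]),
    "migration_design":   (20, ["mapping"]),
    "execution":          (15, ["migration_design"]),
}
print(end_date(plan))  # 130 days to complete

# Nudge one starting value: landscape analysis slips by five days...
plan["landscape_analysis"] = (45, [])
print(end_date(plan))  # ...and the finish cascades out to 135 days
```

In this bare serial chain a small slip merely shifts the end date by the same amount; arguably it is the extra machinery of real tools - resource levelling, calendars, constraints and cross-dependencies - that turns that smooth response into the explosive recalculations described above.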
Now it could be argued that this is exactly what a robust planning activity should support. All those unforeseen connections that impact on the critical path should be exposed. And I might agree with that proposition except for a theoretical and a practical consideration.
Firstly, the theoretical: given the inherently chaotic nature of a huge plan, is it possible, even in principle, to sensibly create and link all those lines with the correct weightings, durations, allocations and so on? Just like the weather, we can plan accurately a few days out, but as to what happens in three months' time?
Secondly, the practical: whenever I cause havoc in the programme plan with some well-intentioned adjustments, the response of the programme management office is to a) ban me from going anywhere near the master plan ever again and b) fiddle it back to a position that matches what the Programme Director wanted in the first place. So you see, I don't think even the most fundamentalist priests at the temple of MSP really believe they can allow the project to be managed by the plan.
I will return to this topic in my next blog, when I’ll look at the alternatives to these behemoth plans, but for now I will leave you with this observation: on the projects I have visited that are really bombing, the amount of planning is always inversely proportional to the level of success. Like desperate sailors in sinking ships baling furiously with leaky buckets, plans and re-plans flash about the project, sometimes to the extent that little else gets done at a management level.
I think the planning is the effect of imminent failure, not the cause - but I'm not sure.
And finally, as many of you will be aware, I have two white papers on the Experian website that look at how to use data migration projects to kick-start your data management initiative.
jmorris@iergo.com