It was, I suggested, useful, but limited to the normal boundaries of most Systems Integrator (SI) centric methods - fine for data management once it has crossed the de-militarised zone (DMZ) into the purview of the SI, but silent on the mass of activity outside the DMZ where the selection, pre-processing, data quality resolution and legacy system retirement take place. And then along comes a white paper from Utopia Inc, ‘Data Migration Management. A Methodology’, sponsored by SAP themselves, no less.
I’m sure the two events are a coincidence.
So, how far does Utopia demonstrate an understanding of the full scope of Data Migration beyond the limits of the DMZ? Here I am frustrated by hints in their literature... but more of that anon once we have looked at what is good in what they have to say.
As might be supposed from a paper closely tied to SAP, their methodology is tightly modelled on the phases within ASAP - Project Preparation, Business Blueprint, Realisation, Final Preparation, Go Live and Support. And quite right too: stick to a process model and a set of semantics that is understood by the ecology you work most closely with. As they say, Data Migration has a common set of requirements whether you are migrating to SAP or to any other application, and their approach would be just as applicable elsewhere, but ASAP is the model they are using.
The paper opens well enough with a good understanding of why and how Data Migration is such a problem for most implementations. They make a great case for moving it up the priority list. They also recommend legacy profiling activities within the Project Preparation phase, which is excellent. However, they do not mention issues like Legacy Data Store selection clearly enough within Preparation.
As we all know, in modern, post client-server environments, there are often multiple potential sources for data items. How do we get to choose the most appropriate? This is where my frustration starts to creep in. It is obvious that the author(s) of the paper are experienced. They must have faced this problem. Giving it a mention would have been helpful. Providing a mechanism for solving it would have been even better.
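To make the gap concrete, here is the sort of mechanism I mean, as a minimal sketch in Python. This is my own illustration, not Utopia's approach: score each candidate Legacy Data Store against weighted selection criteria and pick the best fit for each data item. The stores, criteria and weights are all invented.

```python
# A hedged sketch (not Utopia's mechanism): score each candidate legacy data
# store against weighted criteria and pick the best fit. All names are invented.
def best_source(candidates: dict[str, dict[str, float]],
                weights: dict[str, float]) -> str:
    """candidates: store name -> scores per criterion (0.0 to 1.0);
    weights: criterion -> relative importance."""
    def score(criteria: dict[str, float]) -> float:
        return sum(weights.get(name, 0.0) * value for name, value in criteria.items())
    return max(candidates, key=lambda store: score(candidates[store]))

# Example: is the CRM or the billing system the better source for customer addresses?
print(best_source(
    {"CRM": {"completeness": 0.9, "timeliness": 0.6},
     "Billing": {"completeness": 0.7, "timeliness": 0.9}},
    {"completeness": 0.7, "timeliness": 0.3}))   # -> "CRM"
```

Crude, certainly, but even a crude, agreed scoring mechanism is better than leaving source selection to whoever shouts loudest in the workshop.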
To be honest that is how the paper continues to read. The Project Preparation phase ends with a cut over strategy, a cleansing strategy and a conversion approach, but the explanation of what goes into them is fairly light. A cut over strategy has to be cognisant of more than just the output of data profiling. It needs to take account of local policies on parallel running, project structures, data security, data management strategies etc. It has to understand business constraints on timing.
It has to take account of the training lag. It has to have a plan for archiving data that will not be migrated to the new system but which is needed for ongoing business processes. It has to understand the decommissioning of Legacy Systems. It has to cover business requirements for data audit and data lineage. It has to understand the bigger programme tolerances in the quality / budget / time triangle. It needs fallback and checkpoint planning. And so on.
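For what it's worth, here is a minimal sketch, again my own illustration rather than anything from the Utopia paper or PDMv2, of how those cut over considerations could be held as a structured checklist so the gaps stay visible rather than buried in prose. All the field names are hypothetical.

```python
# A minimal sketch, assuming nothing about Utopia's or PDMv2's own artefacts:
# a cut over strategy as a checklist whose unaddressed items are easy to list.
from dataclasses import dataclass, field

@dataclass
class CutOverStrategy:
    parallel_running_policy: str = ""                      # local policy: big bang, phased, parallel run
    business_timing_constraints: list[str] = field(default_factory=list)
    training_lag_plan: str = ""                            # how the gap between training and go live is handled
    archiving_plan: str = ""                               # data not migrated but still needed by the business
    legacy_decommissioning_plan: str = ""
    audit_and_lineage_requirements: list[str] = field(default_factory=list)
    fallback_and_checkpoint_plan: str = ""

    def gaps(self) -> list[str]:
        """Considerations that have not yet been addressed (empty fields)."""
        return [name for name, value in vars(self).items() if not value]

# Example: a strategy that only covers parallel running still has six open gaps.
print(CutOverStrategy(parallel_running_policy="phased by region").gaps())
```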
Maybe I’m being unfair: a white paper has to be an overview, and my list above is not exhaustive because this is a blog. But as we see when we get to the sections on data validation in the Realisation phase, Utopia will expend 150 words explaining five different types of validation, yet when it comes to resolution of data issues we get 14 words: ‘... failed records are provided to the data and system stakeholders for review and remediation’.
As those who have read my blog before will know, this is the point at which I like to ask ‘And then what do you do?’ The response should not be ‘And then we wait.’ When that is the answer, in my mind’s eye, I see the tumbleweed blowing across the project office as the project sits on its hands waiting for the remediation to happen, somewhere else, by someone else, sometime soon, we hope. Remediation can be long and complex. Few projects fail because the ETL technology fails. Most fail because remediation does not happen quickly enough or thoroughly enough.
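The alternative to waiting is to treat remediation as a managed work queue. The sketch below is a hedged illustration of that idea, with invented record and field names: every failed record gets a named owner and a target date, so the project can chase overdue items instead of watching the tumbleweed.

```python
# A hedged sketch (invented names): failed records enter a tracked remediation
# queue with an owner and a target date, instead of being thrown over the wall.
from dataclasses import dataclass
from datetime import date

@dataclass
class RemediationItem:
    record_key: str
    failed_validation: str   # which validation rule the record failed
    owner: str               # the business data stakeholder accountable for the fix
    target_date: date
    resolved: bool = False

def overdue(queue: list[RemediationItem], today: date) -> list[RemediationItem]:
    """The items the project should be chasing today, rather than waiting on."""
    return [item for item in queue if not item.resolved and item.target_date < today]

# Example: anything unresolved past its target date is visible, named and chaseable.
queue = [RemediationItem("CUST-0001", "referential integrity", "AR team lead", date(2012, 6, 1))]
print(overdue(queue, date(2012, 6, 15)))
```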
All of this is par for the course for an SI centric data migration methodology. Plenty of detail on the aspects within the DMZ that are closest to their role, lots of emphasis on technology but minimal comment on the business side of the problem. Reading between the lines however I suspect Utopia have more in their kit bag than they are showing us.
The Data Migration Methodology is part of Utopia’s Enterprise Data Life Cycle Management framework which includes ‘Business processes, governance practices and applied technologies’. For instance, they almost casually mention workshops for resolving issues without explaining how they work, and they have different classes of data stakeholders without fully explaining their roles. They also speak with an overall authority that suggests they have a better understanding of the relationship between the technical and business streams of activity than this paper gives us.
So, overall, does it silence my criticisms of ASAP? No, it doesn’t. There is too little on business engagement. Even the technical detail is strangely uneven - there is, for instance, no clarity on the difference between individual records that fail migration and whole units of migration that fail. There is no mention of fallback management. There is no legacy system retirement planning. They recognise the issue of latency (the time gap between taking data out of a live system, its resting in a staging area and its going live in the target, during which time changes continue to be made in the legacy) but they don’t address the transitional processing rules needed to keep systems in step (see the sketch below).
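By way of illustration only, and with names that are placeholders rather than anyone's real API, the latency gap amounts to this: changes made in the legacy after the extract have to be identified and replayed into the target under whatever transitional processing rules the project defines.

```python
# A small, hypothetical sketch of the latency gap: changes made in the legacy
# after the extract have to be found and replayed into the target under the
# project's transitional processing rules. Nothing here is a real API.
from datetime import datetime

def changed_since(legacy_rows: list[dict], extract_time: datetime) -> list[dict]:
    """Rows updated in the legacy after the migration extract was taken."""
    return [row for row in legacy_rows if row["last_updated"] > extract_time]

def apply_deltas(deltas: list[dict], apply_to_target) -> int:
    """Replay each late change into the target; returns how many were applied."""
    for row in deltas:
        apply_to_target(row)   # placeholder for the real load or merge step
    return len(deltas)
```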
Indeed the whole approach speaks to a last-generation technology - with leading edge bi-directional synchronisation engines we don’t need staging areas. And so on. My advice would be to read it with one eye on the PDMv2 wall chart and see where their DMZ lies. On the other hand, as a well articulated definition of activity within the DMZ it is fine. With the DMZ so clearly defined, you can graft on around it those elements of PDMv2 that make up the other 50% or more of your project. Then you will see, I think, that there is an excellent fit between PDMv2 and these ASAP derived approaches.
If, of course, you don’t happen to have a copy of the said wall chart to hand then drop me an email at the address below and I will get one sent out to you.
Johny Morris
jmorris@pdmigration.com