Seems like I have been doing quite a bit of thinking and talking about Data Migration testing over the last week or so.

Firstly, I was talking to Phil Howard of Bloor Research the other day and the subject of Data Masking and Test Data Management came up. Then, as last week's blog showed, I have also been thinking about SAP implementations and the limitations of the ASAP method - and again testing (at least as it is portrayed by various vendors) was one of the issues I wanted to address.

First things first - where does testing sit in PDMv2?

Well, those of you who have already had a copy of the poster will know that Test Build & Execution (TBE) is a sub-module within Migration Design & Execution (MDE). However, a quick check on Amazon.com reveals 642 titles related to software testing.

It was not my intention that PDM add a 643rd.

There is plenty of expertise out there - colleagues who have made a career out of testing - and I am not going to belittle their skills. So does PDM treat testing as a black box then? And if it does, how can it make its grand claims of zero-defect migrations?

Answering this is a great example of the integrated nature of PDM. So let’s start at the beginning.

Within the Project Initiation phase of PDM, located in the Migration Strategy and Governance (MSG) module, there is an innocuous deliverable within Strategy called Policies. Now Policies are some of the most important items that you need to get a grip on. Policies are all those well-articulated and tacit business drivers that we have to take cognisance of as we deliver the migration.

Those of you who have been on the courses will know that this is a wide-ranging list that includes Programme Governance Policies (Prince etc.), regulatory policies (Data Protection, Security etc.) and also Testing Policies. It is normal these days, in all but the smallest of IT shops, for there to be a preferred testing strategy, or at least a software test function to open discussions with. They need to be engaged early: quite often test teams and test rigs are booked months in advance. As a project you need to make sure that you understand their timetable as much as anything else.

There is also the issue of the subordinate nature of Data Migration. It is rare that Data Migration is performed for its own sake; normally it is part of a bigger programme. The bigger programme will have its own testing requirements into which Data Migration testing has to fit. There is, of course, some overlap. PDM recognises this, for instance by including User Acceptance Testing (UAT) as part of the System Retirement Plans. Some of this UAT is specific to the migration - are the data lineage requirements being met, for instance?

Other parts of the UAT - has the Business Owner been given sufficient reassurance that the new system will support their needs so that they can sign the decommissioning certificate? - cross the line between new system design and Data Migration. The new business process design has nothing to do with the Data Migration work stream, but the selection, extraction, qualifying, reformatting and loading of legacy data into that process is very much part of it. When the users see the new system, does it hold the data that will allow them to perform their jobs? Can they be reassured that all the data items in the legacy made it to the Target? Is that provable, perhaps to an external auditor (if that is a business requirement)?

Again within PDM, each module contributes to ensuring the quality of the data migrated. The Data Quality Rules (DQR) process starts resolving data issues in the Landscape Analysis module, which can be kicked off before the target has been designed or even selected. The DQR are maintained, extended and monitored throughout Gap Analysis and Mapping, Migration Design and Execution and Legacy Decommissioning. When the data is committed for migration there should be no doubt about which items will successfully migrate and which will fall out.

As an aside, allowing records to fall out during migration is a perfectly acceptable way of handling data issues. Sometimes an issue is impossible to fix in source for technical (usually data structure) reasons and extremely expensive and complicated to fix in flight, so it is better to let the records error out and load them manually later. But you have to know what will fall out, how many, why and what you are going to do with them. Throwing data at the Target to see what sticks is not the same as a planned fall out as part of a DQR.
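
By way of illustration only - the rule IDs, fields and records below are mine, not PDMv2 artefacts - here is a short Python sketch of what a planned fall out step might look like: every rejected record is counted and carries the reason it fell out.

```python
# Illustrative only: rule IDs, fields and records are invented for the example.
from collections import Counter

def split_for_migration(records, rules):
    """Route records failing any rule to a planned fall-out list,
    recording which rules they failed, rather than losing them silently."""
    to_load, fall_out = [], []
    for rec in records:
        failed = [rule_id for rule_id, check in rules.items() if not check(rec)]
        if failed:
            fall_out.append({"record": rec, "failed": failed})
        else:
            to_load.append(rec)
    return to_load, fall_out

# One made-up rule: every account must carry a postcode.
rules = {"DQR-042 postcode present": lambda r: bool(r.get("postcode"))}
records = [{"id": 1, "postcode": "SW1A 1AA"}, {"id": 2, "postcode": ""}]

to_load, fall_out = split_for_migration(records, rules)
print(len(to_load), "to load;", len(fall_out), "planned fall-out")
print(Counter(f for item in fall_out for f in item["failed"]))  # how many, and why
```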

In an ideal world, we would all be using integrated software packages that reused the rules uncovered in profiling in the data quality tool, which in turn performed the front-end validation logic for the migration controller. Then, provided we ensure via the DQR process that the complete set of business and technical validation requirements is captured in the data quality tool - the tool we have been using all along to monitor data readiness - come the day of the migration we can be confident that all the data will migrate (given the caveat above about planned fallout).
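
To make the idea of rule reuse concrete (with invented rule names and fields - a sketch, not a product recommendation), the same rule definitions can feed both the readiness figure we track during analysis and the validation the migration controller applies at load time:

```python
# Illustrative only: the rules and fields are invented for the example.
RULES = {
    "DQR-007 email format": lambda r: "@" in (r.get("email") or ""),
    "DQR-012 status known": lambda r: r.get("status") in {"ACTIVE", "CLOSED"},
}

def readiness(records):
    """Share of records passing every rule - tracked as migration day approaches."""
    if not records:
        return 1.0
    return sum(all(check(r) for check in RULES.values()) for r in records) / len(records)

def validate_for_load(record):
    """The migration controller applies exactly the same rules at load time."""
    return [rule_id for rule_id, check in RULES.items() if not check(record)]

sample = [{"email": "a@b.com", "status": "ACTIVE"}, {"email": "none", "status": "?"}]
print(readiness(sample))             # 0.5 - not ready yet
print(validate_for_load(sample[1]))  # both rules reported against the failing record
```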

We also, of course, look for the added reassurance that our analysis, design and build haven't slipped up anywhere, which is where formal testing comes into it. But within PDM there are more backstops than a successful test run to make sure the migration will work: each module contributes in an integrated way to migration success.

There are things that only formal testing will reassure us of, however. Firstly there are the physical linkages and the non-functional elements like speed, capacity, organisation etc. Secondly there is the question of whether our analysis, design and build really have captured all the issues (and not inadvertently created others).

All of which is a long way round to addressing Phil's questions. When it comes to Data Masking - this is the testing technique (sometimes known as Anonymising) of taking live data but amending it so that it can be used by the test team without fear of real customer data leaking out of the company. It is especially important given data security concerns, particularly when development is off-shored outside of Europe.

Without going over old ground, there are legal difficulties in personal data from within Europe being exported to countries that do not have the same legal protections. One way round this is to mask the elements that would allow identification of the data subject (name, some address details etc.). Of course you then end up with data that may be conformant when masked but not when unmasked (I'm thinking of structured address data).

Where PDMv2 helps in this case, aside from the multiple layers of data quality assurance illustrated above, is that it will have included from the beginning the policies on data security that underpin the very earliest testing strategy conversations. It won't be a late discovery that using live data outside of the control of the enterprise is a problem. If PDM has been used from the outset these restrictions will have been part of the tender documents by which the Systems Integrator was selected and their testing capabilities assessed.

On the issue of Test Data Management - which concerns itself with fabricating data for test purposes rather than taking live data as the basis - the comprehensive and integrated nature of the DQR, GAM and MDE modules means that we can give our test experts a comprehensive list of DQR against which the incoming data must be tested. PDMv2 is metadata rich. Used in its purest form there will be multiple levels of Data Model to inform the structure of test data. This allows the test team to build their test data with all the appropriate flaws that need to be tested for, not the 'perfect flight path' data that is often produced for test rigs.
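
A hedged sketch of the point, with made-up field names and flaw patterns: fabricated test data can be seeded with exactly the defects the DQR register says we need to catch, rather than being uniformly clean.

```python
# Illustrative only: fields, rule IDs and flaw patterns are invented for the example.
import random

def clean_record(i):
    return {"id": i, "name": f"Customer {i}", "postcode": "LS1 4AP",
            "email": f"customer{i}@example.com"}

# One deliberate defect per DQR the migration is supposed to detect.
FLAWS = {
    "DQR-021 missing postcode": lambda r: {**r, "postcode": None},
    "DQR-034 malformed email":  lambda r: {**r, "email": "not-an-email"},
}

def build_test_set(n, flaw_ratio=0.2, seed=1):
    """Fabricate n records, a proportion of which carry a known, named flaw."""
    rng = random.Random(seed)
    records = []
    for i in range(n):
        rec = clean_record(i)
        if rng.random() < flaw_ratio:
            rec = rng.choice(list(FLAWS.values()))(rec)  # inject a known flaw
        records.append(rec)
    return records

# Roughly a fifth of the fabricated records should be deliberately flawed.
print(sum(r["postcode"] is None or "@" not in r["email"] for r in build_test_set(100)))
```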

To come back to the other thoughts that prompted this week’s blog, there appears to be an increasing reliance on full load testing in lieu of the earlier detection of data issues that PDM is built around. I have a number of issues with this.

As we have seen when it comes to data masking, even where the testing is being carried out in-house, there can be restrictions on the use of pure legacy data because of data security issues. But as soon as you mask, you are no longer using the real data, so issues can be hidden that only re-appear in the live run. An example from the UK would be postcode data. If you are masking identity then obviously the postcodes - which can uniquely identify a client - would need to be masked. However it is likely that the masking would create 'dummy' postcodes that would pass testing, so any invalid postcodes get masked away.
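
A small illustration of the trap - the masking rule and the simplified postcode shape check below are both invented for the example:

```python
import re

# Simplified UK postcode shape check - real validation is more involved.
POSTCODE_SHAPE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$")

def mask_postcode(_original):
    """Replace every postcode with a syntactically valid dummy."""
    return "ZZ99 9ZZ"   # identifies no one - and hides anything that was wrong

legacy = ["SW1A 1AA", "NOT A CODE", ""]        # one good value, two bad ones
masked = [mask_postcode(p) for p in legacy]

print(sum(bool(POSTCODE_SHAPE.match(p)) for p in legacy))  # 1 - the true state of the data
print(sum(bool(POSTCODE_SHAPE.match(p)) for p in masked))  # 3 - the test rig sees no problem
```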

There are also often issues of scale. Does your client have the spare server capacity to take a full cut of the live system, run it through a full copy of the migration suite and load it onto a fully working version of the target? Anything short of that and you risk only finding out the truth on go live.

Even if those two are not a problem there is the issue of timing. To be realistic the test run has to be against a working version of the target. This means completing development but then waiting months, with all the consequent licence and development staff costs, whilst the first few cycles of test load run through, spitting out maybe half of the legacy records. And this is if you are lucky. I've seen many migrations where on the first cycle zero records migrated. This was not down to connection issues - the run was not abandoned before it had properly started - it was down to data gaps and validation failures.

This issue is exacerbated when the recursive nature of this kind of approach is considered. One data issue can be masking a second, which masks a third, and so on. The first migration cycle uncovers the first set of issues. These are fixed - then all the legacy records fail to load again. I was called in to look at a migration where, after the fifth cycle, zero records had migrated but more and more issues were appearing that made the sixth, seventh and eighth cycles just as unlikely to be successful.

So you either allow considerable elapsed time in your plan between the main work of the programme finishing and go live, or you risk the almost certain failure of the programme to hit its deadlines.

But this test-against-the-whole-data-set approach to data migration is one that I am seeing more and more in proposals as the recommended mainstay of the testing strategy.

Now I don't want to give the impression that performing full trial migrations is a bad thing. On the contrary, they are an excellent way of proving that the migration is going to work and of removing niggles. (I was told about a trial migration recently that had to be aborted because, on one of the coldest weekends of the year, no one had realised that the office heating was turned off automatically at 7:00pm on a Friday, and no one had the number of the janitor who knew how to turn it back on again. After struggling against sub-zero temperatures it was agreed that typing with gloves on just wasn't going to hack it and the risk of hypothermia was growing all too real.)

But it should be niggles. After your initial link test, the first migration cycles should flow through with minimal defects. Not data spilling out all over the place - that reassures no one.

And for those of you who still have a blank space on the wall that could be filled by a full colour PDMv2 poster (and possibly demystify some of the above), please drop me a line to the email account below and we will gladly forward one on to you.

Johny Morris
jmorris@pdmigration.com