Using their pithily named Informatica Database and Enterprise Application Archiving Solution, they are pushing the notion of planning, up front, the kind of access that will be needed to legacy data in a data migration, and of using components from their product set to manage and complete the task. As they suggest, in most cases, when it comes to data migration, less is more.
In other words, the more you migrate, the more data analysis you must perform, the more data quality issues you will uncover, and so on. This is compounded in large legacy data stores, because the older data items are quite likely to have been stored under different validation rules and different business structures. So the more you want to migrate, the proportionately more you have to pay per bit.
Using their Archiving Solution (and guys please come up with a shorter name) they suggest a number of possible solution scenarios.
They can provide transparent access through the same software front end to data stored at a different (presumably cheaper) location. I’m not sure how much utility there is in this from a data migration perspective. One of the cost savings of a new implementation is in the recovery of legacy licence costs; well, if you retain the same front end then that saving is not going to be made.
A second use of their software would be to turn the less-used data into a more space-efficient, compressed form. This is more like it. There is always tension between the technologists, who would want to move, well, nothing if they could get away with it, and the business, who would like years and years of history. Using other Informatica tools, plus input from the business via the System Retirement Plans, a usage profile can be generated. If very old data is only needed for the odd legal challenge or a possible regulatory inspection, then archiving it to a near-line state is often more than adequate.
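To make the idea concrete, here is a minimal sketch of that usage-profile split: rows last touched before a cut-off date go to a compressed, near-line archive, and the rest stay in scope for migration. The record shape, the field names and the cut-off date are all hypothetical; a real project would take the cut-off from the System Retirement Plans rather than hard-coding it.

```python
import gzip
import json
from datetime import date

# Assumed cut-off; in practice this comes from the business usage
# profile gathered through the System Retirement Plans.
ARCHIVE_BEFORE = date(2005, 1, 1)

def split_by_usage(rows):
    """Partition rows into those to migrate and those to archive."""
    migrate, archive = [], []
    for row in rows:
        last_used = date.fromisoformat(row["last_used"])
        (archive if last_used < ARCHIVE_BEFORE else migrate).append(row)
    return migrate, archive

def write_near_line_archive(rows, path):
    """Store rarely used rows in a compressed but still readable form."""
    with gzip.open(path, "wt", encoding="utf-8") as fh:
        for row in rows:
            fh.write(json.dumps(row) + "\n")

# Hypothetical legacy rows carrying a last-accessed date.
rows = [
    {"id": 1, "last_used": "1999-06-30"},
    {"id": 2, "last_used": "2012-03-01"},
]
migrate, archive = split_by_usage(rows)
write_near_line_archive(archive, "/tmp/legacy_archive.jsonl.gz")
```

The point is only that the partitioning decision is data-driven, not that the storage format matters; a vendor tool would handle the compression and indexing for you.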
Because of the integration with other Informatica tools, like Informatica Data Discovery, access to the legacy data could be on the ad hoc basis that this kind of business need would require. The compressed format is also accessible via ODBC etc. interfaces, so I guess just about any reporting tool could access it.
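Since the compressed archive is exposed over ODBC-style interfaces, any DB-API client can run the kind of ad hoc query a legal or regulatory request would need. The sketch below uses an in-memory SQLite database purely as a stand-in for the archive connection (an assumption on my part); with a real archive you would connect through the vendor's ODBC driver instead, and the table and column names here are hypothetical.

```python
import sqlite3

# Stand-in for an ODBC connection to the archive store; a real setup
# would open the vendor driver (e.g. via a DSN) rather than SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE archived_orders (order_id INTEGER, placed TEXT)")
conn.executemany(
    "INSERT INTO archived_orders VALUES (?, ?)",
    [(101, "1998-01-15"), (102, "2001-07-09")],
)

# An ad hoc query of the kind a regulatory inspection might require.
cur = conn.execute(
    "SELECT order_id FROM archived_orders WHERE placed < '2000-01-01'"
)
old_orders = [row[0] for row in cur.fetchall()]
```

Because the access path is plain SQL over a standard interface, just about any reporting tool should indeed be able to sit on top of it.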
A couple of issues come to mind here, but none of them show-stoppers. For instance, if the data was originally held in a COTS package, the data structures could be awesomely complicated. Extraction of the required values for migration may well have been through a proprietary API, and that will have disappeared along with the original software. So the archive data structure would need to be created at migration time. And this, of course, means that some of the data quality issues you are trying to escape would still (potentially) have to be tackled, although validation could be relaxed considerably for this data set.
Data lineage issues would also have to be surmounted in the archive just as they are in the new target system. But this would be true whatever you do with the legacy archive.
However, I applaud Informatica for once again showing that they have a growing understanding of the data migration space and are thinking about integrated solutions. I also have a challenge for them:
Quite often, in a data migration project where data is being drawn from hundreds of legacy systems, there is a requirement to create consolidated lists of certain reference data items. These may be all the product types and codes, for instance. These consolidated lists often cease to be of use after the project is over, because the new target becomes the master. So, come on Informatica, explain how, with your growing presence in the Master Data Management space, we could utilise some of that technology and know-how to create temporary mini-MDM platforms for the project duration only.
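To show the shape of the problem, here is a toy consolidation of product-type codes from several legacy systems into one master list that also records which systems supplied each code, since that provenance is what you need when reconciling back to the sources. The system names and codes are invented for illustration; a mini-MDM platform would add matching, survivorship rules and stewardship on top of this.

```python
from collections import defaultdict

def consolidate(code_lists):
    """Merge per-system reference lists into one master list,
    recording which legacy systems supplied each code."""
    master = defaultdict(set)
    for system, codes in code_lists.items():
        for code in codes:
            master[code].add(system)
    return dict(master)

# Hypothetical product-type codes from three legacy systems.
legacy = {
    "billing": ["STD", "PRM"],
    "crm": ["PRM", "TRIAL"],
    "warehouse": ["STD"],
}
master = consolidate(legacy)
```

Once the new target goes live and becomes the master, a structure like this can simply be retired along with the rest of the project tooling, which is exactly why a temporary, project-lifetime MDM capability would be so useful.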
As I await the reply to that one, I have to thank Informatica for pushing the boundary of the DMZ eastwards, so that it now covers part of Legacy Decommissioning. This occurs just as I am about to take delivery of a print run of A2-sized PDMv2 posters. Thanks a lot!
About the author
John Morris has over 20 years' experience in IT as a programmer, business analyst, project manager and data architect. He has spent the last 10 years working exclusively on data migration and system integration projects. John is the author of Practical Data Migration.