Firstly though, the straw poll: the responses from folks who have written in, and from those who have entered into dialogue in the comment boxes below, have been interesting. The comments of those who emailed me directly show a heavy preponderance of questions about structuring the relationship with suppliers. Those who have been more public have been more concerned with the relationship with our business colleagues.
Taking these in reverse order, David Lloyd-Williams expressed it eloquently when he wrote:
“One of the biggest challenges - and keys to success - is getting the buy-in of the people in the business that "own" the data.”
And he and the other contributors to the discussion were right when they listed the natural outcomes of this difficulty: not getting the best staff volunteered to support your project; poor design and testing of mappings; unreasonable expectations of the quantity of data (usually history) that needs to be migrated; underestimating the quality challenges; poor testing and so on. (Thank you to Richard Bateman and Chris Hillman for their contributions - I hope I haven’t précised your remarks too imprecisely.)
Well of course PDM has answers to all these issues, and they don’t involve more executive sponsorship. Executives, just like everyone else, have an awful lot to do. Experienced middle managers - the ones we have to convince to work with us - know this only too well. They can nod and smile and say all the right things in the presence of the great and the good, but once the spotlight of executive attention moves elsewhere they go back to their own priorities. And these priorities are usually not wrong insofar as the business is concerned: they are generally aligned to the routine goals of the enterprise. Unless you want to be forever knocking on the sponsor’s door, and having a relationship with your business colleagues about as popular as the Child Catcher’s with the children in Chitty Chitty Bang Bang, there are better ways of going about it.
So I think there will be a session on Business Engagement, specifically in the Data Migration environment. I may even share the secret of The Three Motivations that govern everything we do at work. After this you will all be equipped to turn what was previously a problem into a golden opportunity to forge lasting (good) relations with your business colleagues.
Now on to the issue raised, as I might put it, in a more sotto voce tone - the delicate one of our relationship with our principal suppliers on a data migration project (or, to put it the other way around, having worked both sides of the fence, the way we manage our relationship with often over-expectant clients).
I am in two minds about this. The first option is a fairly narrow look at the engagement contract - what are the pitfalls and ambiguities to avoid, on both sides. The second is to take a more general look at the ideal supplier relationship management scenario, which should then be encapsulated in the contract as far as that goes, and in the day-to-day management of the project for the rest. The trouble with the first is that I am no lawyer and therefore in no position to give legal advice, nor would I want to risk doing so. The second could get very large (but there again so could looking at Business Engagement).
Please let me know what you think by the usual means - the public pulpit of comments box or the confessional privacy of email.
Now on to Enablesoft and their Foxtrot product. Keyboard emulators and screen scrapers, as they are known, have been around a long while. I completed my first significant Data Migration using one back at the tail end of the last century, when even Informatica was in its youth. Most IBM mainframe green-screen 3270 terminal emulators had (and still have) the capability to re-run programmed key strokes incorporating data files, so that batch loads and processing could be carried out whilst the operator was absent. Similarly, a great many companies still stitch together their business processes by running the emulators and snatching data out of screen fields.
Of course the great benefit of keyboard emulators is that they put the data in through the front end and so go through all the validation applied to data entered by a human agent. The downside is that, although the data they enter may hit the back end validated and correct from a technical point of view, badly written scripts - or even well written but badly tested ones - can run off over several unanticipated screens, leaving a trail of confusion behind them until they finally crash.
We also, these days, have our Informaticas and Talends etc., which are so much quicker and more richly featured, so why slow down the load process with software that can only run at a lowly speed measured in dozens of transactions a minute? Well, everything it seems comes around again. We are often now reduced to a snail’s pace, writing our data to our target via APIs that are poorly designed for mass insert, on databases that we are not allowed to tweak with all the go-faster tricks we used to use to get masses of data in quickly. So speed is becoming less of an advantage for the newer technology.
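Back-of-envelope arithmetic makes the trade-off concrete. A minimal sketch - all rates here are illustrative assumptions, not benchmarks of any product:

```python
# Rough load-time comparison: a front-end keyboard emulator running at
# "dozens of transactions a minute" versus an ETL tool throttled by a
# poorly designed mass-insert API. Both rates are invented for illustration.

def load_hours(records: int, tx_per_minute: float) -> float:
    """Hours needed to push `records` through at a steady transaction rate."""
    return records / tx_per_minute / 60

RECORDS = 1_000_000

emulator_hours = load_hours(RECORDS, tx_per_minute=40)       # keyboard emulator
throttled_api_hours = load_hours(RECORDS, tx_per_minute=200) # throttled API path

print(f"emulator:      {emulator_hours:6.0f} hours (~{emulator_hours / 24:.0f} days)")
print(f"throttled API: {throttled_api_hours:6.0f} hours (~{throttled_api_hours / 24:.0f} days)")
```

At these assumed rates the gap is days, not weeks - which is why, once the target's API becomes the bottleneck, the raw-speed argument against front-end loading weakens considerably.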
So where does Foxtrot sit in relation to these challenges?
Well, first off it is nothing like a 3270 green-screen emulator. Foxtrot expects to be working with web as well as other screen form types. It learns key and mouse strokes as you make them and so does not require extensive programming support (although the possibility of extending the scripts with conditional statements for error handling and more complex processing means that programming is possible). It will even seek out the right fields if there has been a change of screen format. It has built-in screen checking, so it will error out if it finds itself on an unexpected screen. It has extended error processing (which will include an email option in the next release) that allows you to roll back the transaction you are engaged in (if that is possible - after all, it is the target that determines when the data is committed). All this comes in a neat-looking Windows-styled package.
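The screen-checking behaviour is worth pausing on, because it is what separates a safe replay from the "trail of confusion" described earlier. Here is a toy model of the idea - this is my own hypothetical sketch, not Foxtrot's actual engine, and the step format and exception name are invented:

```python
# Toy record-and-replay loop with built-in screen checking: each step
# declares which screen it expects before sending keystrokes, so a changed
# or unexpected screen stops the run instead of typing into the wrong form.

class UnexpectedScreenError(Exception):
    """Raised when the replay finds itself on a screen it did not expect."""

def replay(steps, get_current_screen, send_keys):
    """Replay (expected_screen, keystrokes) pairs, erroring out early
    rather than running blind across unanticipated screens."""
    completed = []
    for expected_screen, keystrokes in steps:
        actual = get_current_screen()
        if actual != expected_screen:
            raise UnexpectedScreenError(
                f"expected {expected_screen!r}, found {actual!r} "
                f"after {len(completed)} step(s)")
        send_keys(keystrokes)
        completed.append(expected_screen)
    return completed
```

A real product would add the field-seeking and rollback described above; the point of the sketch is only that verifying the screen before every batch of keystrokes turns a silent mis-entry into a clean, early failure.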
Where would it fit in our armory though? Well, although we would probably want our main load programs written in a robust, industry-standard ETL, I could see it being used for all those really awkward data situations where putting data in through the front end is preferable to writing complex migration code. And often, although these edge cases are small in percentage terms, they can be large in actual numbers - 0.5% of a million Units of Migration is still 5000 UoM. Secondly, for extracts of data for testing or data quality checking, I could see this being useful, especially where we want some volume but have no easy, direct access to the data any other way.
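That edge-case arithmetic is easy to make concrete. A hypothetical sketch of splitting a migration population between the bulk ETL path and a front-end path - the predicate and the split function are invented for illustration:

```python
# Hypothetical routing of Units of Migration (UoM) between the bulk ETL
# path and a front-end (keyboard-emulation) path for the awkward cases.
# `is_edge_case` stands in for whatever predicate the design team agrees.

def split_population(units, is_edge_case):
    """Partition units into (bulk_etl, front_end) lists."""
    bulk_etl, front_end = [], []
    for unit in units:
        (front_end if is_edge_case(unit) else bulk_etl).append(unit)
    return bulk_etl, front_end

# 0.5% of a million UoM still leaves 5,000 records for the front-end path.
population = range(1_000_000)
bulk, edge = split_population(population, lambda u: u % 200 == 0)
print(len(edge))  # 5000
```

Small in percentage terms, but 5,000 records is a real workload - enough to justify a tool rather than re-keying by hand, yet perhaps not enough to justify complex migration code.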
Would we want it for our main migration tool? I’m not sure except where the budget was low and the migration relatively small scale.
On the other hand, how about its use by the less technologically sophisticated to produce local applications that take some information from a web source and mix it with local data to deliver a business process? I understand that Foxtrot allows you to export the programmed deliverable for use within other applications, although I was not shown how this works and I do not know what the licensing arrangements are for it either. However, in these days of tight IT budgets and limited access to programming resources, I can see it making its way into local IT solutions.
I will be getting my hands on the product in the next few weeks and will let you know how I get on with it. But it is a blast from the past, suitably brought up to date, that maybe we should look at again as part of our toolkit. This is especially the case at an entry price where a single use of it would pay for itself in saved programming resources.
About the author
John Morris has over 20 years’ experience in IT as a programmer, business analyst, project manager and data architect. He has spent the last 10 years working exclusively on data migration and system integration projects. John is the author of Practical Data Migration.