A Testing Time

When timeframes are squashed and the prospect looms of having to release software that has been inadequately tested, there are still escape options.

User Acceptance Testing (UAT) is critical to the successful introduction of any new software system. Yet, repeatedly, the industry releases software into production that is unfit for purpose. The consequences range from the inconsequential to the catastrophic: from mild embarrassment and loss of face to public outcry and loss of life.

Despite the amount of time we invest in producing detailed UAT plans, actual events have a habit of ignoring our carefully constructed and logical strategies and throwing large spanners into the works. After all, the one real constant in planning is its inconstancy! But how does it all go wrong? Let's look at what can happen.

Between a Rock and a Hard Place

Developers, we are told, must produce quality code in the first place - and so they should - but this doesn't take account of the environment in which most User Acceptance Testing takes place. Many critical events occur during UAT, but two are extremely significant:

The delivery date of testable software to the UAT team (the "Rock")
The release date for tested software to go to production (the "Hard Place")

Test strategies and plans incorporate the time window between these events. We identify, allocate, and organise resources against this window but, in practice, it has a nasty habit of shrinking.

Late software delivery may occur because the development team underestimates the scale, the degree of difficulty, or the technical complexity of the development. However, it is more common, especially in these days of Rapid Application Development, for the definition of software requirements to be deferred or changed belatedly.

This means at best some new code or at worst changes to, or complete rewrites of, existing code. Occurring at this late stage, there is a real probability of compromising the content of the planned release, the original planned delivery date to UAT - or both.

On the other hand, the release date into production is ostensibly carved in granite. Perhaps directors or process owners have made personal commitments or public pronouncements on the availability of the new software system.

Whatever the case, the senior management line - initially at least - is that everything must be delivered on the originally agreed date, even if the detailed requirements subsequently change. Such insistence, in many cases, could be considered at best ostrich-like with head in the sand, and at worst King Canute-like in its disregard of reality.

Something's Gotta Give

I could blather on about best practice in UAT - and frequently do. But if it's too late to employ such techniques and the cave walls are relentlessly closing in on the imperilled UAT team in true Indiana Jones style, what are the viable last-minute escape options?

Deliver LESS FUNCTIONALITY: Maintain the production release date but de-scope the contents of the release and defer less critical functionality to later releases. Although not always possible, this option allows more time for developers and testers to get part of the new system into production - with a good chance of it being robust - whilst maintaining a potentially compromised release date.

Deliver LESS FULLY TESTED software: Reduce test coverage and concentrate on critical functions required by the majority of users of the new system.

The UAT team can always do more testing later, so provided the essential functionality is tested and there are no system crashes, then a system - with or without workarounds for minor defects - can be released into production with a fair degree of confidence. Although not the best option, it does help maintain the required delivery date.
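Concentrating coverage on critical functions is, in effect, a risk-based prioritisation: score each test case by business impact and likelihood of failure, then run the highest-risk cases that fit the remaining test window. The sketch below illustrates the idea only; the scoring scale, weights, and test-case names are invented assumptions, not a prescribed method.

```python
# Illustrative sketch of risk-based test selection for a shrunken UAT window.
# All case names, scores, and effort figures are made up for the example.

def prioritise(cases, hours_available):
    """Return test-case names to run, highest risk first, within the time budget."""
    # Risk score = business impact x likelihood of failure (both assumed 1-5 here).
    ranked = sorted(cases, key=lambda c: c["impact"] * c["likelihood"], reverse=True)
    selected, used = [], 0.0
    for case in ranked:
        if used + case["hours"] <= hours_available:
            selected.append(case["name"])
            used += case["hours"]
    return selected

cases = [
    {"name": "login",        "impact": 5, "likelihood": 4, "hours": 2},
    {"name": "payments",     "impact": 5, "likelihood": 5, "hours": 4},
    {"name": "report-fonts", "impact": 1, "likelihood": 2, "hours": 3},
    {"name": "audit-trail",  "impact": 4, "likelihood": 3, "hours": 3},
]

# With only 9 hours left, the low-risk cosmetic case is the one deferred.
print(prioritise(cases, hours_available=9))
```

The point of such a list is less the arithmetic than the conversation it forces: it makes visible exactly which tests are being sacrificed to hold the date.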

Deliver LATER: Delay the production release date sufficiently for satisfactory development and Acceptance Testing to take place. This is clearly the best option as far as software developers and testers are concerned. However, this is often a very difficult sell, and justifications must be well made.

Senior managers understand money and appreciate the risk of public failure. Help them by comparing the cost of releasing incomplete and under-tested software - and the attendant adverse publicity if it were to crash in spectacular style - with any loss of business or competitive edge resulting from a delivery slippage.
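That comparison can be framed for management as a simple expected-cost calculation: the cost of each option plus the probability of a spectacular failure multiplied by what that failure would cost. Every figure below is invented purely to show the shape of the argument; real numbers would come from the business.

```python
# Illustrative sketch: expected cost of releasing on time under-tested
# versus slipping the date. All figures are invented for the example.

def expected_cost(fixed_cost, failure_probability, failure_cost):
    """Fixed cost of the option plus the probability-weighted cost of failure."""
    return fixed_cost + failure_probability * failure_cost

# Option A: hold the date with reduced testing - no direct cost,
# but (assumed) a 30% chance of a 2,000,000 public failure.
on_time = expected_cost(fixed_cost=0, failure_probability=0.30, failure_cost=2_000_000)

# Option B: slip four weeks - 250,000 in lost business (assumed),
# but only a 5% chance of the same failure.
slip = expected_cost(fixed_cost=250_000, failure_probability=0.05, failure_cost=2_000_000)

print(f"Release on time: {on_time:,.0f}")  # 600,000
print(f"Slip the date:   {slip:,.0f}")     # 350,000
```

Put this way, a slippage can stop looking like a cost and start looking like insurance - which is usually an easier sell.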

Seek HELP: When encountering an unexpected hurdle, the advice and support of someone not directly involved might make the difference between the next leap towards the finish line - or a fall.

So the moral of the story is to be prepared for testing strategies to be compromised and, unlike Indiana Jones, don't improvise: always have escape options up your sleeve.

Alex Blackie, Marlborough Stirling