The story underlines the value of good test preparation and shows that a structured, systematic approach is the ally, not the enemy, of pragmatic testing.

The project in question was one of acute urgency, both in terms of the business activities involved and, consequently, the demands placed on the systems development team. Time did not permit the orderly definition of requirements followed by design, code and test phases.

Requirements were defined piecemeal and were almost invariably required "tomorrow". The usual approach was to define a requirement one day, code and test it the next, and implement it for the following day. Unit Testing of each change was performed (in isolation), but there was no time for any coherent, system-level testing.

Nonetheless, a cycle of system-level functional testing was planned, albeit after the code had been put into production. I set about preparing the test plan following my standard approach.

Put simply, this involves defining Test Conditions (statements of what the system should or should not do), determining Test Steps (details of how the Test Conditions will be proved) and writing a Test Plan (the "batting order" in which the various test actions will be performed).
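For the technically minded, the sketch below shows one way these artefacts might be represented. It is purely illustrative, written in Python, and the class and field names are my own invention rather than part of any formal method.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestCondition:
        """A statement of what the system should (or should not) do."""
        ident: str
        statement: str

    @dataclass
    class TestStep:
        """Details of how one or more Test Conditions will be proved."""
        ident: str
        action: str
        expected_result: str
        conditions_proved: List[str] = field(default_factory=list)  # TestCondition idents

    @dataclass
    class TestPlan:
        """The 'batting order' in which the test actions will be performed."""
        name: str
        steps: List[TestStep] = field(default_factory=list)

    # Illustrative usage: one step proving two conditions
    plan = TestPlan(
        name="System-level functional test",
        steps=[
            TestStep(
                ident="S1",
                action="Run the monthly management information report",
                expected_result="Report appears in the agreed layout and sort order",
                conditions_proved=["C1", "C2"],
            )
        ],
    )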

When I had prepared the Test Conditions and Test Steps, and had almost finished documenting the Test Plan, I sat down with the Project Manager to discuss the execution of the testing in detail. I estimated that the test execution would take two weeks (starting in a week's time), and advised him that at least one user and one developer would be needed on a more or less full-time basis.

This caused a problem. Not only would the two weeks take us virtually to the end of the project (after which much of the system functionality would no longer be required), but there was not a snowball's chance in hell of diverting any user resources from the critical business activities on which the company's survival depended. Testing, as I had envisaged it, was simply not going to happen.

I was reluctant, however, simply to walk away with nothing gained but another sorry tale of testing scuppered by business imperatives. An alternative suggested itself. Since the system was already in production, why not just ask the users if the Test Conditions were true?

This "anecdotal testing" would provide at least some information about the quality of the system. As regards the many management information reports which the system produced, checking layout and sort-order Test Conditions could be done as well if not better from production copies as from test ones. Above all, it would only take a couple of days and could start immediately.

I therefore interviewed two users (one from the administration area and one whose interest was in management information) and two developers (an applications chap and a "techie"). I went through the Test Conditions with each of them separately, asking whether each Condition was "true", "false" or "don't know".

I then collated the results, resolved the inevitable discrepancies, and produced a formal Sign Off Report summarising the test results. Interestingly enough, the summary figures (77.2% of Test Conditions "true"; 12.4% "false"; 10.4% "don't know") are roughly in line with what I would expect from a "proper" cycle of test execution for an equivalently sized system.
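Purely to illustrate the arithmetic (this Python sketch is mine, not anything used on the project, and the answers shown are hypothetical), the collation amounts to nothing more than a tally and three percentages:

    from collections import Counter

    # Hypothetical collated results: for each Test Condition, the answer
    # agreed after resolving discrepancies between the four interviewees.
    collated_answers = {
        "C1": "true",
        "C2": "true",
        "C3": "false",
        "C4": "don't know",
        # ... one entry per Test Condition
    }

    tally = Counter(collated_answers.values())
    total = sum(tally.values())

    for verdict in ("true", "false", "don't know"):
        pct = 100.0 * tally[verdict] / total
        print(f"{verdict:>10}: {tally[verdict]:3d} ({pct:.1f}%)")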

I was thus able to provide some measurement of the system's quality, to identify some specific faults, and to document some low-level requirements which had been omitted, without actually executing the tests I had intended to. Obviously, Test Conditions which stated that "a report will contain all such-and-such data" could not be checked with scientific precision.

The test results had to rely on the developer's statement (and the users' belief) that the data was correctly extracted. However, we know that the majority of faults in production systems can be attributed to requirement errors rather than to developer mistakes.

Given that the main developer was both competent and experienced, system errors were much more likely to arise because the wrong data had been requested than because the required data was not being extracted from the database. (A subsequent project panic confirmed that this was indeed the case!) The "anecdotal" approach was a perfectly reasonable way of "looking for errors in a likely place".

I certainly do not suggest that this approach should be adopted as a standard. I would be extremely reluctant ever to propose it when defining a testing strategy.

Nonetheless, I think it is a useful additional weapon in the tester's armoury, suitable for defensive use where project timescales would otherwise dictate the abandonment of testing (though perhaps one best kept secret until it is needed).

Chris Allen