Although there is some evidence that users are having more success with automated testing tools than was the case a year or two ago, the majority of users still report difficulties in using them effectively.

These difficulties arise not from inherent flaws in the tools, but from a mismatch between the process that the tools support and the process that the users need to follow.

The majority of tool vendors pay little attention to the process their customers will need to follow, apparently regarding it as either obvious or unimportant. The minority who do address the issue explicitly suggest a process which fits snugly around their product but lacks any vision of the wider objectives of system development projects.

The system development process must include the following steps: requirements analysis, specification, design, coding, several levels of testing, and maintenance and production.

This is true whether the project is developing a new application or enhancing an existing one, whether it is to be implemented in a third- or fourth-generation language, whether it uses a classical approach or a RAD-like approach, and whatever its field of application. It is only the relative size of these steps that varies.

Ideally we want to check every step as we take it to avoid expensive re-work later. As the purpose of testing is to give us confidence that our applications are fit for their purpose, and to show us how we can improve them, I regard the validation of all the steps in the production process as part of testing.

If we look at all the activities which we could undertake in order to test or validate the results of each stage in the production process, we have a very long list. It is clear that the current generation of testing tools leaves many of these activities unsupported. Of course we do not have enough time or resources to conduct all these activities on every project.

So we adopt a process which includes a selection of them, and which will give us an acceptable level of confidence and feedback, within our resource constraints. Or I should say that we hope we will be in this happy position!

When we look at the processes that the testing tool vendors recommend, we find that they are too limited in their scope; they are too vague; they are too idealistic (for example, not catering for the prioritisation of errors for deferred fixing in subsequent versions of the application); they fail to support multiple versions of an application; they do not place controls on the modification of tests to match the observed behaviour of the code; and in some cases they do not even recognise that program development is iterative.

However, it is also clear that nobody, including the theorists who have proposed model processes in software engineering, has captured the full intricacy of system development. The vendors' processes all contain some nuggets of wisdom which we can usefully adopt in defining our own processes.

Unfortunately, when we do this we find that we have a process which does not match any commercial tool, and hence we return to the problem that I mentioned at the start of this article.

During the course of our research we have found that the commercially available tools do not provide an even level of support across the testing activities. We found that half the toolsets provided no support for test planning and management – surely an essential requirement for adoption on a large project.

Only one tool generated tests from the application requirements. Two-thirds of the tools gave little support for selecting or building test cases from any source (code, design models, or documentation).

Almost the same number of tools gave no feedback about the effectiveness of, or coverage achieved by, a set of test cases. Tools that simulate missing application components, or that generate test harnesses, are even scarcer.
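To make those last two terms concrete, here is a minimal sketch in C of what has to be written by hand when no tool supports this area: a stub that simulates a missing component, and a tiny harness that runs one test case against the code under test. The function names (get_exchange_rate, convert_to_sterling) and the values are purely illustrative and are not drawn from any particular product.

    #include <stdio.h>

    /* Stub simulating a missing component: the real currency service does
     * not exist yet, so the stub returns a fixed, known rate. */
    double get_exchange_rate(const char *currency)
    {
        (void)currency;      /* this simple stub ignores its argument */
        return 0.5;          /* canned value chosen for this test case */
    }

    /* Code under test: converts an amount into sterling using the rate
     * supplied by the (stubbed) currency service. */
    double convert_to_sterling(double amount, const char *currency)
    {
        return amount * get_exchange_rate(currency);
    }

    /* Minimal test harness: runs one test case and reports pass or fail. */
    int main(void)
    {
        double expected = 50.0;
        double actual = convert_to_sterling(100.0, "USD");

        if (actual == expected) {
            printf("PASS: 100.00 USD -> %.2f GBP\n", actual);
            return 0;
        }
        printf("FAIL: expected %.2f, got %.2f\n", expected, actual);
        return 1;
    }

A tool supporting this area would generate and maintain such stubs and harnesses for you, rather than leaving them to be written, and kept in step with the application, by hand.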

By contrast, the fields of capture and replay (useful for regression testing), automated test execution, and load testing allow you plenty of choice of supplier, so long as you are working with a fashionable architecture, operating system and development environment. Static code analysis is well supported, although only by a small number of suppliers in this specialist field.
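As a reminder of what static code analysis offers, the fragment below contains the kind of defects such a tool can report without ever executing the program: a variable read before it is initialised, and a copy into a buffer whose length is never checked. The fragment and its function names are my own illustration, not taken from any vendor's material.

    #include <stdio.h>
    #include <string.h>

    /* Deliberately flawed fragment: both defects below are typical of what
     * a static analyser reports without running the code. */
    void report(const char *name)
    {
        char buffer[8];
        int count;                    /* defect: read before initialisation */

        strcpy(buffer, name);         /* defect: no check that name fits */
        printf("%s appears %d times\n", buffer, count);
    }

    int main(void)
    {
        report("builds");             /* happens to fit, but warnings remain */
        return 0;
    }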

On the subject of operating systems, we found that at the moment there is most support for Unix-based systems, although Windows NT and 95 are catching up fast, while support for OS/2 is falling.

The technology of automated software testing has come a long way in a short time, but we still have much further to go in learning how to use it effectively.

Graham Titterington, Editor: Ovum Evaluates Software Testing Tools