However, even without tools, it is possible to improve testing by making simple, cheap changes based on good practice and common sense. The following article illustrates how this happened during testing on a particular project phase in which I was involved.

A while ago, I was invited to join the Hybrid System project to manage the testing of the next release. The Hybrid System was a client-server, transaction processing system based on a Tandem mainframe and special purpose-built equipment (APTs) installed at post office counters.

The APTs were used to collect and verify information about customer transactions, such as bills paid. These transactions were uploaded to the mainframe, which sorted them all and sent information about them to each client. The information was also sent to the central finance systems for further processing.

The Hybrid System was due to be phased out over the next couple of years and replaced by the ICL Pathway system. The new system would replace the office equipment and the mainframe system.

The previous Hybrid release had started the process by allowing transactions collected by Pathway equipment to be processed by the Tandem. The new release would put in place all the code needed for the Hybrid System to work with Pathway during a gradual transition of all functions, until the Hybrid System could be switched off. There were four separate phases to full implementation, all to be tested in the one release.

The project team structure was fairly typical. The fifteen or so permanent and contract staff were split into six functional teams (analysts, mainframe developers, office developers, testers, operations and systems support, and admin support), each headed by a manager, responsible to the IT project manager.

I first met all the project team at a team building exercise, during the course of which I noticed that the testers did not seem to have a close relationship with the rest of the team members, and that the testers' inputs were often disregarded during the exercises.

The next day, I sat down with my new team. I discovered that the testing of the last phase had not gone too well. They had not really understood what the changes to the system had been about.

No one (especially the development team) seemed to have had time to help them and explain things, and they had never been completely sure which programs were in the test environment.

They also said that they did not know much about the Tandem and did not really understand how the systems hung together. I sensed a distinct lack of ownership of any of the problems.

None of the testers had come from a technical background and they were all clearly upset about the implications of an exercise to outsource the Tandem that had just been announced.

When I talked with the other managers, trying to find out what the working relationships were and how they might need to be improved, I got plenty of feedback!

The mainframe development manager felt that the testers did not understand his system! They took up far too much of his team's time because they needed so much help and support.

They were not testing the right things; when they did find faults, these often were not really faults; and, in any case, the testers did not give the developers enough information about what they had done. In all, he felt testing should come under development control.

On the plus side, he would be delighted if testers got involved at the high-level design stage.

The office development manager was kinder, but thought testing never got beyond the obvious. The analysts felt the testers tried hard, but maybe did not always get the support they needed. The operations manager said that the testers were 'OK', but did not stand up for themselves. She added that they did not seem to know what really happened in the live system.

I had already discovered that the project manager was dissatisfied with the testing of the previous phase and thought it could be done better because too many faults had been missed. The fault reporting system was paper-based and not held centrally, so there was no way of analysing past faults.

On the plus side, everyone on the project was used to reviews being held on almost every document that was produced, and the project manager always allowed time for them. There was a comprehensive set of project documentation that was kept up to date and under version control by the admin staff.

I had not expected so many people issues in a well-established project. Things needed sorting out: the next phase of this project was business critical and would be high-profile, with real financial penalties for late delivery, so we could not afford to get it wrong!

I could do nothing much about the morale issues surrounding outsourcing; nor could I do much about the phase just ending.

It seemed to me that the only way the testers were going to get respect was to give them the knowledge to do their jobs better. I immediately got us all on a Tandem Basics course.

I then approached the mainframe development manager and asked him to give an overview of his part of the system and to tell us everything he thought we ought to know. He gave a really good overview at the right level, including much of the live running information. The office development manager was equally helpful.

During this period, I made it clear to my group that it was each person's responsibility to get as much as possible from the information being given and that I expected them to ask questions and not just sit there.

Neither would I accept criticism that too much information was being provided: testers needed to understand as much as possible about the system to be able to test it properly, and they would not get another chance.

Another issue to be addressed was the lack of a mechanised fault reporting system. I was not going to get budget to buy one. Some improvement work had been started on the old paper-based system and I was keen to get the team involved.

One benefit from the outsourcing review was that budget was available for staff to train for new work. One of my team wanted to get into PC development, so I was able to arrange for him to go on an Access course and then to build us a fault reporting system.

It was a true win-win. He got a new skill and could demonstrate he had used it. We got a fault reporting system. The team learned that defining requirements is not easy!

I had also invited the development managers to get involved, as I needed their support to supply fix information and agree a way of defining program versions. It also allowed me to take back control of what got put into the test system, and when.

By this time, the requirements had been defined and we were ready to start specifying the tests. All the team members had been on testing training courses, so I had imagined that, once they had a better understanding of the systems, my problems would be over. They were not!

I asked the testers for examples of test specifications they had produced previously. If the design and layouts the testers had been using were adequate, I did not want to change them.

Unfortunately, the conditions were nowhere near specific enough and there were no expected outcomes. It was almost impossible to judge what had been covered. The test scripts barely justified the description: they were imprecise, often referred out to 'how to' documents and rarely gave expected results - most of the time they just said 'check results'.

So we went back to basics, starting with defining test conditions. The first and fundamental change was that every condition would have a clearly defined expected outcome.

We looked at techniques we might use to increase our confidence that we had considered all the functionality to be tested. (Processes throughout the system were being changed, as well as new ones being introduced.)

This is where the table part of Cause-Effect Graphing came into its own. We had a limited number of types of variables (e.g. transaction, media, clients, client 'owners'), and each had a limited number of possibilities; not all applied to every business process. We were able to construct tables showing the possible combinations.
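To give a flavour of those tables, here is a minimal sketch in Python with invented variable names and values; the project used its own sets and, in practice, built the tables by hand rather than in code.

```python
from itertools import product

# Hypothetical variable types and values, invented for illustration;
# the project's own names and values were different.
transaction_types = ["bill payment", "licence", "benefit encashment"]
media = ["cash", "cheque", "card"]
clients = ["Client A", "Client B"]

# Enumerate every combination as a candidate test condition; rows that
# did not apply to a given business process would be struck out, and
# each remaining row given a clearly defined expected outcome.
table = []
for number, (txn, medium, client) in enumerate(
        product(transaction_types, media, clients), start=1):
    table.append({
        "condition": number,
        "transaction": txn,
        "media": medium,
        "client": client,
        "expected_outcome": "to be defined during analysis",
    })

print(len(table), "candidate conditions")
print(table[0])
```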

We also used narrative conditions, usually where we were looking at the system as a whole. This allowed a consideration of the flow of data through the system, from APT to 'client'.

Lastly, we looked at the navigation through the screens on the APT. Had we remembered them at the time, we could have used state transition techniques, which would have made a better job of it.
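A state transition model of the APT navigation might have looked something like the sketch below; the screen names and events are invented for illustration, and the real flow was more involved.

```python
# A minimal state transition sketch with invented APT screen names and events.
transitions = {
    ("idle", "insert card"): "customer menu",
    ("customer menu", "select bill payment"): "enter amount",
    ("enter amount", "confirm"): "print receipt",
    ("enter amount", "cancel"): "customer menu",
    ("print receipt", "done"): "idle",
}

def next_screen(current, event):
    """Return the screen reached from `current` on `event`, or None if invalid."""
    return transitions.get((current, event))

# Every valid (screen, event) pair is a candidate test; invalid pairs
# become negative tests (the APT should reject or ignore them).
assert next_screen("idle", "insert card") == "customer menu"
assert next_screen("idle", "confirm") is None
```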

We also included conditions to test for the various procedures that would be needed to move from one implementation stage to another. This was a bit difficult as the procedures had not yet been written!

Once identified, all the conditions were assembled into a document which developers, analysts and the Consignia customer were invited to review. Some of the constructs were new and needed to be explained to various participants.

The feedback from the review meeting was interesting. There was general surprise at how many tests had been identified. The customer and analyst realised they needed to do something about procedures. The common response was that it was really useful to see clear expected results (especially as our test analysis had identified some unexpected results!).

A set of scripting rules was agreed: scripts will be self-contained; they must be repeatable; each step will be numbered and will refer back to the condition(s) it is testing; inputs will vary across clients (i.e. not all transactions will be for £5 for the same client).
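To illustrate the agreed structure, here is a sketch with invented script content; the real scripts were documents rather than data structures, but the same rules apply.

```python
# A sketch of the agreed script structure, with invented content.
# Each step is numbered, refers back to the condition(s) it covers,
# and states an explicit expected result rather than 'check results'.
script = {
    "script_id": "SYS-001",
    "steps": [
        {"step": 1, "conditions": [12],
         "input": "Pay a £5.00 bill for Client A in cash",
         "expected": "Receipt printed; transaction recorded against Client A"},
        {"step": 2, "conditions": [13, 14],
         "input": "Pay a £17.50 bill for Client B by cheque",
         "expected": "Receipt printed; cheque details captured"},
    ],
}

# A self-contained, repeatable script lists everything a tester needs,
# so every step must carry its own expected result and condition reference.
for step in script["steps"]:
    assert step["expected"], f"Step {step['step']} has no expected result"
    assert step["conditions"], f"Step {step['step']} refers to no condition"
```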

All this was going along really well. A good working relationship was developing between the testers and the rest of the project team and, more importantly, my testers seemed to start to believe in themselves and take on issues rather than expect me to sort out every problem.

I then discovered that 'testing' had become a high-risk project issue! This was because the project manager had no confidence that this 'new' way of testing was going to work or be completed in time.

Usually, test running would have started by now and she would have expected to see some plans and progress graphs. With all the other improvement activity, I had neglected one rather important area, that of keeping the project manager informed!

It was easy to put together a plan for when each script should be ready and when we would start and finish running it.

Once test running started, I needed to report progress. The previous phase had produced lots of little scripts, whereas we had relatively few, much longer ones. Our first scripts were designed to exercise as many parts of the system as possible.

These early runs exposed quite a few faults (especially environmental ones) which had to be fixed and retested before we could move on. Reporting the number of scripts completed made progress look bad and did not match my 'gut feeling'.

I realised that what we should measure was test conditions successfully completed. This produced a traditional S curve and seemed to satisfy the project manager. I also graphed the incidents raised, fixed and retested each week.
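The measure itself is simple to compute. Here is a sketch with made-up weekly figures; the project's real numbers were, of course, different.

```python
# Made-up weekly counts of test conditions successfully completed;
# accumulating them gives the S curve reported to the project manager.
passed_per_week = [3, 8, 20, 45, 60, 40, 15, 5]

cumulative, total = [], 0
for passed in passed_per_week:
    total += passed
    cumulative.append(total)
print("Cumulative conditions completed:", cumulative)

# Invented weekly incident figures tracked alongside progress:
# counts of incidents raised, fixed and retested each week.
incidents = [
    {"week": 1, "raised": 5, "fixed": 1, "retested": 0},
    {"week": 2, "raised": 9, "fixed": 6, "retested": 4},
]
for row in incidents:
    print(row)
```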

At the end of the testing, I felt that we had injected a much more professional approach to testing into the project. The testers seemed to have far more confidence in what they were doing and I felt they had earned, and were receiving, the respect of the rest of the team.

Finally, we must have been doing something right. The previous phase had found 41 faults during system test. Another 39 had been found during UAT and live running.

In our phase, we found 73 faults during system test (including one procedural one that, had it gone live, would have brought the system to its knees). No faults were found during the subsequent UAT, and only one fault during the first three months of live running.

Barbara Gorton, Consignia