Two customers, 150 miles apart, both buy the base product to build customised versions of the application for live implementation in July 1997. The projects will provide significant enhancements to the services these institutions offer their customers as well as reducing administration costs in the branches.
A new team is created to test both applications at the same time on the customer sites. The resources of the test team are limited to 4 people with the possible addition of extra resources seconded from the design team or from the customer user community during the test execution process.
To add to the fun, 3 more customers are considering buying the product for implementation later this year.
- Panic! Most projects involve a period of intense panic and depression, so get it over with early on: panic causes far less disruption at the start of a project than at the end.
- Develop a single but flexible strategy to cover all testing from unit to user, and use it on both projects.
- Automate everything that can be automated. Time and resources are strictly rationed so let a robot do the donkey work. This should also facilitate constant regression testing from the earliest stage possible.
- Adopt an incremental approach to test design. As development is being done with an OO approach and built into functions, let the testing follow the same approach. Test micro, combine together and test macro. Maximise test reusability and play the developers at their own game.
- Involve the designers and developers by making them review test plans, scripts and results. They outnumber the test team by 3:1 so ‘borrow’ their expertise to redress the balance.
- Ensure that test management is effective by using a fault database and progress system that is dynamically linked to the execution tool. Save time and improve communications between the development and test teams.
- Deploy the test team, buy the mobile phones and laptops and go forth to make the plan work.
The story so far
The panic has been completed successfully and no further outbreak appears likely - yet. The strategy has been completed and both customers have bought in enthusiastically. The co-operation and commitment of all participants has been obtained (and documented in blood!).
Automation will be used and the test tools have been installed to our requirements. Testing at all levels, from unit to user, will be automated and training for all participants has started.
The test team resources have been split so that one member supervises the automation of unit testing, one member controls the integration and system testing for each project, and one person manages the whole process and helps out wherever necessary.
Test requirements and scripts are being built at object level so that the construction of integration and system test scripts becomes a simple exercise of combining pre-built object scripts. This is saving time while ensuring that the full functionality of the systems is fully exercised.
An automated test management process has been introduced. Faults will be raised as they are identified, with full diagnostics being automatically captured.
This should prevent the traditional contention between development and test teams as to exactly what the test analyst did, and in which order, to produce the unexpected result that the developer refused to believe actually happened.
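The idea can be sketched as follows, assuming nothing about the actual tools installed: each step of a test is recorded as it executes, so that when a fault is raised the record carries the exact ordered sequence that produced it. All step names and the `FaultLog` class are hypothetical illustrations, not the real fault database.

```python
import traceback

class FaultLog:
    """Hypothetical fault recorder: attaches the executed step history
    and the exception text to every fault it raises."""
    def __init__(self):
        self.faults = []

    def run(self, test_name, steps):
        executed = []
        for name, action in steps:
            executed.append(name)
            try:
                action()
            except Exception:
                # Capture the diagnostic automatically: the ordered steps
                # plus the failing exception, so development and test teams
                # argue from the same evidence.
                self.faults.append({
                    "test": test_name,
                    "steps_in_order": list(executed),
                    "failed_step": name,
                    "diagnostic": traceback.format_exc().strip().splitlines()[-1],
                })
                return False
        return True


# Illustrative test steps against a trivial shared state.
state = {"balance": 0}

def open_account():
    state["balance"] = 0

def deposit_10():
    state["balance"] += 10

def withdraw_50():
    if state["balance"] < 50:
        raise ValueError("insufficient funds")
    state["balance"] -= 50

log = FaultLog()
ok = log.run("withdraw-more-than-balance", [
    ("open account", open_account),
    ("deposit 10", deposit_10),
    ("withdraw 50", withdraw_50),
])
```

With the step history captured at the moment of failure, "what the analyst did in which order" is a matter of record rather than recollection.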
So far, so good. The plan is holding up, we are on schedule and feel confident. There are bound to be problems along the way but at least we know where we are going and how we plan to get there.
Mark Sutherst & Lesley Smith, UK Banking Systems - Unisys Ltd.