Year 2000 testing approach
The strategy was formalised in mid 1997 and is predicated on the need to minimise the time that production software is impacted by Year 2000 remediation work. Regression testing and automated comparison of results form the basis of the approach.
The requirement for Year 2000 compliance testing is to prove that the infrastructure, system software, application software and application data will operate correctly before, during, and after the change of century.
The process falls into four distinct stages:
1. Analysis
It is essential to get a clear view of the chosen Year 2000 conversion strategy, the existing hardware and software infrastructure, the existing test environment and the opportunities for running a testing pilot.
The applications portfolio is divided into testable work units. These work units may not always equate to a complete business application, for reasons including: dependencies between programs from different applications, programs that have not been converted and therefore do not require testing, and the requirement to test different processing cycles.
2. Present day testing
As a prerequisite, data conversion programs and bridging routines may require development and testing.
The first step in present day testing is to undertake a baseline test on the existing work unit, which is then repeated to establish the production compliance of the work unit after its conversion. The baseline test can also be used to recheck the work units when new versions of the underlying system software are installed or when the work unit is modified for any reason.
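The automated comparison that underpins this regression approach can be pictured with a minimal sketch, shown here in Python purely for brevity (the article prescribes no particular tool, and the function name is an assumption): the output of the baseline run is diffed against the output of the re-run after conversion.

```python
import difflib

def compare_outputs(baseline_lines, candidate_lines):
    """Return the unified diff between a baseline run's output and a
    re-run's output. An empty list means the converted work unit
    reproduced the baseline exactly."""
    return list(difflib.unified_diff(
        baseline_lines, candidate_lines,
        fromfile="baseline", tofile="candidate", lineterm=""))
```

An empty diff after conversion gives confidence that present day behaviour is unchanged; any difference is a candidate incident for analysis.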
When present day testing has been completed for a work unit, the work unit is implemented, as implementation is not dependent on the completion of future date testing or integration testing. Returning the work unit to production at this point minimises any imposed change freeze period and reduces the chance of overlap with other concurrent changes.
Whilst the change has been tested to show that it has not altered the way in which the work unit currently performs, any work unit implemented at this point might still contain millennium-related problems.
Although the work unit will have been modified to be Year 2000 compliant, it has not yet been tested for millennium compliance. It may therefore be necessary to repeat present day testing and re-implement the work unit if corrections are identified during future date testing and final testing.
3. Future date testing
A complete time shift is essential to successfully simulate a future date test environment. Tools will be required to "time-warp" all dates required for future date tests. These tools need to be acquired or developed, and tested before any future date testing can take place. There is also the need to flexibly alter the system date for the platform under test, either through a dedicated test environment or the use of system date simulation tools.
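The kind of "time-warp" shift described above can be illustrated with a small sketch (Python is used only for illustration; the class name and interface are assumptions, and real system date simulators operate at the platform level rather than in application code):

```python
from datetime import date

class TimeWarp:
    """Sketch of a date time-warp: every real date is shifted by a
    fixed offset, so the work unit under test appears to run at a
    chosen future date."""

    def __init__(self, target, real_today=None):
        # Offset between the simulated date and the real system date.
        self.offset = target - (real_today or date.today())

    def today(self):
        """The simulated 'current' date seen by the test run."""
        return date.today() + self.offset

    def age(self, d):
        """Shift any date in the test data by the same offset, so
        intervals between dates are preserved."""
        return d + self.offset
```

Because a single offset is applied everywhere, the relationships between dates in the test data remain consistent with the shifted system date.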
There may be a need to perform a baseline test before future date testing can take place when:
- 'expected results' need to be produced for new applications
- there has been a lengthy time gap since the present day tests
- there has been enhancement to the work unit since present day testing
- the baseline test needs to be aligned around comparable dates required in future date testing, such as particular period end dates.
There will be a need in each work unit to define the critical date periods for future date testing. For example, some applications are only forward-looking, so are at greatest risk of producing non-compliant results in the runs before the Year 2000. Other applications only reference backwards, so are at greatest risk of producing non-compliant results in any post-Year 2000 runs.
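By way of illustration, a typical set of critical dates might look like the sketch below; the particular dates are a commonly cited selection from Year 2000 testing practice, not taken from this article, and each work unit would define its own list.

```python
from datetime import date

# Commonly cited critical dates for future date testing
# (an illustrative selection, not an exhaustive list):
CRITICAL_DATES = [
    date(1999, 9, 9),    # 9/9/99, sometimes used as a sentinel value
    date(1999, 12, 31),  # last day before the change of century
    date(2000, 1, 1),    # first day of the new century
    date(2000, 2, 29),   # 2000 is a leap year (divisible by 400)
    date(2000, 12, 31),  # first year-end run after the change
]

def is_leap(year):
    """Gregorian leap-year rule; 2000 is a leap year, 1900 was not."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

The 29 February 2000 case is worth singling out: applications that implement only the "divisible by 4" and "divisible by 100" rules will wrongly treat 2000 as a non-leap year.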
Alongside regression testing there will be a need to define and carry out specific future-dated test cases to check particular amended logic in the converted work unit; for example, error handling based on date processing.
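One common example of amended logic is two-digit-year windowing, a widely used Year 2000 conversion technique. The sketch below illustrates the idea; the pivot value of 50 is an assumed convention, not something this article specifies, and the boundary values (49/50) are exactly the cases such future-dated test cases need to check.

```python
PIVOT = 50  # hypothetical window pivot; conventions vary by site

def expand_year(yy):
    """Expand a two-digit year using a fixed 100-year window:
    00-49 are read as 2000-2049, 50-99 as 1950-1999."""
    if not 0 <= yy <= 99:
        raise ValueError("two-digit year expected")
    return 2000 + yy if yy < PIVOT else 1900 + yy
```

Test cases built around the pivot (49 and 50) and around the century boundary (99 and 00) exercise precisely the amended branches of the logic.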
The future date tests depend even more heavily on repeatability than the present day regression tests, and are likely candidates for test automation. They have to be repeated after any conversion of the work units or the underlying system software and, in every test round, they have to be run against a number of future dates.
As a result of successfully completing present day tests and future date tests, the work unit can now be considered Year 2000 compliant, as it meets the requirement to process date values before, into and after the Year 2000.
4. Final testing
Once individual work units have been tested and found to be compliant, they can be tested "end to end" within an integration test. If possible this should be done using the compliant infrastructure so that system controlled functions can be included in this testing.
Many platforms are not yet millennium compliant but the infrastructure needs to be made compliant well before 31 December 1999. This includes updating and replacing non-compliant hardware, updating the operating system and other system software to compliant versions, and then testing the infrastructure to prove that it is compliant both in itself and in relation to the applications.
The test infrastructure will change from stage to stage of the Year 2000 test cycle and throughout the test timeframe, so it is essential that the Year 2000 team works closely with the applications support group to co‑ordinate the changes.
Validate full test cycle
Because conversion strategies are likely to be applied many times over, it is imperative to verify the technical approach by processing pilot conversions through the full test cycle as soon as possible. To achieve this, activities such as the set-up of a future date testing environment and obtaining date ageing tools need to be on the critical path.
Cost benefit analysis
The Year 2000 is an absolute deadline and the majority of projects have to work within cost constraints. This means that there may not be either the time or the resource available to carry out full compliance testing on all applications. Cost/benefit or risk assessment techniques should therefore be used to target the available effort on applications where failure would have the most impact on the organisation.
The results can be applied in a number of areas: choosing not to test an application or parts of an application at all, only present day testing an application or parts of an application, or limiting the range of future dates tested. For best effect these techniques should be applied to the application portfolio from the start of the project, rather than as a panic measure at the end, when time or money is actually running out.
Testing guidelines
Following agreement of the test strategy it is advisable to create a detailed set of testing guidelines, which could be incorporated into an overall technical conversion process manual.
The detailed test guidelines will specify the approach to activities common to the testing of all work units. For example, this may include: the detailed testing lifecycle and work breakdown structure, test environment standards and naming conventions, standards for test plans, cases and data, and standards for the recording, analysis and approval of test results.
Any guidelines will need to be clearly marked as mandatory and prescriptive where the use of any other conversion or testing technique would result in a breakdown in the overall conversion program. However there is a need to be able to react flexibly to the introduction of new or improved best practices.
Resource planning
While the test strategy is being agreed and the work unit analysis is taking place, the resource plans need to be developed. These include identifying:
- the hardware to be used - possibly an additional machine or a partition on a current machine, requirements for CPU, disk, tape, printer, on‑line and communications, and agreed testing schedules based on machine availability
- software requirements for the testing environment - operating and system software, test tools and licence implications of future date testing
- personnel requirements - testing team resourcing, user involvement, operations involvement, other parties (internal audit, financial compliance regulations and so on).
Test management provides the control mechanism and metrics to ensure adherence to plan, or to allow adjustments to be made in the light of experience. Test management responsibilities include environment availability, resource availability, integration with the conversion process, dependencies on the business, test automation, test data management, the monitoring of error detection rates, test quality and testing effectiveness, incident reporting, error management, status tracking and progress reporting.
Tools and techniques to manage and execute test cases and to check results can have a major impact on the whole testing process. Implementing a suitable test automation regime also creates an extra deliverable for reuse after the Year 2000 activities are completed.
Most application conversions will be ready for testing in the same period, around mid 1998, in order to be completed by year end 1998. The time slots for using both human and system resources for testing will therefore be very limited. Automation provides the opportunity to prepare and exercise the tests at an earlier time, to ensure they are ready to be executed efficiently in the expected peak period.
CMG has a unique approach to test automation, CMG:CAST, which has been used successfully in a large number of projects, including several Year 2000 projects. The CMG:CAST concept includes facilities to aid in future date generation and the automation of time travel.
It is recommended that the test cases cover the essential application functions, to ensure the applications will be able to pass the millennium change without major problems. Test cases should be built to cover both correct application processing and suitable error conditions. For the required volume of test cases, suitable time must be made available to define the cases, undertake the testing and correct any errors.
When doing time travel for future date testing, it is necessary to age all of the test data used in the test cases and in the common data referenced in the underlying test databases. The test sets can be explicitly developed by analysts or be generated by extracting or copying production databases.
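Ageing the test data can be pictured with a minimal sketch; the record layout, function name and 580-day shift below are illustrative assumptions, the point being that every date field is shifted by the same amount so that intervals within and between records stay intact.

```python
from datetime import date, timedelta

SHIFT = timedelta(days=580)  # example shift landing the run past 2000

def age_record(record):
    """Shift every date field in a test record by the same offset,
    leaving non-date fields untouched."""
    return {key: (value + SHIFT if isinstance(value, date) else value)
            for key, value in record.items()}
```

Applying one common shift to the explicit test cases, the underlying test databases and the simulated system date keeps the whole aged environment mutually consistent.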
Development by hand means more concise tests, which can be more easily repeated and whose results can be more easily interpreted. Production data extraction means less work to develop the cases and better confidence that real-life variations are covered, but is more difficult to manage and more costly in technical resources.
The best approach in most Year 2000 projects will be a combination of both strategies, with choices depending on availability of personnel, application understanding, available time and available technical resource.
Selection of suitable testing and test support tools is critical to meeting the tight deadlines in the programme and should be undertaken as soon as is feasible to allow time for procurement, implementation and training.
David Hobbs, CMG UK Ltd