Standards for software testing
What use are standards to you? First, we must qualify the question as there is a consumer viewpoint and a producer viewpoint. As a consumer, standards affect our everyday lives and are generally considered a good thing.
For instance, a standard that governs the quality of pushchairs generally meets with public approval as it is presumably safeguarding our children. As such, the standard acts as a form of guarantee to the consumer that the product is of a certain quality.
The majority of consumers have no idea what a pushchair standard might contain, but trust its authors to know what they are writing about.
Initially we might expect the standard to contain requirements that make a pushchair as safe as possible (by using best practice), but after a moment's reflection we will probably modify our view to expect a reasonable balance between safety and cost (good practice). After all, we don't want to make the price prohibitive.
So, to the consumer, standards are generally useful, their authors providing the expertise to a transaction that would otherwise be lacking.
Now, what about the manufacturer of pushchairs? They have a different perspective. They benefit from complying with the standard as they are then presumably making 'good enough' pushchairs and thereby are more likely to avoid the dual pitfalls of bad publicity and legal liability from selling 'unsafe' products.
Following the marketing theme, if the pushchair standard were not mandatory, those manufacturers complying with it would be able to use their compliance to market their products favourably compared with non-compliant competitors.
Consider, finally, the manufacturer new to pushchairs. The existence of a standard detailing good practice in pushchair manufacture means that they do not have to start from scratch, but can build on the experience of the standard’s authors.
Unhappily, there is no single software testing standard in the way that a single pushchair standard has been assumed here. Consumers of software testing services cannot simply look for the 'kite-mark' and testers have no single source of good practice.
So, first of all, are there standards relevant to software testing? Absolutely, but there are some important areas, such as integration testing, where no useful standard exists at all.
Next, are these standards useful to us as software testers? Some of them are useful and some are not. They are of widely-varying quality, so it is difficult to know which ones are worth reading.
What you can say, however, is that informed use of standards should improve your effectiveness as a software tester. The remainder of this article attempts to provide a brief introduction to which standards cover software testing and how - and then give an idea of their usefulness.
Two ways are used here to identify standards that include software testing. First, standards that mandate testing as part of a larger requirement are considered. These should be of most use to those who want to be able to state compliance with a standard, such as ISO 9000.
Next, standards that directly support parts of software testing are covered. These should be of more use to those testers who are actually performing testing - and who do not want to 're-invent the wheel' each time they are given a new area of responsibility. These two approaches to identifying standards can be considered analogous to the black box and white box approaches to test case design.
Software testing is defined in BS 7925-1 as the "process of exercising software to verify that it satisfies specified requirements and to detect errors". As such, software testing is one way of performing both software verification and software validation - static techniques, such as reviews, being another.
Obviously, verification and validation are not performed as stand-alone processes - there has to be something to verify and validate. The verification and validation processes form part of the larger process of software engineering.
Thus, from a process viewpoint, software testing, as part of verification and validation can be viewed as being included within software engineering, which, in turn, is part of systems engineering.
In fact, the systems engineering, the software engineering and the verification and validation processes are all covered by corresponding standards (ISO 15288, ISO 12207 and IEEE 1012 respectively). Each of these standards contains requirements relevant to the software tester.
Both ISO 15288 and ISO 12207 include processes for verification and validation, and although many software developers and testers ignore the systems aspect of their work, it is impossible to deny the relevance of ISO 12207, the software life cycle processes standard.
ISO 12207 is a standard that defines a framework for software throughout its life cycle, and, unlike ISO 9000, has been quickly accepted in the US - it has now been accepted as the 'umbrella', or integrating standard by the IEEE for their complete set of software engineering standards.
The test strategy, a high level document defining the test phases to be performed for a programme (one or more projects), is most likely to be influenced by the above standards.
ISO 9000-3, which provides guidance on the application of ISO 9001, suggests that unit, integration, system and acceptance testing be considered, basing the extent of testing on the complexity of the product and the risks.
IEEE 1012 defines in detail the specific verification and validation processes and activities to be performed, based on the concept of integrity levels, and so will determine which test phases are applied.
Quality provides a different perspective from which to view software testing. From a quality perspective, testing, as part of verification and validation, can be seen as an integral part of software quality assurance.
If software is part of a larger system, then software testing can also be considered as part of overall quality management and assurance. As with the process model, the higher levels are covered well by corresponding standards (ISO 9000, IEEE 730 and IEEE 1012, respectively).
Not many software developers will be ignorant of ISO 9000, but it considers testing at such a high level that non-compliance would basically mean performing no documented testing at all. IEEE 730 is similarly high level and offers little extra value to the tester.
A third view can be considered where, unusually, software testing does have a corresponding lower level standard. This represents the terminology perspective. A common set of terminology should help ensure efficient communication between all those concerned with software testing.
Natural language, as spoken in our daily lives, is at the highest level, while computing terms and software engineering terms lead eventually to software testing terms. Standards are available for each level of this model, for example, starting with the Oxford English Dictionary, leading onto IEEE 610, IEEE 610.12 and finally onto BS 7925-1, the software testing vocabulary.
The shortcoming of BS 7925-1 is that it is somewhat biased towards component testing. It originated as the definitions section of BS 7925-2 and so was initially purely devoted to component testing, but has since been expanded to cover software testing in general. Further work needs to be done.
From these three views, a number of standards relevant to testing have been identified, albeit that many of them only consider software testing from a very high level. Some, such as ISO 9000, offer little of use to the software test practitioner apart from in terms of compliance and marketing.
Of the others, ISO 12207 is expected to have a large impact, and compliance with this standard is expected to become the usual state of affairs for software developers and testers.
IEEE 1012, the software verification and validation standard, is highly-relevant to software testers and tells us which activities to perform dependent on the integrity level of the software under test (ISO 15026 defines the process for determining integrity levels based on risk analysis, which is defined in IEC 60300-3-9 – so IEEE 1012 is a definite help if performing risk-based testing).
Now let us consider software testing from the inside to identify more standards that will help us with the different aspects of software testing.
The test plan defines the test phases to be performed and the testing within those phases for a particular project. Its content will be aligned with the test strategy, but any differences will be highlighted and explained in this document.
The phase test plan provides the detailed requirements for performing testing within a phase e.g. component test plan, integration test plan. ISO 9000-3 suggests a brief list of contents for test plans, while IEEE 829 provides a comprehensive set of requirements for test planning documentation (as well as test specification and test reporting documentation).
More relevant for the unit/component testing phase, BS 7925-2 defines the detailed content of a software component test plan and provides an example set of documentation. BS 7925-2 also defines a generic component test process along with associated activities.
IEEE 1008 covers the test process in similar detail to BS 7925-2, but labels it unit testing. Unhappily, there are no standards that cover other test phases specifically. Both test plans and test specifications should be reviewed; software review techniques are well defined in IEEE 1028.
Incident management, also known as problem reporting or anomaly classification, is an essential adjunct to the testing process. ISO 12207 includes problem resolution as a support process and IEEE 829 briefly covers incident reporting documentation.
More detailed coverage is provided by IEEE 1044, which defines an anomaly classification process and classification scheme. IEEE 1044 is supported by comprehensive guidelines in IEEE 1044.1.
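By way of a sketch only - the field names below are illustrative assumptions for this article, not the actual classification scheme defined in IEEE 1044 - an incident (anomaly) record of the kind such a process manages might capture something like:

```python
# Illustrative anomaly record (field names are assumptions for this
# sketch, not the classification scheme defined in IEEE 1044).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AnomalyReport:
    identifier: str            # unique reference for the anomaly
    summary: str               # short description of the observed problem
    severity: str              # impact classification, e.g. "major"
    status: str = "open"       # lifecycle state of the report
    raised_on: date = field(default_factory=date.today)

# A hypothetical report raised during system testing.
report = AnomalyReport("AR-001", "Crash on empty input file", "major")
print(report.identifier, report.severity, report.status)
```

The value of standardising such a record lies less in the data structure itself than in everyone on a programme classifying anomalies the same way, so that reports can be compared and trends analysed.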
Of the testing phases, currently only the component (or unit) testing phase of the life cycle is supported by standards (BS 7925-2 and IEEE 1008 both cover this phase). This leaves integration, system (both functional and non-functional) and acceptance testing in need of coverage by standards.
A SIGIST working party is currently developing a standard of techniques for non-functional testing (for information see www.testingstandards.co.uk) which should partially fill this gap, but more testing standards are still required.
BS 7925-2 contains definitions of test case design techniques and measures, along with examples of their use. The techniques and measures, however, are not only appropriate for component testing, but can also be used in other test phases.
For instance, boundary value analysis can be performed in all phases. But, because BS 7925-2 is primarily concerned with component testing, the techniques and measures are defined from only that perspective and the associated guidelines, which give examples of their use, also only cover their application to component testing.
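As a minimal illustration of the technique (the function names and the component under test here are invented for this sketch, not drawn from BS 7925-2), boundary value analysis for a component that accepts an integer in a given range selects test inputs at and just outside the boundaries:

```python
# Boundary value analysis sketch (illustrative names, not from BS 7925-2):
# for a valid range [lo, hi], test each boundary and the value just
# outside it, where faults tend to cluster.

def boundary_values(lo, hi):
    """Return the classic BVA test inputs for an integer range [lo, hi]."""
    return [lo - 1, lo, hi, hi + 1]

# Hypothetical component under test: accepts a percentage 0..100.
def accepts_percentage(n):
    return 0 <= n <= 100

# Exercise the component at each boundary value and compare against
# the expected outcome derived from the specification.
for value in boundary_values(0, 100):
    expected = 0 <= value <= 100
    print(value, accepts_percentage(value) == expected)
```

Exactly the same selection of inputs can drive a component test harness, an integration test or a system-level test script, which is why the technique travels well beyond the component phase.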
Hopefully, you will have found that some of the standards mentioned here cover some aspects of your job as a software tester and can be of some use to you.
Even if you feel your testing is already 'best practice' then at least you will be able to use the relevant standard to confirm your compliance. If you are not at that exalted level, then I suggest you try using some of the standards to avoid re-inventing approaches and techniques that are readily available in the public domain.
Finally, for those who do pick up a standard, please note that standards are generally in two parts: first a normative part, which defines what the user must comply with, and then an informative part, which includes guidance on the normative part.
The nature of standards is that the normative part is difficult to read - do not be surprised at this. Before throwing it away, try the informative part, which is generally the most useful.
Stuart Reid, Cranfield University
A more in depth version of this article is available at www.testingstandards.co.uk.