Andrea Kearney and Mark Rands of PA Consulting Group provide a guide to the essentials of software testing, and introduce approaches that enable early realisation of benefits and ensure systems work for users.

Why test software? Testing is often under-valued by organisations due to a lack of understanding of its purpose within the development life cycle and the benefits it can bring to businesses.

Organisations shouldn't measure testing purely on cost and time but should look at the value it can bring. At PA we use a framework (Figure 1 below) that allows us to focus testing in areas of greatest benefit to enable shorter delivery timescales and earlier benefit realisation. We also believe that usability is key to the successful adoption of new systems by user communities and have developed an approach to ensuring user needs are taken into account.

Framework used by PA

The ISEB reasoning can be summarised as: 'Software is normally written by humans; humans make mistakes, and when they make a mistake (e.g. when coding), the software is said to contain a fault (or defect, or bug). If the fault is "executed", a failure may occur (a failure being a deviation from expectation). Testing is the process of detecting faults before software is made operational.'

It is the cost of these failures that answers the question 'Why test?' and this cost can be measured in many ways, for example:

  • cost to fix;
  • loss of customer confidence;
  • loss of market share;
  • more drastically, loss of life.

So, in short, testing is important to ensure that we avoid costly failures.

Ensuring maximum value is obtained from the testing process

A business definition of testing should consider the outcomes of testing, as these illustrate the value it provides to the business. For example:

  • reduced risk of failures (or incidents) once systems are transferred to live operation;
  • demonstrable proof that business requirements have been met;
  • assurance that the system will function appropriately with existing legacy systems where required and will integrate with other systems as necessary;
  • assurance that the users for whom the solution was designed are able to operate productively.

Acknowledging these benefits requires accepting the reality that testing costs money. Too much testing could be risky, as it may delay product launch and allow a competitor to steal significant market share. Unfocused, inefficient approaches to test management often result in poor return on investment in testing.

As a rule of thumb, testing is sufficient when the cost of testing is balanced against the potential costs of failure and over-run. The risk of failure and the business benefit should be used to determine how much testing is performed.
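As a rough illustration of this balance, consider the following sketch in Python; the features, probabilities and costs are entirely invented, and a real assessment would draw on incident history and business impact analysis:

```python
# Toy risk-based sums with invented figures: thorough testing of a feature is
# worthwhile while its cost stays below the expected loss it would prevent
# (probability of failure x business impact of that failure).

features = [
    # (name, probability of failure, cost of failure, cost of thorough testing)
    ("payment processing", 0.10, 500_000, 20_000),
    ("report formatting",  0.30,   2_000,  8_000),
]

for name, p_fail, impact, test_cost in features:
    expected_loss = p_fail * impact
    verdict = ("thorough testing justified" if expected_loss > test_cost
               else "lighter testing may suffice")
    print(f"{name}: expected loss {expected_loss:,.0f} "
          f"vs test cost {test_cost:,.0f} -> {verdict}")
```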

At PA Consulting we have developed a framework approach to testing that enables our clients to make optimum use of test activities and to minimise the duration of test execution, ensuring maximum value-add from testing.

The approach encompasses a number of standard engineering techniques to ensure effort is focused on the critical areas of any system or solution, and a major element covers tactical planning to avoid unnecessary inactivity during test execution.

How is testing performed?

Software applications can be large, regularly exceeding 100,000 lines of code. How are such applications tested? The industry standard approach is to 'divide and conquer' into phases, using a V-model (Figure 2 below) mapping between test and development phases to indicate the level of testing to be performed at each phase.

V-model

Individual units of code (also referred to as modules or components) are first tested in the unit test phase. It is normal for the development team to perform this test phase (but not necessarily, and in fact preferably not, the developer who wrote the unit).
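As a minimal sketch of what such a test looks like, using Python's standard unittest framework (the apply_discount function and its expected behaviour are hypothetical):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # Exercise the unit in isolation with a known input and output.
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percentage_rejected(self):
        # A fault here would only surface later as a failure if untested.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```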

The units are then integrated and tested in combination during the integration test phase. Again, it is the software developers who normally undertake this phase. Once all the units have been integrated to form a complete application, and the integration test phase is completed, the application moves into the system test phase.
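A sketch of the difference in scope, again in Python: two hypothetical units, a UserStore and an AdminService, each testable in isolation, are here exercised in combination to check that they cooperate correctly:

```python
import unittest

class UserStore:
    """Hypothetical unit: holds user records."""
    def __init__(self):
        self._users = {}

    def add(self, user_id: str, name: str):
        self._users[user_id] = name

    def remove(self, user_id: str):
        del self._users[user_id]

    def exists(self, user_id: str) -> bool:
        return user_id in self._users

class AdminService:
    """Hypothetical unit that drives UserStore."""
    def __init__(self, store: UserStore):
        self._store = store

    def delete_user(self, user_id: str) -> bool:
        if not self._store.exists(user_id):
            return False
        self._store.remove(user_id)
        return True

class AdminServiceIntegrationTest(unittest.TestCase):
    def test_delete_existing_user(self):
        # Integration test: both units working together, not in isolation.
        store = UserStore()
        store.add("u1", "Ada")
        service = AdminService(store)
        self.assertTrue(service.delete_user("u1"))
        self.assertFalse(store.exists("u1"))

if __name__ == "__main__":
    unittest.main()
```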

An independent system test team checks that the application is implemented as designed, which includes the verification of both functional and non-functional requirements. A functional requirement states 'what the system is to do', for example, 'allow an administrator to delete an existing user'; a non-functional requirement states 'how well a system is to do its work', for example, 'the system must respond within two seconds to 90 per cent of transactions'.
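The non-functional example above lends itself to an automated check. A minimal sketch, in which run_transaction is a hypothetical stand-in for a real system call and the sample size is arbitrary:

```python
import time

def run_transaction():
    """Hypothetical stand-in for a real system transaction."""
    time.sleep(0.01)  # simulated work

def test_ninety_per_cent_respond_within_two_seconds(samples: int = 100):
    # Non-functional check: 90 per cent of transactions complete within 2s.
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        run_transaction()
        timings.append(time.perf_counter() - start)
    timings.sort()
    p90 = timings[int(0.9 * len(timings)) - 1]  # 90th percentile
    assert p90 <= 2.0, f"90th percentile was {p90:.2f}s, exceeding the 2s requirement"

if __name__ == "__main__":
    test_ninety_per_cent_respond_within_two_seconds()
    print("non-functional requirement met")
```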

Following system testing, the next phase is system integration, if the system is to interface with other systems. This phase ensures that any interfaces between systems function correctly and demonstrates that the dataflow between them is as required.

All of this activity happens within an operationally equivalent test environment. Environments cover both the location and hardware on which software tests are performed. This phase is also used to verify that business processes can be followed across applications to perform a given business task or operation.
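A minimal sketch of an interface check of this kind, assuming a hypothetical JSON interface between two systems; the test verifies that a record survives the dataflow across the boundary intact:

```python
import json

# Hypothetical interface: system A exports an order as JSON;
# system B imports it on the other side of the boundary.

def system_a_export(order: dict) -> str:
    return json.dumps({"order_id": order["id"], "amount": order["amount"]})

def system_b_import(payload: str) -> dict:
    record = json.loads(payload)
    return {"id": record["order_id"], "amount": record["amount"]}

def test_order_flows_between_systems():
    # System integration check: data sent across the interface
    # arrives with the same meaning it left with.
    original = {"id": "ORD-42", "amount": 19.99}
    received = system_b_import(system_a_export(original))
    assert received == original, f"dataflow mismatch: {received}"

if __name__ == "__main__":
    test_order_flows_between_systems()
    print("interface dataflow verified")
```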

The final stage of testing prior to operational validation is user acceptance testing (UAT), where the users test that the system meets their requirements. This phase usually varies in detail and formality depending on the nature of the application being provided and the level of user involvement in earlier testing phases.

Within all of these phases it is possible to identify the elements of the system that are critical to success and those that matter less. This is the basis on which tactical prioritisation and dynamic planning (pre-arranged contingencies should things go wrong) help to ensure that focus is applied in the right areas and that the most is made of the time and resources available during test execution.

Despite all this testing, you might still end up with a system that users find frustrating, which leads us to...

Combining traditional UAT with usability testing

This provides a holistic and user-centric approach to ensure that systems are productive and usable. Whilst traditional UAT methods are highly effective at assessing whether a functional requirement has been fulfilled - does it do what we said it would do and what you asked for? - they do not establish whether the system is effective in allowing users to complete their tasks end-to-end across a number of functional components and interfaces.

By applying usability testing methods (born out of website testing and human-computer interaction) in addition to traditional approaches, you can also test how effectively the system fulfils its users' needs.

Usability testing puts users in front of the system and presents them with tasks to complete without any assistance. These are facilitated sessions in which the facilitator does not train the users or guide them through the system; instead, the facilitator encourages them to describe what they are doing as they strive to complete their tasks.

Typical exchanges are:

User: I'm not sure what to do next.
Facilitator: Why?
User: I can't find the right button to do X.
Facilitator: What are you looking for and which one do you think might do this?

The whole experience can be recorded digitally - capturing audio, the screen interaction and the users' reactions - and the insight from this evidence is used to identify the obstacles the system might present for its users, as well as to inform potential solutions.

This valuable insight allows you to optimise the experience for the users. Including usability testing in the project life cycle helps to improve user adoption of the system, ensures users can complete tasks more effectively and assists user training by creating a more usable and intuitive system.