Some companies know how to produce reliable software. What do they do, and how do they succeed? Part of the answer is plain hard work. Part of it is intelligence. Let's concentrate on the intelligent ways.

The first thing to do is look into your errors. What are the customers complaining about? What is going wrong? Get a list of all failures reported during the last month. If there are too many, take the last week, or whatever period gives a manageable number.

Sort them by cause, by cost, by seriousness, and/or by the subsystem causing them. Putting the failure data into a spreadsheet, a database, a failure reporting system, or a statistics package helps you draw nice histograms.
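Even a few lines of scripting give you that overview. Here is a minimal sketch of such a tally; the failure records and field names are invented for illustration, not taken from any real project:

```python
from collections import Counter

# Invented failure reports: (subsystem, cause, cost in hours to handle).
failures = [
    ("billing", "off-by-one", 4),
    ("billing", "null pointer", 12),
    ("ui", "off-by-one", 2),
    ("billing", "null pointer", 9),
    ("network", "timeout", 6),
]

# Tally how often each cause occurs.
by_cause = Counter(cause for _, cause, _ in failures)

# Sum the cost per subsystem to find where the money goes.
cost_by_subsystem = Counter()
for subsystem, _, cost in failures:
    cost_by_subsystem[subsystem] += cost

print(by_cause.most_common())            # the most frequent causes first
print(cost_by_subsystem.most_common(1))  # the most expensive subsystem
```

The `most_common()` output is already the Pareto ordering you would plot as a histogram: the tallest bars come first.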

Now pick the worst ones: the most idiotic, the most annoying, the most expensive, or even a few randomly selected problems. Or the subsystem causing the most trouble. Then call your developers to a meeting. Call it 'this week's failure meeting'.

Present the chosen failures, their cost, and their causes, and then ask people to discuss them. They should try to find out what could be done to prevent this kind of problem, or to find it earlier. This way, people learn that problems are important and must be prevented. People also have creative ideas, and they support their own suggestions.

Then introduce whatever solution came up. Make sure it is really introduced, and measure how the corresponding defects disappear.

This method may be implemented using quality circles, a process improvement group, or in whatever way suits you, and it is your main tool for finding the really important improvements.

What about other cheap ways?

Have you heard of 'the buddy system'? It turns your individual workers into groups of two. Every software worker chooses another software worker as his or her 'buddy'. For every work product, require that the product is reviewed, checked and tested by the buddy. Every programmer has a buddy and is someone else's buddy in turn.

It is a very informal and non-threatening way of reviewing documents. It is a good choice if there is no tradition of technical reviews or inspections. A very rigorous form of this way of working is advocated by the proponents of the recently documented technique of 'Extreme Programming' (www.extremeprogramming.org).

With follow-up in the form of seminars, defect counting and so on, it will sooner or later get people to move into more formal reviews of the most critical documents and subsystems. People also learn from each other. They soon adopt the good ideas they see others apply. And documents become more standardized.

As others must understand what is written, authors will make their documents more understandable. Authors also put more care into their work, because they do not want the buddy to find obvious 'idiotic' errors. How does this pay back? It pays by reducing debugging time.

Debugging often destroys system structure as everything is 'fixed'. So it pays by delivering a cleaner system. It also pays off because people know more about what others do in the project: there will be fewer interface errors. And there is one more benefit: if someone is sick or leaves the project, there is still the buddy to take over the job. You always have a backup person.

Sounds logical and easy? So why don't people use this idea? Partly because programmers think of their modules as 'my' module, not 'our' module. Partly because nobody has time to help others check their work.

Everyone has enough to do already. And partly it is because debugging time is never accounted for. But it works. Data from real projects shows that it costs 20% extra work time, but you produce things 40% faster. Quality improves in addition, and this pays off in system testing or after release.

As a project leader, you have to require it, and support it during the first two to three weeks, because during that period it costs MORE. After that period it costs LESS. As a programmer you can introduce it even without your managers knowing: find someone and start sharing a workstation.

You will become a much more productive team. My own experience: 7 hours of extra work, in which a colleague reviewed my code, saved the equivalent of 21 hours of debugging. How did I measure it?
I first debugged my code and measured the time: 7 hours of test preparation and running, 21 hours of debugging and re-testing. Then I gave the original version to my colleague, and he found the same defects, and more, in 7 hours. Fixing them took 30 minutes...
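The payback is simple arithmetic. The sketch below just recomputes the figures from my experiment; the variable names are my own labels, nothing more:

```python
# Figures from the experiment described above.
test_hours = 7        # test preparation and running (needed either way)
debug_hours = 21      # debugging and re-testing, working alone
review_hours = 7      # colleague reviewing the original code
fix_hours = 0.5       # fixing the defects the review found

alone = test_hours + debug_hours                      # 28 hours
with_review = test_hours + review_hours + fix_hours   # 14.5 hours
print(alone - with_review)                            # 13.5 hours saved
```

In other words, the 7.5 hours of reviewing and fixing replaced 21 hours of debugging, roughly halving the total effort.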

Some cheap ways to test better?

There are a few ground rules for better testing. Get better test examples, ones that cover more of your code and specification. You may spend a week learning more formal methods: half the time in a seminar, half experimenting. But the ground rules are the following:

  • Always test the boundaries: Maximum + or - 1, minimum + or - 1, zero, forbidden or abnormal inputs. Look at the first and last elements in tables, lists and files.
  • If your specifications or design contain any kind of diagram (data structure, data flow, state transition, control flow, Petri net, you name it), then test every box and every connection. Your program must have executed every box and every connection between boxes. This translates to every state and state transition, every statement in the code, every branch in the code, the variations in data relationships, and access to every record type in the database. Every box, every connection. If you have the resources, you may continue by combining connections, or by combining different concepts.
  • If something can be repeated, test the zero, the one and the more than one case. If a maximum is given, test at and beyond the maximum.
  • If you test with a correct input, always try a wrong input too, as well as missing input and the default.
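The first and third rules can be sketched in a few lines. The function and its limit below are invented for illustration; the point is which input values a tester exercises:

```python
MAX_ITEMS = 100  # hypothetical specified maximum

def add_items(cart, count):
    """Add 'count' items to the cart; reject out-of-range counts."""
    if count < 1 or count > MAX_ITEMS:
        raise ValueError("count out of range")
    return cart + count

# Boundary and abnormal inputs: zero, minimum - 1, maximum + 1
# must all be rejected.
for bad in (0, -1, MAX_ITEMS + 1):
    try:
        add_items(0, bad)
        print("MISSED boundary:", bad)   # a defect if this ever prints
    except ValueError:
        print("rejected:", bad)

# The boundaries themselves must be accepted.
print(add_items(0, 1))           # minimum
print(add_items(0, MAX_ITEMS))   # maximum
```

Note that the interesting cases cluster at the edges: 0, 1, 100 and 101 find far more defects per test than a handful of values from the middle of the range.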

These are the ground rules. Apply them at any level of testing where you find them applicable, and your number of errors will decrease. Measure the effect. How many customer-reported defects per week? How many failures in acceptance testing? What do they cost?

What to do later on?

The effect of the cheap methods is limited. Sure, you increase both quality and productivity, especially if you have never done anything like this before, but it is not enough. Really high reliability requires more formal methods. Formal reviews and inspections are one choice.

It amounts to using people's heads to find defects, instead of debugging after the fact. Formal test methods also exist, at every level of testing. Tools exist, too. Here is a selection of the cheap ones:
Spell-checker: Never heard of it? It is built into your word processor. It finds most of the spelling errors in your document (This very linne is nto splle-ckekked).

Static analyzer: Reads your code and reports obvious errors: wrong data types, never-used variables, uninitialized data, interface problems. If you use C, run 'lint'. Otherwise, switch on all the checking facilities in your compiler. This takes extremely little time and finds a lot.
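To show the kind of thing a static analyzer looks for, here is a toy checker of my own, far simpler than lint, that uses Python's `ast` module to report variables assigned but never read:

```python
import ast

def unused_variables(source):
    """Return names assigned in the source but never read (toy check)."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)    # name being written
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)        # name being read
    return sorted(assigned - used)

code = """
total = 0
unused = 42        # assigned but never read: an analyzer flags this
print(total)
"""
print(unused_variables(code))   # ['unused']
```

A real analyzer does far more (type checks, uninitialized data, interface mismatches), but the principle is the same: it reads the code without running it and reports suspicious patterns.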

The more expensive, but interesting, tools in the long run are test automators, test bed generators, and test coverage analyzers. Test automators basically capture your manual tests and replay them. Modern ones let you maintain your test library, and they can even be programmed.

Very good tools exist for automating online tests of terminal dialogues, as well as for automating tests of real-time systems. A test bed generator generates the helper programs you need to run unit and integration tests.

Test coverage analysers tell you which parts of the code you have executed and which you have not, how often, how much time was spent where, and so on. They are good for finding hidden bugs and for optimizing slow systems. But all this technology is the second step in your investment.
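The core idea of a coverage analyzer fits in a few lines. This toy version, my own illustration and nothing like a production tool, uses Python's trace hook to record which lines of a function actually ran:

```python
import sys

def trace_lines(func, *args):
    """Run func(*args); return its result and the executed line offsets."""
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            # Record the line offset relative to the 'def' line.
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, executed

def absolute(x):      # offset 0
    if x < 0:         # offset 1
        return -x     # offset 2: only runs for negative input
    return x          # offset 3

value, lines = trace_lines(absolute, 5)
print(value, lines)   # offset 2 is missing: 'return -x' was never tested
```

Running `absolute` only with positive inputs leaves the negative branch uncovered, exactly the kind of hidden, never-executed code a coverage analyzer points you to.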

This is my advice. What I hope is that you start measuring errors, faults, bugs, and failures, their causes and their cost, and that you apply some of these ideas.

If you need examples: I can give you references to companies in many countries who have been successful in this area. After doing it, come back and give a presentation at a conference! Let's improve Software Quality!

Hans Schaefer, Schaefer@c2i.net

Hans Schaefer was the Keynote Speaker at our November 2001 conference. Hans presents courses and lectures on an international basis. He has provided here some hints on how to improve your testing.