When decisions are made to reduce or enlarge the scope of a prototype, it is not possible to separate the effect on code cutting from the effect on testing. Testing isn't subject to separate timeboxes, and a prototype iteration delivers only tested code.

Within a timebox, testing is done in parallel with the development of code, and it happens simultaneously at several levels. I call this a "broadband" testing process: programmers unit test their code, as they do in traditional projects, but they hand over their code for public scrutiny much sooner and start integration testing almost as soon as two related components are built.
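As a minimal sketch of this idea (the component names `parse_trade` and `price_trade` are invented for illustration, not taken from any real system), unit tests cover each component in isolation, and an integration test appears the moment two related components exist:

```python
# Hypothetical sketch: two small components and their tests.
# The names and data formats here are invented for illustration.

def parse_trade(raw):
    """Component 1: turn a 'SYMBOL,QTY' string into a trade record."""
    symbol, qty = raw.split(",")
    return {"symbol": symbol, "qty": int(qty)}

def price_trade(trade, prices):
    """Component 2: value a parsed trade against a price table."""
    return trade["qty"] * prices[trade["symbol"]]

# Unit tests: each programmer checks their own component in isolation.
assert parse_trade("ABC,100") == {"symbol": "ABC", "qty": 100}
assert price_trade({"symbol": "ABC", "qty": 100}, {"ABC": 2.5}) == 250.0

# Integration test: written as soon as the two related components
# are built, without waiting for a separate integration phase.
assert price_trade(parse_trade("ABC,100"), {"ABC": 2.5}) == 250.0
```

The point is not the code itself but the timing: the integration check sits alongside the unit checks from the start, rather than in a later, separate stage.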

As the number of components increases, and as the components mature towards their target functionality, integration testing of low-level interfaces shifts towards testing of the combined functionality of the overall product. In other words, it gradually becomes system testing. In the middle of a timebox, there are parts of the prototype undergoing testing at all levels in parallel. The whole process resembles a round, like "Row, Row, Row Your Boat".

One further, distinctive level in this broadband process is that users are, from the outset, validating that the system will meet the needs of the business. They determine this through demonstrations during the timebox, but, more importantly, through hands-on access to an increasingly functional prototype.

The difference between the broadband process and the narrowband, linear sequence of unit, integration, system and acceptance stages is not that less testing takes place, or that fewer testing resources are needed, but that the timescale for testing is shortened. The testing is completed within days or even hours of the last code development for the iteration.

With this come a number of benefits. Early dynamic testing means defects are detected when they are cheapest to fix. The iterative nature of the development means that components and the system are tested and retested frequently, which establishes regression testing early.
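One way to picture how regression testing falls out of iteration (a hedged sketch only; the iteration names and the `add` component are invented) is a suite that simply accumulates each timebox's tests and re-runs all of them every iteration:

```python
import unittest

# Hypothetical sketch: tests accumulate across iterations, and the
# whole suite is re-run each time, so retesting earlier work is
# automatic rather than a separate regression phase.

def add(a, b):
    """A trivial stand-in for a component built in iteration 1."""
    return a + b

class TestIteration1(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

class TestIteration2(unittest.TestCase):
    # New tests from the next timebox join the same suite.
    def test_add_negative(self):
        self.assertEqual(add(-2, 3), 1)

# Each iteration runs every earlier test as well as the new ones.
loader = unittest.TestLoader()
suite = unittest.TestSuite()
for cls in (TestIteration1, TestIteration2):
    suite.addTests(loader.loadTestsFromTestCase(cls))
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The design choice is deliberate: because the suite grows rather than being rewritten, every re-run of it is, in effect, a regression test of everything built so far.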

Through technical prototyping, there is hard information for specifying, analysing, predicting, testing and measuring non-functional, technical characteristics, such as performance. Finally, there is no protracted post-development period while the users discover whether they have been delivered their dream bicycle or a rotten egg for Christmas.

The latter is a key point: most errors in production systems are not bugs in the technical sense, but conditions where the system just doesn’t do the right thing for the business.

Narrowband testing attempts (and mainly fails) to avoid such errors by focusing on verification against intermediate, IT-internal products which are abstractions of the users' requirements. The result is that there is often a long, contentious user acceptance test and major, unexpected rework to do before and after implementation.

Concerns about how to implement a broadband testing process are undoubtedly justified. It requires a rethink and a different approach. Developers adopting RAD find themselves having to change long-standing practices, and the same is true of testers.

If the change doesn't come naturally, it's worth seeking advice from qualified consultants in the field. I hasten to add, I'm a tester these days, not a consultant. My remarks are neither academic nor a sales pitch.

On a daily basis I use the broadband testing process I've described, and it works. It finally fulfils the aspiration of the professional tester to "test early and test often". It really can be the best testing you've ever done.

Paul Herzlich, Tester, Tradepoint Financial Networks PLC