But before I discuss my three highlights I have to say how much I enjoyed the quality of the presentations; the depth, breadth, content and delivery were all well up to our expected high standards.
Highlight number one is that metrics are essential for any testing process and, consequently, for any development process improvement. The various metrics and ratios presented by Dot Graham gave valuable insight into how the tester can and should develop benchmarks for the various development life cycle processes.
Raw numbers are not good enough. It is the ratios that persuade those involved to take notice. It is the old question of 'how well did I do?' compared with 'how well did someone else do?'. Management in almost any organisation begins to take notice when numbers are presented in the form of 'how well it went' compared with 'how well it could have gone'.
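To make the point concrete, here is a minimal sketch of one such ratio. The defect detection percentage shown below is an illustrative choice of mine, not necessarily one of the figures presented at the SIG; the function name and the sample numbers are hypothetical.

```python
def defect_detection_percentage(found_in_test: int, found_after_release: int) -> float:
    """Ratio of defects caught by testing to all known defects, as a percentage.

    A raw count of 90 defects found says little on its own; expressed against
    the total that could have been found, it becomes a benchmark managers notice.
    """
    total = found_in_test + found_after_release
    if total == 0:
        raise ValueError("no defects recorded")
    return 100.0 * found_in_test / total

# Hypothetical project: 90 defects found in test, 10 escaped to production.
print(defect_detection_percentage(90, 10))  # 90.0
```

The same raw figure of 90 defects would look very different against 40 escapes rather than 10, which is exactly the 'how well' versus 'how well could it have been' comparison the article describes.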
The astute manager quickly latches onto key ratios in order to 'get a handle on' the business problem. Old stagers at the SIG will remember the early work done on this and presented by Brian Winterborne. The current work by Dot and her colleague Mark Fewster adds valuably to it.
Enough managers are sufficiently vain to want to know how well their division compares both to other divisions and to some established norm. For further reading I suggest you explore the excellent book 'AMI: A Quantitative Approach to Software Management' (Addison-Wesley, ISBN 0-201-87746-5, 169 pages), produced by the Application of Metrics in Industry User Group.
The second key point came from the presentation by Mark Harding of Microsoft. The success or failure of testing depends heavily upon the interpersonal and communication skills of the test team leader; diplomacy is essential. The role of the tester is never easy, as it so often entails pointing out the failings of others.
There is a need for the tester and the developer to share a common goal of excellence. The test team leader should see to it that the combined team of developers and testers shares his discomfort over the prevailing level of quality. From this shared discomfort comes the common goal of product, and eventually process, improvement.
It becomes the role of the tester to facilitate this endeavour. Mark shared with us that his best testers often come from users, who just want to be part of the product improvement process.
It is the ordinary users who understand the implications of poor quality in the delivered application. They understand the cost implications and can present these to the developers. For further reading I strongly recommend 'I'm OK, You're OK' by Thomas A. Harris.
The final point is on requirements traceability. I cannot recall the point in my teaching at which I adapted my programmer's knowledge of CICS (yes, good old IBM mainframe CICS) and CICS tables to the conundrum of linking the test event to the business requirement.
I must have made this link (excuse the pun) a long time ago as I have used the concept of the Thread Table for ages. Anyhow I digress; Anthony Finkelstein's talk, with an abundance of wonderful detail, brought us up to date with developments in traceability. Traceability is set to become the next big issue and it is an issue that will not go away.
Traceability was a hot topic at STAR West in San Diego, with some dozen tools up for discussion. From the tester's point of view, requirements traceability must provide an unbroken, two-way link from the requirement to the test results. The test status must be directly linkable to the business objective.
The business objective must be directly supported and assured by the satisfactory conduct of a test or a suite of tests. Any break in the chain means that the thread is broken and the project will unravel. There needs to be a link that will stand the closest audit. This leads me on to consider the role of testing as a risk reduction process.
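The unbroken chain described above can be sketched in code. This is purely a hypothetical illustration of mine — none of these class or field names come from Anthony Finkelstein's talk or from any of the STAR West tools — showing a requirement assured only when every test linked to it has passed, with the link navigable in both directions.

```python
from dataclasses import dataclass, field

@dataclass
class TestResult:
    test_id: str
    req_id: str          # back-link: every result points at its requirement
    passed: bool

@dataclass
class Requirement:
    req_id: str
    description: str
    results: list = field(default_factory=list)  # forward link to test results

    def assured(self) -> bool:
        """Assured only if at least one test exists and all of them passed.

        A requirement with no tests is a break in the chain, not a pass.
        """
        return bool(self.results) and all(r.passed for r in self.results)

# Hypothetical thread: one requirement, two linked test results.
req = Requirement("R-12", "Customer can cancel an order")
req.results.append(TestResult("T-45", req.req_id, passed=True))
req.results.append(TestResult("T-46", req.req_id, passed=True))
print(req.assured())  # True

untested = Requirement("R-13", "Orders are archived after a year")
print(untested.assured())  # False: broken thread, nothing assures this objective
```

The design choice worth noting is that an untested requirement reports `False` rather than raising no alarm: any break in the chain surfaces immediately, which is the audit property the article calls for.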
The traceability must include the assessment of the riskiness of the initial business endeavour, followed by an assessment of risk for each function and the associated non-functional attributes. The risks range from what will occur if the system is not implemented through to the implemented system going wrong in a number of ways.
Clearly the traceability must include the tests being performed to develop confidence that the risk has been contained. At this point we are perhaps back to the common shared goals of the developer, the tester and the user of an application. A good book for further reading is 'Software Requirements' by Davis (Prentice Hall, ISBN 0-13-562174-7); its 521 pages are full of very useful information.
If you do one thing after reading this article I suggest you re-read the presentations from the last SIG. If you do two things I suggest that you get the books and continue your research.
If you do three things, and this is the difficult one, I suggest you get your manager to come to the next SIG, especially if it is on high level issues such as those presented last Monday.
Geoff Quentin, founder chairman of the SIGIST