Almost every day there seems to be a new report of a collapsed or collapsing technology project. For example, we recently heard that the Metronet-led refurbishment of London Underground is significantly behind schedule and over budget by an estimated £750m.
IT failings - including poor IT integration, a lack of supply chain software and rising IT costs - are reportedly to blame. And this is not Metronet's only technology failure: the uploading of new software in late November caused a total breakdown of the Central line - one of the busiest - in the Monday morning rush hour.
Such failings are unacceptable - and have a lot to do with the way the projects have been handled.
Climate of failure
There's a general climate of failure in public - and private - sector IT projects.
There's a worrying and real danger that we've become so accustomed to such reports that we've grown indifferent to them, dismissing each one as 'just another government IT project failure' - which is, of course, unacceptable.
We need to wake up to the fact that IT disasters are preventable and not a 'necessary evil' we have to put up with.
Public and private sector organisations now need to move away from the idea that software development is an "art form" and start recognising it as a managed business process which can, and must, be regulated and managed like any other project within a business. One approach which can help with this is Application Lifecycle Management (ALM).
ALM - which connects the various phases, activities and assets involved in delivering software - attempts to turn today's chaotic software delivery process into a more controlled and predictable procedure.
An important aspect of ALM is the ability to manage software quality throughout the entire lifecycle - whether that means validating that requirements are accurately defined or that the application has been properly tested. Paradoxically, quality is the linchpin of the software delivery process, yet it is often overlooked.
Excellence shouldn't be an afterthought
Quality within software delivery is frequently an afterthought: testing often begins late, when problems are difficult and expensive to fix, and it is generally done manually, and by only one of the organisations involved.
Some companies consciously put quality at the bottom of the list when trading off delivering on time against delivering on budget - a route which can only lead to software disaster - while many software delivery teams merely react to defects and requirements as they arise.
A more mature approach is clearly needed in order to deliver higher software quality while, at the same time, lowering development costs and improving time-to-market. With this in mind, quality should simply become part of every stage of a project's lifecycle.
Of course, any concept of 'quality' can be subjective, but in IT development it needs to be measurable: complete requirements, correct code and minimised defects, all aligned with business goals.
Prevention better than cure
Software delivery should take a proactive, preventative approach rather than the reactive, waterfall approach that many in the industry still favour, even today. Prevention is better than cure - anyone can relate to that.
Companies need to focus on improving their ability to confidently and consistently deliver higher quality software, which in turn will prevent the IT waste from project failures that we see so often.
Defects that slip through insufficient quality testing have many causes - licensing violations, poor design, unreadable code, ambiguous requirements, security vulnerabilities - and yet all of these can and should be prevented by making quality a priority throughout the whole development process.
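One way to make quality a priority before any code exists is to automate checks at the requirements stage. The sketch below is a hypothetical Python example - the function name, word list and sample requirements are all invented for illustration - showing how ambiguous wording, one of the defect causes listed above, could be flagged during project definition rather than discovered in testing:

```python
# Illustrative sketch: flag ambiguous wording in requirements before
# development begins. The term list and requirements are invented.

AMBIGUOUS_TERMS = {"fast", "user-friendly", "robust", "as appropriate"}

def flag_ambiguities(requirement: str) -> list:
    """Return the ambiguous terms found in a requirement statement."""
    lowered = requirement.lower()
    return sorted(term for term in AMBIGUOUS_TERMS if term in lowered)

requirements = [
    "The journey planner must respond within 2 seconds.",
    "The interface should be fast and user-friendly.",
]

for req in requirements:
    hits = flag_ambiguities(req)
    if hits:
        # Ambiguous requirements go back for clarification, not into code.
        print(f"REVIEW: {req!r} -> ambiguous terms: {hits}")
```

A check like this costs minutes to run in every review, whereas a defect rooted in an ambiguous requirement may not surface until the testing phase - or production.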
All too often, quality checks don't begin until the testing phase, by which point much time has passed and many lines of code have been written. A preventative approach to quality must start much earlier, even before code has been written.
Quality checks should be added at the point of project definition, tested earlier and more often, and traced throughout every phase of the software delivery lifecycle.
In short, we need to bring quality control to software development from inception, rather than waiting until it's already in the testing stages. This 'Test before you Leap' approach guards against the root cause of many IT project failures today.
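As a concrete illustration of 'test before you leap' - entirely hypothetical, since the article names no specific system - a requirement can be captured as an executable test before the first line of production code exists. The `calculate_fare` function and the fare rules below are invented for the example:

```python
# Hypothetical example: the requirement "a one-zone journey costs £2.00;
# each additional zone adds £0.50; a journey must cover at least one
# zone" is written down as executable checks before implementation.

def calculate_fare(zones: int) -> float:
    """Minimal implementation written to satisfy the checks below."""
    if zones < 1:
        raise ValueError("journey must cover at least one zone")
    return 2.00 + 0.50 * (zones - 1)

# Executable requirements: run from day one, on every change.
assert calculate_fare(1) == 2.00
assert calculate_fare(3) == 3.00
try:
    calculate_fare(0)
except ValueError:
    pass  # invalid input is rejected, as the requirement demands
else:
    raise AssertionError("expected ValueError for zero zones")
```

The point is not the trivial arithmetic but the ordering: the requirement is made checkable first, so any later change that breaks it fails immediately rather than in a late, manual testing phase.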
Test before you leap
Companies involved in software projects can take two steps to ensure a more mature approach to quality. Step one is to focus on pre-deployment and preventing issues early in the lifecycle, not finding and fixing them later on.
Testing early, consistently and often is imperative. That way, defects can be isolated before they affect other parts of the project and the system - or even bring it down entirely.
Step two is to ensure that quality is no longer an addendum in the software delivery process. Rather, it needs to be captured in project definition and be one of the main priorities, from the very beginning of the project.
It should encompass complete requirements, correct code, and minimised defects, and be aligned to the overall business needs of the organisation.
The woeful record of IT projects in the public sector shows that poor software quality can have enormous consequences - yet quality is still not taken as seriously as it should be.
Although it was always clear that the London Underground project was going to be complex and difficult, quality assurance and proper testing were seemingly not on the agenda.
Now, more than ever, it's time to break down the barriers between business, development and quality assurance, and to solve quality issues earlier in the project lifecycle. If we 'test before we leap', we will read fewer articles about IT project failures in the newspapers.