It is well known that software development projects have a long history of failure. More than three quarters of today's projects run late, and fixing errors consumes 80 per cent of the average project budget.

Certainly no other business process today would be allowed to endure this sort of failure rate, yet software development is often left to chance, despite the importance and cost of the process. Gordon Cruickshank, co-founder of eoLogic, looks at what developers must do to improve their project success rate.

Most projects fail because of a profound lack of visibility and transparency into development processes. This lack of visibility only increases with the complexity of projects and IT environments, and the physical distribution of software development teams.

Outsourced projects are therefore often even more challenged when it comes to visibility, transparency and control. One of the major challenges companies must address is gaining accurate, transparent visibility throughout the software development lifecycle, not just in testing.

Yet today's systems for collecting, testing and reporting on software development are simply not fit for purpose. According to Forrester Research, software development is still managed with traditional project management tools, an approach it rightly describes as 'a state-of-the-art 40-year-old process.'

As IT environments become ever more complex, the pressure from the business to deliver robust software architectures falls ever more heavily on the shoulders of software architects, developers and testers.

Where is the complexity?

Enterprise applications have always been complex. Extremely high levels of concurrency are needed to process transactions from hundreds of simultaneous users, which in turn makes balancing resources and maintaining transactional integrity difficult. The characteristics of the multi-user production execution environment differ greatly from those of the development environment, which also contributes to the potential for project disasters.
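
To make the hazard concrete, here is a minimal, hypothetical Java sketch (the class and figures are invented purely for illustration) of the classic lost-update problem: two threads performing an unsynchronised read-modify-write on a shared balance, so that deposits silently disappear under concurrent load.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical illustration: a non-atomic read-modify-write on a shared
// balance. Under concurrent access, deposits can be lost, breaking
// transactional integrity.
public class LostUpdateDemo {
    private static long balance = 0; // shared state, no synchronisation

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 10000; i++) {
            pool.submit(() -> {
                long read = balance;  // read
                balance = read + 1;   // modify-write: another thread may
                                      // have deposited in between
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        // Expected 10000; typically prints less, because updates were lost.
        System.out.println("balance = " + balance);
    }
}
```

Run single-threaded in a development environment, such code appears to work perfectly; only under production-level concurrency does the integrity failure surface.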

Some improvements have been made. The architecture of enterprise applications is often more structured now than in the past, when the database was often left to handle the bulk of the work, but other factors have steadily increased application complexity.

Ever-increasing transaction throughput, higher levels of user access, expectations of richer interaction and the growing need to integrate disparate systems mean that modern enterprise systems must be designed with great care and understanding if the finished system is to be reliable and achieve high levels of performance.

The huge growth in outsourcing of software development to low-cost markets, such as India and Eastern Europe, has created challenges further down the application lifecycle in areas such as quality assurance, testing and software knowledge.

The benefits of cheaper code development are obvious, but organisations focused too heavily on speed of delivery have often given too little weight to whether that code will stand up in a real environment. Architectural faults discovered late in development invariably result in expensive, time-consuming rework and delayed deployments.

Recently, organisations have begun to emphasise the re-use of existing applications by making them available as sets of independent services through service-oriented architectures (SOA). An increasing trend towards industry consolidation, particularly in financial services and reinforced by global recessionary forces, will drive the need to integrate systems following company mergers and acquisitions.

SOA is an excellent way to integrate systems with different architectures, and often different base technologies, but its very newness and lack of precise definitions can cause major IT headaches. Blending tools and applications together to service-enable them is not a simple process, so the ability to understand and validate the 'new' services is essential to the successful emergence of SOA-based IT environments, as the sketch below suggests.
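
As a rough sketch of what service-enabling existing logic can involve (assuming Java and the standard JAX-WS API bundled with Java SE 6; the service and method names here are hypothetical), a legacy calculation might be exposed as a SOAP service like this:

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical sketch: exposing existing pricing logic as a SOAP web
// service via JAX-WS. The class and method names are invented for
// illustration only.
@WebService
public class PricingService {
    // Imagine this delegating to a legacy pricing module being re-used.
    public double quote(String productCode, int quantity) {
        return quantity * 9.99; // placeholder for the legacy calculation
    }

    public static void main(String[] args) {
        // Publishes the service and its generated WSDL contract at
        // http://localhost:8080/pricing?wsdl
        Endpoint.publish("http://localhost:8080/pricing", new PricingService());
        System.out.println("Pricing service published.");
    }
}
```

The annotation itself is the easy part; the difficulty described above lies in validating that the newly exposed service behaves correctly under real workloads.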

Finally, increasing enterprise IT complexity has also expanded the role of consultants on IT projects. As the global economy slows, however, the pressure on businesses to reduce costs will see the use of external consultants diminish.

The challenge is that these consultants walk away with considerable know-how about system architecture and software design. If organisations are to cut back on consultants, they need to find more cost-effective ways to retain that knowledge of their IT environments.

Where next?

The expanding burden of complexity placed on software developers and testers calls for a new way to look at software quality assurance. Waiting until the testing phase to detect architecture problems, using large-scale load testing with massed virtual-user simulations, simply leaves it too late.

Leaving it too late is all too common: the NHS and Heathrow's Terminal 5 are recent examples of the devastating effects of discovering serious problems late in the development lifecycle.

Organisations must look at the way they test and manage the quality of software as it is being developed in order to reduce the negative impact such problems create. Without greater visibility and validation of IT environments earlier in the development process, the impact on reputations, revenue and customer service can be severe.

Tools are now available that predict and detect complex reliability and performance problems much earlier. By analysing software frameworks at runtime, predictive software quality assurance solutions can map and validate systems automatically: visualising and checking service processing sequences, and guiding developers through an intuitive visual experience underpinned by knowledge tools that encode rules and best practices for software development.
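
The following is a minimal sketch of the general idea behind runtime sequence checking, using a plain Java dynamic proxy. It illustrates the concept only and is not how any particular commercial tool, eoSense included, is implemented; all names are invented. A proxy observes calls to a component and flags an ordering that breaks a simple rule:

```java
import java.lang.reflect.Proxy;
import java.util.HashSet;
import java.util.Set;

// Conceptual sketch: a dynamic proxy intercepts calls to a component and
// warns when a processing-sequence rule is violated (here: process()
// must not run before begin()).
public class SequenceCheckDemo {
    interface TxResource { void begin(); void process(); void commit(); }

    static TxResource checked(TxResource target) {
        Set<String> seen = new HashSet<>();
        return (TxResource) Proxy.newProxyInstance(
            TxResource.class.getClassLoader(),
            new Class<?>[] { TxResource.class },
            (proxy, method, args) -> {
                if (method.getName().equals("process") && !seen.contains("begin")) {
                    System.err.println("Rule violated: process() before begin()");
                }
                seen.add(method.getName());
                return method.invoke(target, args);
            });
    }

    public static void main(String[] args) {
        TxResource r = checked(new TxResource() {
            public void begin()   { System.out.println("begin"); }
            public void process() { System.out.println("process"); }
            public void commit()  { System.out.println("commit"); }
        });
        r.process(); // flagged at the moment the faulty sequence occurs
        r.begin();
        r.commit();
    }
}
```

The point is that the violation is reported the instant the faulty construction executes, during ordinary development runs, rather than emerging weeks later under load testing.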

By running predictive software quality assurance tools within application development environments, developers can detect construction problems as soon as they are introduced. It is well known that the cost of detecting and fixing application issues grows exponentially over time. These new solutions hugely reduce development risk and can easily cut costs by 30 per cent, delivering high-quality software faster.

As organisations look to tighten their belts in 2009, many will aim to get more from existing assets by using service-orientation to integrate and expand their capabilities. The success or failure of these complex initiatives will be determined by how well organisations understand their existing systems and control development.

Developers must look at how, when and where they undertake software quality assurance and seek to do it earlier in the lifecycle of new applications and services to reduce risks and costs.

Now is the time for IT professionals, developers and testers to review the way they build enterprise applications and to herald a new era of predictive software quality assurance: one that delivers high-performance, reliable and resilient applications and maximises the availability of business-critical operations.

Gordon Cruickshank is the co-founder and CEO of eoLogic, a pioneer in predictive software quality assurance solutions. eoLogic's eoSense is the first predictive software quality assurance tool to detect complex reliability and performance problems throughout the application lifecycle.