Andrew Griffiths, managing director of Lamri, looks at the potential of grid computing.

Well, what is it? Grid is a service for sharing computing power and data storage over a network or the internet. Grid computing has the design goal of solving problems too big for any single supercomputer whilst retaining the flexibility to work on multiple smaller problems.

Doesn't really sound very exciting? In its current technical and commercial incarnations it really is a bit, well, underwhelming. But let's open our minds to what might be if grid concepts could be applied at the application level in a connected world (ignoring a few very major technical difficulties).

Imagine a world where you can buy access to computing applications (not just computing power) and storage on demand. In this world you don't buy software licences; you purchase run time for software components that execute specific business functions. Your client relationship management, finance and logistics applications are 'rented' from an application broker, and your data is stored on a resilient grid data farm.

When you have a performance problem you don't re-engineer the application or buy more hardware; you put together a specification to be placed in a reverse auction with an application broker. The specification uses the now-standard component definition from a public interoperability library. This, together with an XML-specified set of application 'tweaks' and a performance specification, is all you need to define your needs crisply.
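Purely to make the fantasy concrete, here is a minimal sketch of what such a specification might look like, written in Python for convenience. Every element name, component reference and performance figure is invented; the point is simply that a standard component definition, a set of XML 'tweaks' and a performance specification could sit together in one machine-readable document.

    # Hypothetical sketch: assembling a reverse-auction specification as XML.
    # Every element name, component ID and figure here is invented.
    import xml.etree.ElementTree as ET

    spec = ET.Element("auction-specification")

    # Reference to a standard component definition in a (hypothetical)
    # public interoperability library.
    component = ET.SubElement(spec, "component")
    component.set("library", "public-interop-library")
    component.set("definition", "crm.order-processing/2.1")

    # Application 'tweaks' expressed as simple XML settings.
    tweaks = ET.SubElement(spec, "tweaks")
    ET.SubElement(tweaks, "setting", name="currency", value="GBP")
    ET.SubElement(tweaks, "setting", name="vat-handling", value="uk-standard")

    # The performance specification the bids must meet.
    performance = ET.SubElement(spec, "performance")
    ET.SubElement(performance, "throughput", transactions_per_second="500")
    ET.SubElement(performance, "latency", percentile="95", max_ms="200")
    ET.SubElement(performance, "availability", target="99.9")

    # Serialise the document, ready to be placed with an application broker.
    print(ET.tostring(spec, encoding="unicode"))

A broker could, in principle, match bids against a document like this automatically, which is what makes the lunchtime auction described next even conceivable.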

Place the auction in the morning and by lunchtime all the leading component suppliers have responded, with performance estimates validated by the application broker. When you make your choice of supplier, pre-testing is conducted by the broker and a sandpit is made available in which to conduct validation testing.

Given the user interface remains static, all you need to do is run the automated regression test suite to check you are ready to go. If the tests pass you can push the button and be home in time for tea.

OK, this is fantasy land - but if it were possible, what would it do to the IT market, and what would it take to make it happen? Service providers would be able to extend their reach from large corporations to much smaller companies, as most of the capital costs would be shared.

Portable outsourcing (POS) would become feasible, moving standard functions and processes to the cheapest resource on offer at any given time and delivering considerable reductions in in-house effort and capital investment.

Using the public grid, or a shared private grid, for failover in mission-critical applications would also generate considerable savings in capital equipment. Even in this fantasy world, though, there are some key issues that would need addressing.

Bandwidth becomes critical, as does communication latency: anything other than blisteringly fast and reliable connections will drastically degrade performance, and the amount of error-recovery software required will not be for the faint-hearted.

Applications running in this environment would need to be loosely coupled and capable of massive multithreading - two areas where UK plc is not exactly strong today.
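To illustrate in miniature what that loose coupling (and the unglamorous error recovery mentioned above) might look like, here is a toy sketch in Python. The 'grid node' is simulated and every figure is invented; the shape of the code - independent chunks of work farmed out in parallel, each wrapped in retry logic - is the point.

    # Toy sketch of the loosely coupled, heavily threaded style such
    # applications would need, plus the unglamorous error recovery.
    # The 'grid node' is simulated; nothing here is a real grid API.
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def call_grid_node(chunk: int) -> str:
        """Simulate a remote call that is slow and occasionally fails."""
        time.sleep(random.uniform(0.01, 0.05))   # network latency
        if random.random() < 0.2:                # transient failure
            raise ConnectionError(f"node dropped chunk {chunk}")
        return f"result-{chunk}"

    def with_retries(chunk: int, attempts: int = 3) -> str:
        """Retry a failed chunk a few times before giving up."""
        for attempt in range(1, attempts + 1):
            try:
                return call_grid_node(chunk)
            except ConnectionError:
                if attempt == attempts:
                    raise
                time.sleep(0.05 * attempt)       # simple backoff

    # Each chunk is independent, so the work farms out across many threads
    # with no shared state to coordinate - the loosely coupled part.
    results, abandoned = [], []
    with ThreadPoolExecutor(max_workers=16) as pool:
        futures = {pool.submit(with_retries, c): c for c in range(40)}
        for future in as_completed(futures):
            try:
                results.append(future.result())
            except ConnectionError:
                abandoned.append(futures[future])

    print(f"{len(results)} chunks completed, {len(abandoned)} abandoned")

Even in a toy like this, the retry and give-up logic is already a sizeable share of the code, which gives a flavour of why the error-recovery burden is not to be underestimated.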

Regardless of the difficulty, the financial opportunities presented by grid are huge. POS holds the potential to let organisations bid for computing resources, running their code in a suitable grid environment dependent on the application's execution profile (another area of minor technical difficulty).

This removes the long-term tight tie-in to a given outsourced service provider. The flexible market created by grid will allow companies to free themselves from the heavy investments in software licences and tin.

Even if the total cost of ownership is higher, the flexibility provided will help manage business risk in uncertain times. This is not the traditional view of grid's value to business, but it will become a key driver of investment because there is real profit to be made.

Another interesting area is environment simulation: the ability to simulate older operating and application environments completely within the massive computing power of the grid would allow some expensive-to-maintain hardware to be decommissioned and application execution to move onto virtual environments.

Application-level grid will require massive changes in software development practice, modelling and architectural thinking to be successful. When you consider how long it has taken model-driven architecture (MDA), a building block for many of the higher-level features required for my fantasy to become reality, to make even a small dent in the real world, it is clear that it will take many years to reach this goal.

A grid-based market economy will take time to mature, and private grids will tend to dominate the market before open grid resources become commonly available. I am also not convinced that anyone really wants to 'step back into all the old COBOL code' until the relevant hardware is truly dead and buried.

I had hoped IBM might really drive the grid agenda at about the time they purchased Rational. I was working with an old friend, Don Kavanagh, on a review of the development tools market and of the commercial applications of grid computing. At the time I mused that IBM might just be executing a brilliant strategy based on applying grid computing principles commercially at the application level.

Rational would have provided the capabilities to deal with the reference model question, probably building on MDA principles, and its deployed customer base for modelling would have provided a great platform for narrowing the debate about how the reference model should be deployed.

This would have opened up fantastic opportunities for IBM, considerably lowering the barrier to entry for strategic business applications and providing a platform for 'renting' usage of computing applications. IBM has moved into this area with 'on demand business', but it falls short of my aspirations for them.

In the shared and hosted application space there have been a number of successes. My personal favourite has been Salesforce.com, which provides a full, web-based client relationship management package that you can buy access to for a few dollars per user, with a very short commitment window.

This model has driven sales force automation into small and medium-sized companies, and, surprisingly, some very large companies have taken this route too. The binding of the end user to the specific application is too tight to create a flexible market, but companies are using it as a route to reduce their fixed overheads and gain access to technology they could not normally afford.

I have always liked the potential of grid computing, but simple, pragmatic market choices like these could all but kill the need for grid in mainstream business.

Grid is not yet a primary focus for CIOs, and many have never even heard of it - or of the fact that the largest grid application has five million users around the world. Any other technology with a user base of five million would be considered 'mainstream'.

I suppose I should come clean and tell you what the application is: SETI@home. I know, searching for extra-terrestrial intelligence does not really lend credibility to the example, but free access, from existing hardware, to two million years of aggregate computing time since 1999 should be of interest to any CIO.
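As a rough sense-check, using only the figures quoted above - five million users and two million years of aggregate computing time - the arithmetic per participant looks like this:

    # Back-of-the-envelope arithmetic using only the figures quoted above.
    users = 5_000_000                 # SETI@home participants
    aggregate_cpu_years = 2_000_000   # donated computing time since 1999

    cpu_years_per_user = aggregate_cpu_years / users
    cpu_hours_per_user = cpu_years_per_user * 365.25 * 24

    print(f"{cpu_years_per_user:.1f} CPU-years per participant")   # ~0.4
    print(f"{cpu_hours_per_user:,.0f} CPU-hours per participant")  # ~3,500

That is roughly 3,500 hours of processor time per participant, donated from hardware that was already sitting on their desks.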

I suppose that we will really know that grid computing has arrived when we see adverts on the back of The Economist. Funny that, I saw one of them last week. Watch out... The world is about to change, maybe.

Andrew Griffiths is the managing director of Lamri, a software process improvement consultancy. Thanks to Don Kavanagh, director of research, Greengrid, who helped with this article.