The UK, mainly in and around the M25, is home to more than 50% of Europe’s data centre estate, much of which runs systems and applications for the financial sector. Businesses are no different from the population at large when it comes to ‘always available, instant service’, and so data centres are designed and built with high levels of redundancy so that even very bad events cannot easily disrupt the IT service in question. Grid blackouts are shrugged off even by the lowliest facility, with batteries and diesel generation standing by to supply all the energy the user needs.

Of course, data centres consume ever-increasing amounts of power, which has attracted the critical attention of governments, both national and European - although the main driver is the population’s demand for social networking, gaming, gambling and photo/video services. Mind you, the governments of the world haven’t yet woken up to the fact that rolling out ever-faster broadband only serves to accelerate power demand, but that is another matter.

So data centre energy effectiveness has been improving over the last few years, with the likes of Google publishing their PUE (an energy metric describing the ratio of total facility input energy to ICT energy) as if it were some kind of marketing campaign. They now run at 1.12 - so the electrical and mechanical systems that power and cool the ICT load consume the equivalent of 12% of the ICT energy itself. Clearly ‘1.0’ would be perfect, but the levels of security, availability and maintainability required without shutting off the service make that impossible.
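To make the arithmetic concrete, here is a minimal sketch in Python of the PUE calculation as defined above; the facility and ICT energy figures are hypothetical, chosen only to reproduce a 1.12 result.

    # A minimal sketch of the PUE arithmetic described above. The facility and
    # ICT energy figures are hypothetical, chosen only to reproduce a PUE of 1.12.

    def pue(facility_energy_kwh: float, ict_energy_kwh: float) -> float:
        """Power Usage Effectiveness: total facility input energy divided by ICT energy."""
        return facility_energy_kwh / ict_energy_kwh

    facility_kwh = 1_120_000  # energy drawn at the meter over the period (hypothetical)
    ict_kwh = 1_000_000       # energy delivered to the ICT equipment (hypothetical)

    result = pue(facility_kwh, ict_kwh)
    overhead = result - 1.0   # fraction of the ICT energy spent on power and cooling

    print(f"PUE = {result:.2f}, overhead = {overhead:.0%} of the ICT load")
    # Prints: PUE = 1.12, overhead = 12% of the ICT load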

The financial sector has some very special needs that make a Google-like PUE target of 1.12 almost impossible - however, this should not be an excuse for either energy or financial inefficiency. In London a PUE of 1.25 is a perfectly reasonable target, even for a bank, and growing cost-efficiency pressures are slowly curing the habit of running even development servers as if they were critical trading platforms.
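As a rough illustration of what the gap between 1.12 and 1.25 means in cash terms, the sketch below assumes a hypothetical constant 1 MW ICT load and a £0.10/kWh tariff; neither figure comes from the article.

    # A rough illustration of the gap between a 1.12 and a 1.25 PUE. The constant
    # 1 MW ICT load and £0.10/kWh tariff are assumptions, not figures from the text.

    HOURS_PER_YEAR = 8_760
    ict_load_kw = 1_000         # hypothetical constant ICT load
    tariff_gbp_per_kwh = 0.10   # hypothetical electricity price

    def annual_overhead_kwh(pue: float) -> float:
        """Energy spent per year on power distribution and cooling, beyond the ICT load."""
        return ict_load_kw * HOURS_PER_YEAR * (pue - 1.0)

    for target in (1.12, 1.25):
        kwh = annual_overhead_kwh(target)
        print(f"PUE {target:.2f}: {kwh:,.0f} kWh of overhead, about £{kwh * tariff_gbp_per_kwh:,.0f} a year")
    # Prints: PUE 1.12: 1,051,200 kWh of overhead, about £105,120 a year
    #         PUE 1.25: 2,190,000 kWh of overhead, about £219,000 a year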