The push for green computing currently comes from two directions: cutting electricity bills and demonstrating corporate responsibility. So, where is your electricity being used in IT, and how might you make these savings?
A couple of years ago, Gartner estimated that around 2% of the world's energy is consumed by IT equipment. This matches the amount consumed by aviation and illustrates that IT is a significant consumer of energy. At around that time, BT carried out a study and reported that 0.7% of UK energy is used in data and telecoms centres alone. IDC reported that between 1996 and 2006, the number of servers increased by 6 million and the power drawn per server rose from 150 watts to 400 watts. This figure can be expected to have grown since then, though with the advent of virtual computing, that growth may now be reversing.
In terms of energy usage, IBM estimated that 40% of the power in a server is consumed by the processors, with the remainder going to disks and memory. On top of that, another 60% is consumed just to cool the server (in a conventional chiller aircon environment). This means that running a server's processor harder is very much cheaper than spreading the same load across more hardware; the fewer physical servers, the better. The Robert Frances Group also estimated that the average server processor runs at only 15-20% of capacity, so most servers have plenty of spare capacity to enable such a reduction in the physical server count.
A few other interesting snippets of information to focus minds:
- Doubling the speed of a processor quadruples its power consumption. An increase in processor speed from, say, 500MHz to 4GHz, an 8-fold speed increase, would result in a power requirement 64 times larger.
- Intel has a road-map towards more and more cores, with 30 already under consideration, but each running at a slower clock speed. We are facing a future of very large numbers of cores, clocked back so as to minimise energy consumption.
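The rule of thumb in the first bullet can be sketched in a couple of lines, assuming power scales with the square of clock speed (the function name and figures are mine, purely for illustration):

```python
def relative_power(base_mhz: float, target_mhz: float) -> float:
    """Power multiplier implied by a clock-speed change, assuming P is
    proportional to the square of frequency (doubling speed quadruples power)."""
    return (target_mhz / base_mhz) ** 2

# 500 MHz -> 4 GHz is an 8-fold speed increase, so 8^2 = 64 times the power.
print(relative_power(500, 4000))  # 64.0
```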
I presented on the subject of green computing a year or two ago and shall use the same method here to consider potential savings. I have created a mythical company of 100 staff with applications provided by an infrastructure of 10 servers. Each person has their own PC (so 100 PCs in total) with an old 17-inch CRT monitor, and their own desktop laser printer. The servers are in a single server room with its own conventional aircon. This probably matches many SMEs across the country.
What I have done is list a number of areas for power reduction, showing the estimated saving of each from the original electricity budget. Sadly, these savings are not always cumulative. For example, the electricity saved by putting an old, inefficient PC into power save will be far more than the saving from putting a new, energy-efficient PC into power save.
- Replace CRT monitors with LCD ones. Most people will be aware that CRT monitors consume far more power. LCD monitors are also reputed to be better on the eyes, are smaller on the desk and are much easier to mount on monitor arms to meet Health and Safety requirements. The company could expect to save 5% of its PC power costs by rolling out LCD monitors.
- Switching on PC power save options. Without having to purchase any software or hardware, simply using the PC power save options to turn off hard disks and monitors could save up to 6% of electricity. This assumes the PCs are left switched on overnight and at weekends, which many seem to be.
- Switching those PCs completely off overnight and weekends would save an additional 2%. Note though, that although there is software capable of doing this, beware the user who leaves their machine on all night with open documents. Ensure this is not your boss, as they will not be happy the next morning to find their changes have been lost!
- How old are those computers? A modern PC is going to be much faster and yet run on far less electricity. Up to 17% can be saved with new hardware.
- Replace printers with workgroup devices. Any laser printer consumes most of its energy heating its fuser. A single desktop laser can consume almost as much power as a modern workgroup laser, so replacing those 100 desktop printers with 10 workgroup lasers could save up to 20% of power, yet be faster, produce better quality and also reduce the cost per page thanks to more efficient printer cartridges. Whilst we are at it, enabling power save on those printers will save another couple of percent, at the expense of having to wait a little longer for printouts when power save is active.
- Swap those 10 servers for a virtualised solution. In this case, I would expect these 10 servers to be absorbed by, at most, three hosts. Without going into too much detail, savings of 7% of the overall budget are very easy to achieve. You can also expect some savings on your aircon, since it has far less work to do as less heat is being generated in the server room.
- Moving up the technology complexity, what about removing all those PCs and replacing them with thin clients using VDI or remote desktop solutions? This will increase the number of servers, but vastly improves security (no data leaves the server room). A standard Linux-based terminal is likely to consume only around 35 watts (excluding the monitor) compared with around 70 watts for a PC (though substantially less when power saving). The Sun Ray ultra-thin client consumes just 4 watts, though note that you then need Sun Ray servers, which will themselves consume energy.
- What about moving those servers to a Data Centre? A conventional aircon is always going to be inefficient. It will be using fans, pumps and worst of all, a chiller unit to transfer heat from the computer room to the outside and bring back in cooler air.
It is, of course, also bringing in dirty air from outside, which is why computer rooms are always dusty, even though kept as supposed clean rooms. A way of checking your computer room's efficiency is to calculate its PUE, or Power Usage Effectiveness. This is the total power consumed by your computer room divided by the power consumed by the actual IT equipment. Obviously, some power must be consumed by aircon, lights and so on.
A PUE of 3.0 might look good, but is actually very inefficient. The major Data Centres are now aiming at a PUE of around 1.5 or better. Reaching these efficiencies requires a newer aircon technology, using heat exchangers with the outside world and a sealed internal airway. When external temperatures are below about 20 degrees, the aircon system consumes only a small amount of power to run its large fans.
Between around 21 and 27 degrees, this cooling is supplemented by water fountains across the heat exchangers (requiring only water pumps to be powered), and only above 27 degrees will the conventional chillers be used. In my part of the country, that can be just a small number of hours a year! A secondary advantage of the sealed air system is that it no longer brings in dirty air from outside, and the continually recycled air is likely to get cleaner and cleaner as it passes through the filters, resulting in a spotless server room.
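The PUE arithmetic above can be sketched as follows; the kilowatt figures here are invented purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    the power consumed by the IT equipment alone."""
    return total_facility_kw / it_equipment_kw

# A room drawing 90 kW overall to power 30 kW of IT kit:
print(pue(90, 30))  # 3.0 -- looks plausible, but is very inefficient
print(pue(45, 30))  # 1.5 -- the level major Data Centres now aim for
```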
Sadly, there is a sting in the tail of using a Data Centre and, rather surprisingly, it is due to 'green legislation'. In 2010 the Carbon Reduction Commitment Energy Efficiency Scheme (CRC for short) came into being. Originally intended to come into force in 2011, it will now take effect in 2012 for any company using more than about £500,000 of electricity a year.
Large Data Centres will be consuming this sort of power and will need to purchase carbon 'credits' every year, an additional cost that will have to be passed on to clients. We might end up in the ridiculous situation where servers in the Data Centre account for less carbon (so are better for the planet), yet it is financially cheaper to continue running your own server room, as in-house server rooms are currently exempt from CRC. For this reason, I have not quoted any savings for the Data Centre, since I do not have access to the CRC costs.
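As a rough closing illustration, the individual estimates above can be combined by multiplying the remaining fractions of the budget rather than summing the percentages. This is my own crude assumption (the article's figures apply to slightly different baselines, and, as noted, the savings are not cumulative), but it gives a feel for the overall scale:

```python
# Estimated savings quoted earlier in the article; the method of combining
# them (multiplying residual fractions) is an assumption for illustration.
savings = {
    "LCD monitors": 0.05,
    "PC power save": 0.06,
    "PCs off overnight": 0.02,
    "newer PCs": 0.17,
    "workgroup printers": 0.20,
    "server virtualisation": 0.07,
}

remaining = 1.0
for measure, fraction in savings.items():
    remaining *= 1.0 - fraction  # each measure trims what is left

print(f"combined saving: {1.0 - remaining:.0%}")  # combined saving: 46%
```

Even on this back-of-envelope basis, our mythical SME could plausibly halve its IT electricity bill before a Data Centre move is even considered.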