It's one of the top buzzwords, but before you jump on the virtualisation bandwagon, make sure that you know what you're getting into. Frank Boesche explains that you need to be careful as you make your way into a virtual world.

In September 2006 I attended the IDC Virtualisation Forum in Toronto, Canada, as a panel expert. The subject of this forum was the 'ROI of Virtualisation'.

Having dealt with this subject on an ongoing basis for the last two years, I found it a good opportunity to share some of my experiences and help organisations find the right approach.

While many of the questions were of a technical nature - 'How do you operate the environment?', 'How do you plan for capacity?' - some allowed the focus to shift to how to get there and why to virtualise at all.

While there I, of course, attended many of the presentations throughout the day, not only to see how my contribution could add value to the overall content of the event but also to pick up some new ideas. The key messages I took away were:

  • Cope with limited budget
  • Build centralised infrastructure

Of course, and this is the idea of such events, virtualisation does not mean much - or does not mean the same thing - to everyone. So let's look into it.

What really is virtualisation?

Wikipedia's definition of virtualisation is: 'In computing, virtualisation is the process of presenting a logical grouping or subset of computing resources so that they can be accessed in ways that give benefits over the original configuration.

'This new virtual view of the resources is not restricted by the implementation, geographic location or the physical configuration of underlying resources. Commonly virtualised resources include computing power and data storage.

'A new trend in virtualisation is the concept of a virtualisation engine, which gives an overall holistic view of the entire network infrastructure by using the aggregation technique.

Another popular kind of virtualisation, and currently very referred to in the media, is the hardware virtualisation for running more than one operating system at the same time, via nano-kernels or Hardware Abstraction Layers, such as Xen.'

In simple terms, hardware virtualisation involves the emulation of hardware components and the ability to make an operating system believe that the emulated hardware is the real thing.
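
To make the idea a little more tangible, here is a deliberately simplified trap-and-emulate sketch in Python - my own illustration, not the design of VMware or Virtual Server. The 'guest' code talks to what it believes is a serial port; in reality every port access is intercepted and answered in software:

    # Minimal trap-and-emulate sketch: a purely hypothetical illustration of
    # hardware emulation, not the design of any real virtualisation product.

    class EmulatedSerialPort:
        """Pretends to be a serial controller with a status and a data register."""
        DATA_PORT = 0x3F8     # data register (classic COM1 port numbers)
        STATUS_PORT = 0x3FD   # line status register

        def __init__(self):
            self.output = []

        def io_read(self, port):
            if port == self.STATUS_PORT:
                return 0x20   # "transmitter empty" - always ready to send
            return 0x00

        def io_write(self, port, value):
            if port == self.DATA_PORT:
                self.output.append(chr(value))   # capture what the guest "prints"

    class TinyHypervisor:
        """Routes the guest's trapped port I/O to emulated devices."""
        def __init__(self):
            serial = EmulatedSerialPort()
            self.devices = {0x3F8: serial, 0x3FD: serial}

        def handle_in(self, port):
            return self.devices[port].io_read(port)

        def handle_out(self, port, value):
            self.devices[port].io_write(port, value)

    # A "guest" driver that believes it is talking to real hardware:
    hv = TinyHypervisor()
    for ch in "hello":
        while (hv.handle_in(0x3FD) & 0x20) == 0:   # wait until transmitter is ready
            pass
        hv.handle_out(0x3F8, ord(ch))
    print("".join(hv.devices[0x3F8].output))        # -> hello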

There are many more types of virtualisation:

Storage virtualisation

A technology to superimpose a (theoretically) unlimited number of storage resources over a shared physical storage infrastructure, e.g. a SAN (Storage Area Network). An example of such an environment is something I deal with quite often these days, namely the provisioning of virtual file servers through NAS (Network Attached Storage) technology, meta-LUNs, iSCSI LUN virtualisation and so on.
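
To make this a little more concrete, here is a deliberately simplified Python sketch - my own illustration, not how any particular SAN or NAS product works. One virtual volume is presented as a single continuous range of blocks, while a mapping layer decides which physical device and offset actually services each request:

    # A toy block-address translation layer: one virtual LUN stitched together
    # from extents on several physical devices. Purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class Extent:
        device: str        # name of the physical disk or array
        start_block: int   # first physical block of this extent
        length: int        # number of blocks in this extent

    class VirtualLUN:
        """Presents several physical extents as one continuous block range."""
        def __init__(self, extents):
            self.extents = extents

        @property
        def size_blocks(self):
            return sum(e.length for e in self.extents)

        def translate(self, virtual_block):
            """Map a virtual block number to (physical device, physical block)."""
            offset = virtual_block
            for extent in self.extents:
                if offset < extent.length:
                    return extent.device, extent.start_block + offset
                offset -= extent.length
            raise ValueError("block beyond end of virtual LUN")

    # A 3,000-block virtual volume backed by two different (hypothetical) arrays:
    lun = VirtualLUN([Extent("array-A", 10_000, 2_000),
                      Extent("array-B", 0, 1_000)])
    print(lun.size_blocks)       # 3000
    print(lun.translate(2_500))  # ('array-B', 500)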

Desktop virtualisation

Also associated with hardware virtualisation, whereby desktop operating systems are installed onto emulated hardware.

Another form of desktop virtualisation, however, is provided by thin client technology, whereby users access a remote desktop on a multi-user operating system. Today such an OS (Operating System) would be Microsoft Windows in Terminal Server mode.

Network Virtualisation

First made widely available through Cisco Systems' VLAN technology, allowing logical segmentation of network infrastructure at the MAC (Media Access Control) layer.
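
As a rough illustration of what that segmentation means on the wire (assuming standard IEEE 802.1Q tagging rather than any vendor-specific scheme), the sketch below builds a VLAN-tagged Ethernet frame in Python. The 12-bit VLAN ID carried in the tag is what lets switches keep traffic from different logical segments apart:

    # Build a (simplified) 802.1Q-tagged Ethernet frame to show where the
    # VLAN ID lives. Illustrative only - no checksums, padding or real I/O.

    import struct

    def tag_frame(dst_mac: bytes, src_mac: bytes, ethertype: int,
                  payload: bytes, vlan_id: int, priority: int = 0) -> bytes:
        """Insert an 802.1Q tag (TPID 0x8100 + priority/VLAN ID) into a frame."""
        assert 0 <= vlan_id < 4096        # the VLAN ID is a 12-bit field
        tci = (priority << 13) | vlan_id  # tag control information
        tag = struct.pack("!HH", 0x8100, tci)
        return dst_mac + src_mac + tag + struct.pack("!H", ethertype) + payload

    frame = tag_frame(dst_mac=b"\xff" * 6,                  # broadcast
                      src_mac=b"\x00\x11\x22\x33\x44\x55",
                      ethertype=0x0800,                      # IPv4
                      payload=b"...",
                      vlan_id=42)
    print(frame.hex())   # the bytes '8100002a' carry VLAN 42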

Security Layer Virtualisation

One form is the ability to provide virtual firewalls, thereby cost-effectively creating a network architecture which not only increases the level of security within a corporate or service provider infrastructure but also allows a business to establish a tiered, Service Oriented Architecture (SOA) in network infrastructure terms.

Network Name Virtualisation

Better known as clustering and network load balancing.
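
Conceptually, one published name is answered by several real servers in turn. The toy Python sketch below, using purely hypothetical names and addresses, shows the idea of a virtual name fronting a pool of members - real clustering and load-balancing products obviously do far more (health checks, failover, session affinity):

    # A toy round-robin resolver: one virtual service name answered by several
    # real hosts in turn. Hypothetical names and addresses, illustration only.

    from itertools import cycle

    class VirtualName:
        """Maps a single published name to a rotating pool of real servers."""
        def __init__(self, name, members):
            self.name = name
            self._pool = cycle(members)

        def resolve(self):
            return next(self._pool)

    files = VirtualName("files.example.internal",
                        ["10.0.0.11", "10.0.0.12", "10.0.0.13"])
    for _ in range(4):
        print(files.resolve())   # 10.0.0.11, .12, .13, then .11 again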

Last but not least, application virtualisation

This exists in various forms such as multiple-instance application or database services, application partitioning and hosting partitioning - the latter often known as virtual private servers, an ISP offering supported by Virtuozzo operating system virtualisation technology. Even thin client application publishing can fall under this category.

Back to the original subject of hardware virtualisation. The technology has been on the market for quite some time - in fact since the dawn of the mainframe, when it was originally known as hardware partitioning. Emulation of hardware components (hardware virtualisation) on the Windows platform was first developed by VMware - now an EMC company.

Microsoft followed suit with the acquisition of Connectix. Connectix started with Virtual PC for MacOS and then ported the product to Windows. Microsoft moved into a competing position by pushing an evolution from Virtual PC to Virtual Server (the desktop heritage still persists in the product).

A system comprising emulated hardware is known as a Virtual Machine, or VM. This, however, is not to be confused with a Java Virtual Machine or similar technologies. Having said that, there are conceptual similarities, such as the provisioning of an isolated environment for the purpose of running software.

I mentioned key messages learned from the IDC event. Here are some other messages that came across:

  • VMware still has a strong market share. ESX is predominantly being implemented.
  • No particularly significant implementation of Microsoft Virtual Server
  • Spend your money wisely. Many might agree that $17,000 for a single underpowered server (3GB RAM), a single point of failure and a migration from multi-purpose physical to multi-purpose virtual still has to prove its ROI. All cases had one common denominator: centralisation of server infrastructure
  • Operations and standardisation clearly benefited from virtualisation; server imaging became more efficient
  • The processes around disaster recovery improved due to the simple fact that virtual infrastructure was easier and faster to restore
  • Hardware costs decreased. Licensing costs - of course - did not.

All in all it was a very interesting and useful event. There were not many new revelations for me, but it was still valuable that IT people from all levels of management, administration and support could share their views and experiences during breaks as well as lunch sessions.

The strategy

When I joined PwC I noticed virtualisation had been on the table for quite some time, although it was not really identified as virtualisation but as server consolidation.

When I took on this rather abandoned subject in October 2004, I began by finding out why it was on the table in the first place.

A number of facts:

We leased servers. This is very tax efficient; like leasing cars, it is a full write-off. Furthermore, it is widely known as a cost-effective path towards technology refresh. So where does the problem lie?

Answer: the number of server leases to be managed - over a hundred servers a year. The process of managing a lease replacement is quite tedious as it involves:

  • Planning of decommissioning, commissioning and the transition - namely rebuilding the original server configuration
  • The hassles associated with compatibility issues on newer platforms, especially for legacy applications
  • Some configurations lack the original distribution files, so migration is up to the ingenuity of the support person
  • Time and effort
  • If the person who originally built the server has left, all the knowledge is lost
  • The paperwork pile-up and resulting overtime, aka asset management

Other issues:

  • Server maintenance. Some servers were owned rather than leased; all of them were outdated and expensive to maintain.
  • A large part of our server infrastructure was distributed throughout our offices across the entire country.
  • For historic reasons, server builds differed from office to office.
  • A large number of servers were underutilised.

This is where virtualisation came into play. My strategy was based on these and further elements, essentially resulting in two streams:

  1. Consolidate underutilised servers, mostly infrastructure utilities, while maintaining high-availability in the data center
  2. Consolidate and standardise server roles in all offices, creating our virtual office infrastructure.

For the data center I envisioned VMware ESX and VMotion; for the offices, Microsoft Virtual Server 2005 Release 1 (the current release at the time).

While working on the strategy I gained in-depth experience with a production version of Virtual Server (fresh from Microsoft) and got to know all its strengths and weaknesses. In fact, during that time I had the opportunity to pass my thoughts on to Microsoft's Kurt Schmucker, Virtual Server Program Manager and former Connectix executive.

Moving forward…

Strategy was followed by a POC (Proof of Concept), followed by a pilot, followed by deployment. Before going to deployment I spent intense hours developing the overall deployment plan, preparing and hosting pre-deployment workshops, and finalising a production release of a Rapid-Deployment & Support Toolkit (I had started developing this toolkit during the strategy/lab phase).

What started with one individual already involved two people by the POC phase.

From strategist to technical lead and architect, to PM, to trainer, to deployment manager, I had to balance all the different activities across different geographical locations and keep tabs on various technical issues including networking, migration, rollout and server swaps. Yes, server swaps.

One of the challenges, and possibly one of the PM masterpieces, was to coordinate the rollout in a way that let us leverage brand new server hardware already deployed in offices and earmarked as virtualisation hosts. Due to some history, we had this hardware deployed as domain controller replacements (see the lease ends discussed earlier).

On this note I should add that the strategy included various specifications for servers destined to run virtualisation software (published as best practice to PwC global forums). The configuration in question was 1U Dell servers with 4GB of RAM.

The rollout plan was quite similar to a complex strategy game, in which you have to find the one solution that lets you ship and return hardware while wasting nothing and losing nothing. In the end we achieved the rollout of 65 servers to 21 offices, plus 21 benchmarking stations, plus 21 hosting servers.

In total we deployed 107 systems, all within two and a half months. In fact the ambitious goal had been to deploy within one and a half months; still, the result was comparably impressive.

The benchmarking idea was developed by our client support team: we shipped out Windows XP based virtual machines which conduct objective benchmark tasks.
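
We never published the toolkit itself, but conceptually each benchmarking VM did something along the lines of the hypothetical sketch below: run a fixed, repeatable CPU and disk workload and report the timings, so that results from different offices and hosts can be compared objectively:

    # Hypothetical benchmark task in the spirit of the benchmarking VMs:
    # run fixed CPU and disk workloads and report wall-clock timings.

    import os
    import tempfile
    import time

    def time_it(label, func):
        start = time.perf_counter()
        func()
        print(f"{label}: {time.perf_counter() - start:.2f} s")

    def cpu_workload():
        total = 0
        for i in range(5_000_000):      # a fixed amount of integer work
            total += i * i
        return total

    def disk_workload():
        data = os.urandom(1024 * 1024)  # 1 MiB of random data
        with tempfile.TemporaryFile() as f:
            for _ in range(64):         # write and flush 64 MiB in total
                f.write(data)
                f.flush()
                os.fsync(f.fileno())

    time_it("cpu", cpu_workload)
    time_it("disk", disk_workload)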

Not only did we migrate server roles manually, but we also restructured from multi-purpose to single-purpose server environments.

Naturally this involved manual steps, which in fact proved a faster and more established approach than using commercial migration tools. Furthermore, wherever we had Windows 2000 we now had Windows 2003. So, migration and upgrade at the same time.

The big win: dedicated servers make lockdown and delegation more efficient. Naturally we operate an Active Directory infrastructure, and applying GPOs (Group Policy Objects) based on server role, as well as delegating management, was no longer an issue.

We also adapted our backup approach, which in turn supported the envisioned disaster recovery methodology.

The future of IT: utility

What's next? Well, while working on all this I realised early on, during the strategy development, that there is more to virtualisation than a mere operational approach.

Initially considered to be suitable for development and lab systems, virtualisation turned out to be an excellent vehicle for provisioning and delivery. In addition, there was a growing need among development teams to jump onto the virtualisation bandwagon.

So, in the summer of 2005, I set about developing a business model which describes virtualisation as the foundation for a new service to provide systems on demand - in other words, to introduce IT utility. To get this done I drew on my past experience from service provider and systems integrator environments and mapped it to our situation.

Based on this business model we started to build a hosting farm, which within the first two to three months of launch delivered a value of around $200,000. Did I say value? Did I say delivery? Yes indeed - a new aspect for IT operations and IS departments.

In fact, it is not quite so new. The term 'IT as a business' has been around long enough - since the late 1980s, when ITIL® (the Information Technology Infrastructure Library) was first developed.

The shift from an operational, cost-focused IT organisation to a service-oriented paradigm has been a subject of industry discussion for a few years now, especially with the dawn of outsourcing and offshore IT.

We did indeed manage to successfully use a platform which did not seem to be designed for this purpose at all: Microsoft Virtual Server. I used the power of 64-bit technology to maximise the performance and capacity of our platforms (Release 2 was finally available and supported 64-bit).

Our farm is growing, and what was designed for the rapid deployment of the virtual office is now the core delivery system for our IT utility services. Our infrastructure now supports more than 200 virtualised systems. Business is good.

Some wisdom

Going back to IDC's forum, some of the wisdom I passed on was:

Think strategy first

Evaluate why you are doing it (know your goals) and where it helps your organisation and the business as a whole; find the opportunities and think long term. Look at achieving cost reduction; server maintenance is one of many opportunities.

Structure

Virtualisation enables you to create an infrastructure by the book: efficiently maintained, managed and secured.

Think Business

In order to get buy-in from management and the business, work backwards. Do not try to find justifications which might convince the business; instead, directly target the business goals. Understand where the business is going and what it needs to achieve its goals, then identify where virtualisation as a technology and an approach fits into those requirements.

Here is some additional wisdom:

Phased approach

When developing your goals think about working in phases. Some organisations have limited manpower, so prioritising achievements is of the essence.

Dedication and motivation

Have a dedicated team and make sure that tasks and responsibilities are clearly defined. Educate and motivate your staff as this will help you meet your goals.

Know how

Know the technology (virtualisation) and how it fits into your operational environment. Adapt the technology and make it work. Ingenuity and innovation are the key.

Business as usual

Change your perspective but not your methods. Management and technical staff often do not realise that in fact very little changes in technical terms. Essentially you only replace real hardware with the virtual equivalent. Your normal operational processes and management techniques should not change.

Educate, educate, educate

And finally, don’t rush into it.

An unplanned and unprepared move into a virtual world will, with certainty, generate more headaches to deal with in the aftermath. Don't be complacent either: the old wisdom of 'don't fix it if it isn't broken' has never been of benefit.

If virtualisation can improve your environment, pursue it rigorously. Often enough, simply migrating old systems means dragging old mistakes along under new methods. In that case, consider restructuring the technology as well.

Frank Boesche is a manager with PricewaterhouseCoopers Canada. He currently leads a national thin client/utility initiative and is an active advisor to that industry.