Cloud computing comes in many forms. At its simplest, a private internal cloud can offer the business processing platforms and applications on demand. But a cloud can be many things: a public delivery system for software as a service, an application platform as a service such as Google App Engine, or infrastructure as a service such as Amazon Web Services (AWS), as used by the eGamblers who purchase extra processing power an hour before the Grand National.
Whatever flavour of cloud computing you choose, they are all characterised by the ability to expand and contract. In a nutshell, cloud providers have capacity which they sell as a service on demand. Consider Amazon: for 11 months of the year things are quiet, and then in the month before Christmas the business goes ballistic. For those 11 months the IT systems just tick along, so Amazon originally developed its web services to get more out of its IT assets.
Cloud computing using huge data centres brings huge economies of scale. Customers get software, infrastructure or applications as an on-demand service cheaply, whilst the cloud provider is able to capacity-plan globally, taking advantage of time zones and other regional differences.
Whilst the economic benefits are compelling (cloud computing is relatively cheap), the security issues do need to be thought through.
It is obviously the public clouds that pose the greater risk to corporate data, as the normal measures used to establish data security, such as perimeter security and defence in depth, authentication and non-repudiation, are out of your control.
Traditionally, a corporate organisation which hosted its servers internally or at a local data centre had a lot of control over the physical assets, whilst gaining comfort from the feedback of the full audits and penetration tests performed annually. By contrast, in a cloud environment everything is delivered as a service, and the end user has no visibility of where the servers reside, the security in place, the audit benchmarks, or even the disaster recovery capability.
Essentially you are building the whole relationship on trust. Many infrastructure-based clouds do not even have a contract between the vendor and the client stipulating security and continuity; there is only the SLA and a monthly fee. If you ever do have a problem, the only recourse is to set up somewhere else after the event.
Of course, if you are a small business driven by cost, with few legal or regulatory drivers, using a cloud might be ideal. If it goes down for a day, why worry? In reality most small firms could manage without IT for a few days; the answer is to keep a local backup of the key files anyway, or to make online backups with another backup cloud such as Microsoft Live Mesh. Companies that need highly available systems will have to think differently, though.
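Keeping such a local copy need not be complicated. As a minimal sketch, assuming Python is available and with both paths as placeholders, a timestamped snapshot of a folder of key files might look like this:

```python
# Minimal local-backup sketch: copy a folder of key files into a new
# timestamped snapshot directory. Both paths are hypothetical examples.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("key_files")        # the folder holding the important files
BACKUP_ROOT = Path("backups")     # local disk, USB drive or network share

def snapshot(source: Path, backup_root: Path) -> Path:
    """Copy `source` into a fresh directory named by the current time."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = backup_root / f"{source.name}-{stamp}"
    shutil.copytree(source, dest)  # raises if the destination already exists
    return dest

if __name__ == "__main__":
    print(f"Backed up to {snapshot(SOURCE, BACKUP_ROOT)}")
```

Run nightly from a scheduler, even something this simple covers the 'cloud is down for a day' scenario for a small firm.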
Assuming you outsource an IT function to a cloud, then that cloud, or more accurately the data centre where the data resides, can itself become a single point of failure. This data centre could be anything from Tier 1, a basic computer room with no generator or controlled access, to Tier 4, with fully redundant subsystems and compartmentalised security zones controlled by biometric access. You just do not know what you are getting.
Aside from natural disasters, other, less foreseeable things can go seriously wrong. If for some reason that data centre is attacked, then so are you.
On Thursday 9 April 2009, Morgan Hill in California had eight strategic fibre cables cut as part of an organised attack that isolated the city. Consider this: whilst it is only an extremely remote risk, if another client of the cloud data centre attracts some organisation intent on bringing it down, then you too are going to be in the firing line.
Having handed over your information to a cloud-based solution, the next consideration is how to ensure there is a clear separation between you and the other customers in a multi-tenanted environment.
Many network devices at the perimeter, such as firewalls, switches and anti-virus hardware appliances, will be shared, so how all the traffic going in and out is separated becomes important. Do you know whether all site links are encrypted, or are some in the clear? Are the same admin passwords used for every client?
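You cannot audit the provider's internals, but some of these questions can at least be probed from outside. As a rough sketch, with the host name a placeholder, Python's standard ssl module will tell you whether a given endpoint negotiates TLS at all, and with which protocol version and cipher:

```python
# Rough sketch: check whether an endpoint negotiates TLS and report the
# protocol version and cipher suite agreed. Host and port are placeholders.
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # also verifies the cert chain
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"{host}:{port} negotiated {tls.version()}")
            print(f"cipher suite: {tls.cipher()}")

check_tls("portal.example-cloud.com")  # hypothetical provider endpoint
```

A link that only works in the clear, or that offers an obsolete protocol version, is a prompt for harder questions to the provider.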
You need to ask yourself whether a cloud provider would actually be intent on providing a robust, layered security model, or would focus on the perimeter alone. Unable to conduct your own audit, you have no way of knowing whether the cloud provider is concerned about security or has a least-cost, low-security ethos. Equally, the cloud provider may use its economies of scale to implement best-of-breed hardware and dedicated 24/7 security staff. Security could in fact be greater in the cloud than out of it.
Making security the responsibility of the provider is fine, so long as your security standards are the same as or lower than those of the cloud provider you are trusting. If the cloud provider has not achieved the relevant industry best-practice standards, such as ISO 27001 or Sarbanes-Oxley (SOX) compliance, then you really do not have any idea of, or control over, what is happening. Audits are integral to these standards: if a provider is compliant, you can at least be satisfied that it has been audited, even if you do not know the result. The better-known cloud providers do adhere to a standards-based approach.
Amazon works to SOX compliance and states in its security policy that it ‘will continue efforts to obtain the strictest of industry certifications in order to verify its commitment to provide a secure, world-class cloud computing environment’.
Clouds are synonymous with the scalability provided through virtualisation. As well as the hardware approach, one way of separating clients in the virtual world is through trust zones, such as those from Catbird, Tripwire and VMware. These provide the virtual equivalents of the defences seen in the current physical environment, such as firewalls, intrusion detection systems and monitors.
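The underlying policy model is easy to sketch: every virtual machine belongs to a zone, and traffic flows only where a zone-to-zone rule explicitly permits it. The following is a generic, default-deny illustration of that idea; the zone names and VM identifiers are invented, and this is not the API of any of the products named above:

```python
# Generic trust-zone sketch: each VM is tagged with a zone, and traffic is
# allowed only where an explicit zone-to-zone rule exists (default deny).
# Zone names and VM identifiers are illustrative, not any vendor's API.
ZONE_OF_VM = {
    "vm-web-01": "dmz",
    "vm-app-01": "app",
    "vm-db-01":  "data",
}

# (source zone, destination zone) pairs that are explicitly allowed.
ALLOWED = {
    ("dmz", "app"),   # web tier may call the application tier
    ("app", "data"),  # application tier may reach the database
}

def permitted(src_vm: str, dst_vm: str) -> bool:
    """Default deny: traffic passes only if the zone pair is allowed."""
    pair = (ZONE_OF_VM[src_vm], ZONE_OF_VM[dst_vm])
    return pair in ALLOWED

print(permitted("vm-web-01", "vm-app-01"))  # True
print(permitted("vm-web-01", "vm-db-01"))   # False: no dmz -> data rule
```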
Sadly, whilst many infrastructure cloud providers have invested in the physical side and its associated monitoring, their virtual world becomes an impenetrable environment: they have little knowledge of what is going on internally within it. As this virtual environment grows, so do the risks inherent in having a 'black hole' of clients. In the physical world, certainly, the likes of Cisco will be winners in the change to compartmentalisation, provided of course the cloud provider is actually intent on providing a robust, layered security model.
Attacks often come from the internal user. As well as the IT provider, you now have the cloud provider's other customers, who are also on the inside, potentially becoming interested in your business. Whilst a hundred virtual SME customers might fit into a single blade chassis environment, the extent to which they are uneasy bedfellows will only be known after the event, perhaps years down the line.
If the cloud targets a specific market, the chances are the client next door is a competitor. That competitor may have IT staff, with remote access to their own systems, who are tempted to have a look-see at yours. How this internal attack is prevented, and how unsuccessful attempts are audited, may not be the first things you think of as you sign up.
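Auditing for unsuccessful attempts is, at bottom, just counting failures per source and flagging the outliers. A toy sketch of the idea, with the log format and threshold invented for illustration:

```python
# Toy audit sketch: count failed access attempts per source address and
# flag any source that exceeds a threshold. The log format is invented.
from collections import Counter

THRESHOLD = 5  # illustrative: five failures from one source raises a flag

log_lines = [
    "2009-06-01T10:00:01 FAIL 10.0.3.7 user=admin",
    "2009-06-01T10:00:02 OK   10.0.1.2 user=alice",
    "2009-06-01T10:00:03 FAIL 10.0.3.7 user=root",
    # ...a real log would have many more lines
]

failures = Counter(
    line.split()[2]               # third field is the source address
    for line in log_lines
    if line.split()[1] == "FAIL"  # second field is the outcome
)

for source, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed attempts from {source}")
```

The real question for the provider is whether anything like this runs across tenants, and whether you would ever see its output.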
Previously, IT departments relied on the perimeter as their main security defence. The Jericho Forum was years ahead of its time when it recognised that a hardened-perimeter strategy is not sufficient to handle web-based services such as those from a cloud delivery system.
Those who provide cloud environments will think more in terms of a sieve, or a QoS boundary, and intrusion detection systems will therefore not be viable at the perimeter. Instead of perimeter defence, authenticated access with transport encryption becomes the key mechanism. Ultimately, the big difference is the move from system-level authentication to connection-level authentication and data-level validation.
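In practice, connection-level authentication usually means mutually authenticated TLS: the client proves its identity with a certificate on every connection, rather than being trusted because of where it sits on the network. A minimal sketch using Python's standard library, with the host name and certificate file names as placeholders:

```python
# Minimal mutual-TLS sketch: the client verifies the server's certificate
# and presents its own, so each connection is authenticated at both ends.
# The host name and certificate/key file names are placeholders.
import socket
import ssl

HOST = "service.example-cloud.com"

context = ssl.create_default_context(cafile="provider-ca.pem")
context.load_cert_chain(certfile="client.pem", keyfile="client.key")

with socket.create_connection((HOST, 8443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        tls.sendall(b"GET /status HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        print(tls.recv(4096).decode(errors="replace"))
```

Data-level validation then sits on top: whatever arrives over that authenticated channel is still checked against a schema before being trusted.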
You may not care, but it can be important to know where the cloud actually lives. If you are running offshore financial accounts where the tax implications run into millions, then it becomes very important indeed. Being in or out of the USA or the EU carries big legal disclosure caveats. The question becomes: can you tie your data to a specific data centre in a specific country, and are you going to be told where the data is to be held?
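Some providers do at least let you choose a region when storage is created. Amazon's S3, for instance, lets a bucket be pinned to a location at creation time. A sketch using the boto3 library, with the bucket name a placeholder, and bearing in mind that this only constrains where the primary copies live, not what the provider replicates internally:

```python
# Sketch: pin an S3 bucket to a chosen region at creation time, then read
# the location back. Uses the boto3 library; the bucket name is invented.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")  # Ireland

s3.create_bucket(
    Bucket="example-client-records",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Confirm where the bucket actually lives.
location = s3.get_bucket_location(Bucket="example-client-records")
print(location["LocationConstraint"])  # expected: eu-west-1
```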
Google is very reticent about giving out the locations of its water-cooled, containerised, held-together-with-Velcro data centres. A quick, well, Google search reveals a very low profile in this regard. Google normally declines to tell clients where its centres reside, or how it replicates data between them.
To produce cloud services it has integrated a global network of computers, known as the Googleplex, that may hold multiple copies of data at any given time. Danny Hillis of Applied Minds said of this network: 'Google has constructed the biggest computer in the world, and it's a hidden asset.' Other global players such as IBM or Microsoft would not be so different.
Imagine a scenario in which your organisation's business model and client data all operate out of the UK, but the cloud provider does a triangulated, replicated backup between the UK, Ireland and Norway. What would the potential disclosure and privacy implications of that be in each region, and who would police or insure it?