The field of information security is often described as managing the risks associated with using information technology. A closer look at the nature of information security, however, shows that this description applies to very few real-world situations. Understanding why this is so may give insights into why some traditional risk management approaches do not work well when applied to security, says Luther Martin, Voltage Security.

The term 'risk' has a precise meaning in risk management, and is defined to be the average loss associated with some event. So if there is an event that will cause a loss of £10,000 and has a 10 per cent chance of happening, then this event represents a risk of £1,000, or 10 per cent of the £10,000 loss.
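
To make the arithmetic explicit, here is a minimal sketch in Python of the calculation just described, using the figures above:

```python
def expected_loss(probability: float, loss: float) -> float:
    """Risk, in the risk management sense: probability times loss."""
    return probability * loss

# A GBP 10,000 loss with a 10 per cent chance of happening:
print(expected_loss(0.10, 10_000))  # 1000.0 -- a risk of GBP 1,000
```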

Applying this model to information security is difficult, if not impossible, because we rarely have accurate estimates for the chances of security-related events happening or the damage caused by these events. What are the chances of your email being intercepted and read? How much damage would it cause if a hacker could read your email?

Our lack of accurate data in such cases makes information security more about managing uncertainty than about managing risk. If we have accurate estimates for probabilities and losses then we are dealing with a risk; if we do not, we are dealing with an uncertainty.

The American economist Frank Knight first noted the difference between risk and uncertainty in his 1921 book Risk, Uncertainty and Profit. Since then, economists have devised three general ways of understanding uncertainty and the way that people make decisions in the face of it.

One way to account for the decisions that people make in the face of uncertainty is to assume that people use their best guesses for probabilities in the absence of reliable data. People are not good at estimating probabilities, however: we tend to systematically overestimate low probabilities and underestimate high ones. So we might expect people to estimate that an event with a 99.9 per cent chance of happening has only a 90 per cent chance, or that an event with only a 0.01 per cent chance of happening has a 10 per cent chance.
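
This bias can be made concrete with a probability weighting function from behavioural economics. The sketch below uses Prelec's one-parameter form with an illustrative parameter of 0.5; this is an assumption for illustration only, and it reproduces the direction of the bias rather than the exact figures quoted above:

```python
import math

def perceived_probability(p: float, alpha: float = 0.5) -> float:
    """Prelec's weighting function: overweights small probabilities
    and underweights large ones (alpha = 0.5 is illustrative)."""
    return math.exp(-((-math.log(p)) ** alpha))

print(perceived_probability(0.999))   # ~0.97 -- a high chance, underestimated
print(perceived_probability(0.0001))  # ~0.05 -- a low chance, overestimated
```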

The bias that creeps into these guesses makes any risk estimate built on them inherently inaccurate, and we should be wary of placing much faith in the resulting figures. Some types of security compromise are fairly rare, and their chances are likely to be routinely overestimated, resulting in too much being spent on countermeasures designed to reduce the already-low chances of a compromise.

If we have a compromise that could cause a loss of £10,000, we should expect a rational firm to spend up to £1,000 to eliminate this exposure if the chances of the compromise are 10 per cent, but no more than £1 if the chances are only 0.01 per cent. So we can expect inaccurate estimates of the probabilities of security compromises to affect businesses' willingness to invest in information security projects.
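
A short sketch of this reasoning shows how directly the spending cap tracks the probability estimate, so any error in the estimate shifts the cap by the same factor:

```python
LOSS = 10_000  # the GBP 10,000 compromise discussed above

def spending_cap(probability: float) -> float:
    # A rational firm spends at most probability x loss to remove the exposure.
    return probability * LOSS

print(spending_cap(0.10))    # 1000.0 -> spend up to GBP 1,000
print(spending_cap(0.0001))  # 1.0    -> spend no more than GBP 1
```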

Another way to model the decisions that people make in the face of uncertainty is to generalise the value of the loss associated with an event, using what economists call utility instead of the financial value of the actual loss. Utility accounts for all of the possible ways in which the value of something is determined, even including the effects of irrational preferences or prejudices. And although it may be very difficult to estimate utility directly, we can often infer it from people's behaviour. So if we see that people are willing to spend £100 to eliminate the effects of a security breach that will happen 10 per cent of the time, we can infer that they place a value of £1,000 on the loss caused by the breach.
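
The inference runs in the opposite direction from the risk calculation: divide the observed willingness to pay by the probability of the event. A minimal sketch, using the figures above:

```python
def inferred_loss_value(willingness_to_pay: float, probability: float) -> float:
    # Someone who pays this much to avoid an event of this probability
    # behaves as if the loss is worth willingness_to_pay / probability.
    return willingness_to_pay / probability

print(inferred_loss_value(100, 0.10))  # 1000.0 -- an implied GBP 1,000 loss
```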

The final, and perhaps the most interesting, way to understand decision-making in the face of uncertainty is to generalise the meaning of an event, so that we can account for the different contexts in which a loss might occur. To understand the value of an umbrella, for example, we could divide days into two classes: rainy days and rain-free days. We can then estimate the value of an umbrella as its value on a rainy day weighted by the probability of rain, plus its value on a rain-free day weighted by the probability that it does not rain. It turns out that we can use this idea to understand why seemingly valuable information often remains relatively unprotected.
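
The umbrella calculation is a probability-weighted sum over the two classes of day. The concrete figures in the sketch below are illustrative assumptions, not taken from the discussion above:

```python
def expected_value(p_rain: float, value_rainy: float, value_dry: float) -> float:
    # Value in each state of the world, weighted by that state's probability.
    return p_rain * value_rainy + (1 - p_rain) * value_dry

# Assume an umbrella is worth GBP 20 on a rainy day, nothing otherwise,
# and that there is a 20 per cent chance of rain:
print(expected_value(0.20, 20.0, 0.0))  # 4.0 -- about GBP 4 per day
```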

Just as the value of an umbrella depends on the chances of rain, the value of information depends heavily on context. If you pick two random laptop users and have them switch laptops, both will probably end up unhappy with the exchange, because data that is very valuable to one of them will be virtually useless to the other. One user might be a sales executive for a software company whose laptop contains a list of important customer contacts and pricing information for the products that he sells. And although he and his employer may perceive the value of this information as being very high, to the average laptop user it is useless.

The other laptop user might be a marketing manager at a plumbing fixtures company whose laptop contains information on his company's plans to change the composition of the bronze alloy that it uses in certain pipes and to outsource the production of these pipes to China. This information is similarly valuable to its owner, but of little value to most of the world. If these two users do switch laptops, both will find the data they end up with of essentially no value, even though it was extremely valuable to its original owner.

Suppose that each laptop owner believes the value of his information is £1 million to him and his company but essentially zero to the remaining 99.99 per cent of the population. We might then expect him to behave as if the data on the laptop were worth £1 million weighted by the 0.01 per cent chance that someone with a similar interest in the data would end up with it, or only £100.
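
In sketch form, using the figures above:

```python
value_to_interested_party = 1_000_000  # GBP 1m to the owner and his company
chance_recipient_cares = 0.0001        # the other 99.99 per cent see no value

# Context-weighted value of the data to whoever ends up with the laptop:
print(value_to_interested_party * chance_recipient_cares)  # 100.0 -- GBP 100
```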

Surveys of laptop users suggest that they value the data on their laptops at over £500,000, yet many businesses do not use full-disk encryption for their laptops, a technology with a total cost of ownership of roughly £100 per year, even though there is roughly a 10 per cent chance per year of any laptop being lost or stolen. Could the uncertainty of the value of the data on the laptop explain this behaviour?
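
It might. A sketch of the decision under the two valuations shows that they point in opposite directions, which is consistent with the behaviour described above:

```python
P_LOSS_PER_YEAR = 0.10  # roughly 10 per cent chance of loss or theft per year
ENCRYPTION_TCO = 100    # roughly GBP 100 per year, as cited above

for value_at_risk in (500_000, 100):  # surveyed value vs context-weighted value
    exposure = P_LOSS_PER_YEAR * value_at_risk
    print(f"value GBP {value_at_risk:,}: expected annual exposure "
          f"GBP {exposure:,.0f}, worth encrypting: {exposure > ENCRYPTION_TCO}")
```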

The lack of accurate data on both the probabilities of security events and their impact thus makes information security a matter of uncertainty management rather than risk management, and it can severely limit the effectiveness of some conventional risk management techniques. The current trend of merging corporate risk management and information security organisations will probably make understanding the difference between the two models increasingly important, as it becomes more important to have an accurate way to decide which risk management projects are worth funding. This need not be difficult, however, as there seem to be straightforward ways to extend the standard risk management models to include the effects of uncertainty.

Luther Martin is principal cryptographer at Voltage Security - email: martin@voltage.com