Mike Barwise from Integrated InfoSec explores the concept of controls. He discusses what they are, how they work and the extent to which they may contribute to a reduction in risk.
Controls are fundamental to risk management, but no control can be 100% effective. There will always be some residual risk to be considered, both when deciding what control to apply (not least as a matter of cost effectiveness) and subsequently when it is operational, in terms of remaining day-to-day exposure. Furthermore, controls hardly ever modify risk as an amorphous whole; a given control will typically modify either likelihood or consequence. So, for example, antivirus tools reduce the overall likelihood of infection by blocking identified malware, but have no effect on the consequences of malware they fail to recognise. Conversely, data breach insurance modifies consequences but has no effect on the likelihood of a successful cyber attack. Therefore, residual likelihood and consequence should be considered independently as well as being combined into a value for residual risk.
Some important preliminaries
The frame of reference for residual risk assessment consists of the previously assessed numerical values of raw likelihood and consequence, the nature of the control (as affecting likelihood or consequence) and the numerical effectiveness of the control. The resolution of the scales used for these numerical parameters should be fine enough to represent actual changes in their levels with sufficient accuracy, but frequently this is not the case. For example, the factor-of-10 increments commonly used for both likelihood and consequence scales in raw risk assessment will entirely swamp the degrees of change delivered by most controls. Using such a scale directly, we can represent neither the improvement offered by any control with an effectiveness below 90% nor the difference between effectivenesses of, say, 90% and 95%.
However, if we have to cope (as unfortunately we usually do) with such crude raw assessments, a compromise is possible provided we take their poor resolution into account. The notional starting values of raw consequence and likelihood used for residual risk calculation should be the midpoint of the range to which the raw values were assigned (the median), because, all else being equal, the median value can reasonably be assumed statistically to be the most likely to occur. So for example if the raw risk assessment resulted in a consequence being assigned to the notional range $1k to $10k, the preferred starting value from which to calculate residual consequence would be 1,000+(9,000/2) dollars or $5.5k.
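The midpoint calculation above can be sketched as a one-line helper (the function name is illustrative, not part of any standard):

```python
def range_midpoint(low, high):
    """Notional starting value for residual risk calculation: the
    midpoint of the range to which the raw value was assigned."""
    return low + (high - low) / 2

# Raw consequence assigned to the $1k-$10k band:
print(range_midpoint(1_000, 10_000))  # 5500.0, i.e. $5.5k
```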
When it comes to likelihood, raw assessments using the commonly used time interval based notional frequency of occurrence scales must first be converted to probability of occurrence at the point of assessment in order for the necessary arithmetic to work. So, for example, supposing the raw assessment of an event’s likelihood falls into the scale slot ‘once a month to once per year’ (and assuming it is reasonable to infer that the event occurs at tolerably regular intervals) its median rate of occurrence would be once in 1+(11/2) or 6.5 months. So its nominal probability of occurring on the day of assessment would be approximately 1/(30.4×6.5) or about 1/198 (0.5%).
However, it must be borne in mind that deriving apparently higher-resolution results from coarsely quantised raw values leaves us with all the uncertainty intrinsic to the original raw values. Thus the assumed raw consequence in the example above would still range from $1k to $10k, the median of $5.5k being merely the most likely to occur. Similarly, the assumed raw probability for the 'once a month to once per year' event on any given day would range from about 1/30 (3.3%) to 1/365 (0.27%). Such uncertainty is the basic reason why we should strive to use higher resolution scales when conducting raw assessment. Otherwise, although our results may appear to be precise they will represent reality quite loosely.
Determining control effectiveness
Ultimately, control effectiveness is a matter of fact. It depends on the characteristics of the hazard, the nature of the control and the organisation's context, and it is best determined by monitoring once a given control is in place. However, we always need some starting point on which to base control selection, and even in the operational context (particularly in the case of controls for rare events) monitoring may not always be practicable, so we may have to resort to estimation. It is often possible to make a reasonable estimate based on sources such as community experience or (in the case of technological controls) vendor specifications. But supposing there is insufficient evidence, an estimate of 50% effectiveness may be a tolerable (if usually pessimistic) starting point, as most useful controls are likely to be at least that effective.
Doing the sums
The effectiveness of likelihood controls is always relative: they only ever make a proportional reduction to raw likelihood, because probability is itself a relative measure, and, with only a few exceptions, consequence controls also have proportional effectiveness. So once we have figures for raw likelihood and consequence and for control effectiveness, working out residual risk is quite simple. The key thing to remember is that we are dealing with residual likelihood and consequence, so if a control has, for example, 75% effectiveness, the residue (its ineffectiveness), in this case 25%, is what we use for our calculations. That understood, to obtain the residual consequence or likelihood we simply multiply the raw consequence value or raw percentage likelihood by the percentage ineffectiveness of the control.
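The residual calculation is the same for both parameters, so a single helper suffices (a minimal sketch; the figures reuse the earlier worked examples and the function name is illustrative):

```python
def residual_value(raw_value, effectiveness):
    """Residual likelihood or consequence: the raw value multiplied by
    the control's ineffectiveness (1 minus its effectiveness)."""
    return raw_value * (1 - effectiveness)

# A 75% effective consequence control applied to a $5,500 raw consequence:
print(residual_value(5_500, 0.75))  # 1375.0

# A 75% effective likelihood control applied to a 0.5% daily probability:
print(residual_value(0.005, 0.75))  # 0.00125
```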
The median values of residual likelihood and consequence are usually the most appropriate anchor points for residual risk evaluation, but we should not forget to also calculate residual values based on the extreme limits for both raw parameters as determined by the quantisation of our reference scales. If we ignore these limits we only have a very uncertain picture of our real residual risk (which is inevitably a two dimensional range, rather than a single finite value). But once we have maximum and minimum values for residual likelihood and consequence, we can multiply them to obtain the notional bounds of the specific risk. These can usefully be plotted graphically as an ‘uncertainty space’ — an area bounded by the four products of the extreme limit values of likelihood and consequence, within which the central tendency is represented by the product of their median values.
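Under these assumptions the bounds of the uncertainty space fall out of a few multiplications (a sketch; the function name is illustrative, and the usage figures apply a hypothetical 75% effective likelihood control to the earlier raw ranges):

```python
def uncertainty_space(p_min, p_max, c_min, c_max):
    """Bounds of the residual risk 'uncertainty space': the minimum and
    maximum of the four corner products of the extreme residual
    likelihood and consequence values, plus the central tendency given
    by the product of the two range midpoints."""
    corners = [p * c for p in (p_min, p_max) for c in (c_min, c_max)]
    centre = ((p_min + p_max) / 2) * ((c_min + c_max) / 2)
    return min(corners), max(corners), centre

# Residual daily likelihood: raw range 1/365 to 1/30, scaled by 25%
# ineffectiveness; consequence range $1k-$10k left unmodified:
low, high, centre = uncertainty_space(0.25 / 365, 0.25 / 30, 1_000, 10_000)
```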
The fact that controls affect likelihood or consequence independently makes a strong argument for representing both raw and residual risk in this way, as it directly indicates the levels of potential improvement delivered by individual controls. As maximum reduction in risk for a specific hazard might depend on a combination of separate controls for likelihood and consequence, identifying their independent contributions is important for optimisation of protection.
Some caveats, however
Most importantly, a control that effectively reduces a specific risk may also have side effects. For example, encryption of data at rest can significantly reduce the consequences of bulk data exfiltration, but it will require robust key management, introducing a new risk related to key management failure — so while confidentiality may be improved, availability could potentially be reduced. Any such side effects should be identified, subjected to independent risk assessment, and controls should be implemented to manage them.
Finally, we must never forget that all results of risk assessment (both raw and residual) are only estimates. For those estimates to be really useful, our scales for both raw and residual likelihood and consequence should use the highest practicable resolution, because this is key to reducing the epistemic uncertainty that otherwise dominates assessment. Only once scale resolution has been optimised can our assessments consistently, validly and comparably reflect the potential outcomes of the hazards we are assessing, as opposed to being based on generalised assumptions about them that may accord insufficiently with reality.
About the author
Mike Barwise is a veteran information risk management consultant with a background in systems engineering. He has contributed to national and international risk management standards development and is keen to collaborate with fellow experts in improving the realism of information risk management and practitioner training. Mike can be contacted at mbarwise@intinfosec.com.