Andy Wall FBCS, from the Office for National Statistics, highlights insider threats and the ethical dilemmas that organisations face in providing security assurance through screening and background checks.

Data is becoming the lifeblood of modern society. The rapid development of business technology is a revolution in how data is processed and analysed: more of it, in richer and more complex forms. For commercial organisations and government departments alike, the outcomes of this work inform medium and longer-term decisions about policy and services.

The organisation’s crown jewels

At the heart of this revolution is the collation of huge amounts of data - commercial, personal, business, intellectual - the ‘crown jewels’ of the organisation. This brings massive benefits to organisations and society, but also dangers.

For a host of reasons - processing capability, efficiency, management, access and, of course, protection - there is a tendency to centralise data holdings, either in internally managed infrastructure and systems or in external hosting (I’m not going to get into the ‘cloud, data can be anywhere’ discussion).

These big data collections also make big, juicy targets: datasets that a range of threat actors would love to get their hands on. These external threats are serious, and there is plenty of evidence that hackers do get access to bulk data.

To counter this, particularly external attacks, technical security has improved significantly over the last few years. The security industry has harnessed modern tools, improved its business approaches and developed some good products - if they are implemented well, mind.

But how is one of the greatest threats to data addressed? The threat from the people that organisations consciously give access to. The insider.

The insider issue

Lots of research over the last few years shows that humans still do the most remarkably stupid things even after significant investment in training and awareness. Research by Willis Towers Watson into data breach claims in 2017 showed that 66 per cent of cyber breaches were down to employee negligence or malicious acts.

Further analysis showed that roughly 90 per cent of all cyber claims were the result of some type of human error or behaviour. This highlights people as the weak point in the organisational risk management regime.

Genuine errors abound: clicking on email links, poor password choice, sharing logins, losing equipment etc. That’s just the accidental side, but what about the malicious side?

In 2016, IBM’s Cyber Security Intelligence Index found that 60 per cent of all attacks were carried out by insiders, and three-quarters of these involved malicious intent. Any up-to-date threat assessment should therefore have insiders high on the list.

These are the staff, suppliers and contractors that we give authorised access to our systems and data. We trust (and monitor) them to do their job and generally behave. Our privileged users, who often have open access to bulk data within systems, extensive knowledge of the system architecture, configuration and tools, and the capability to cover their tracks, represent a particularly high threat.

This gives rise to some basic questions: how do we assure our staff? What trust can we place in their character? Can we investigate their background and make judgements about them?

Employment screening

As an employer, you typically won’t know much about a new staff member beyond a CV and an interview, unless you go mining on social media or have a personal connection. Staff moving roles internally can give rise to the same concerns, especially in large organisations.

To help with this, organisations often screen staff - perform checks on their background to validate their declared history. As a government department, our minimum background check is the Baseline Personnel Security Standard: a range of identity and criminal record checks to validate who a person is and whether they are suitable for employment. Commercial organisations do similar things through screening processes with the same aim. Generally, we humans accept this as part of applying for jobs.

For more sensitive business operations, or for access to particularly sensitive data such as intellectual property, economic information or personally identifiable information, organisations often need to carry out more detailed screening to reflect the higher business value and sensitivity involved.

Is screening ethical?

Employee screening can raise some ethical dilemmas - as an employer, do we have the right to delve into the background, personal life and character of our staff? What happens if something bad is discovered? Is it right that screening could affect an employee’s suitability for a role?

In government terms we have National Security Vetting, which can probe to this deeper level of background. Commercial organisations can perform their own versions of this - checks that go beyond criminal records into civil records, publicly available information and social media - or employ specialist companies to do so on their behalf.

This sort of screening can appear shrouded in myth and misconception - certainly government security vetting can. Over the years I’ve seen staff visibly worried that they could lose their job if they fail vetting, staff concerned that financial information they have never even told their partner about will come out, and staff protesting that the intrusion into their personal life will leave colleagues knowing something about them never before divulged.

What right does an employer have to investigate its employees’ backgrounds?

The law generally allows organisations to monitor staff. Much of the technology we implement has this built in through logs - emails, internet browsing, system activities. UK law also allows checks such as right-to-work checks, criminal record checks and health checks for certain roles where the applicability is clear.
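To make the point concrete, here is a minimal sketch (in Python) of the kind of simple rule that such activity logs make possible: flagging out-of-hours bulk-data access by privileged accounts. The log format, account names, action labels and office hours are all illustrative assumptions for the example, not a description of any particular product or of the monitoring any organisation actually runs.

    # Illustrative sketch only: flag out-of-hours bulk-data access by privileged
    # accounts in a simple activity log. The log format, account names and hours
    # below are assumptions for the example, not a real monitoring configuration.
    from datetime import datetime

    PRIVILEGED_ACCOUNTS = {"dba_admin", "sysops"}  # hypothetical privileged users
    OFFICE_HOURS = range(8, 18)                    # 08:00-17:59 counts as in-hours

    def flag_suspicious(events):
        """Return events where a privileged account pulled bulk data out of hours."""
        flagged = []
        for event in events:
            when = datetime.fromisoformat(event["timestamp"])
            if (event["user"] in PRIVILEGED_ACCOUNTS
                    and event["action"] == "bulk_export"
                    and when.hour not in OFFICE_HOURS):
                flagged.append(event)
        return flagged

    # Example usage with made-up log entries
    sample_log = [
        {"timestamp": "2018-05-14T02:13:00", "user": "dba_admin", "action": "bulk_export"},
        {"timestamp": "2018-05-14T10:45:00", "user": "jsmith", "action": "login"},
    ]
    for event in flag_suspicious(sample_log):
        print("Review:", event)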

Other legislation allows government to perform vetting for national security reasons. The results of this screening provide increased assurance about an employee’s character and enable judgements about how likely they are to initiate, or be influenced into, malicious activity from the inside.

What employers can do

The case for employee screening is strong, and it plays a crucial part in a layered security regime. In many ways it actually protects employees from accusations and enables trust, yet they still get nervous. Employers can do much more to reduce the myth, fear and nervousness through greater transparency. Being honest and upfront about screening benefits everyone.

  • Policy: Define the approach to screening, document it clearly and make sure the organisation accepts the need for it and supports it. Not all roles are the same, so not all screening should be. Identify the roles with privileged access or involvement in sensitive processes and data, and determine the specific screening you want to apply to each, so that security assurance is focused on the right roles at the right levels. This approach should be agreed by all key organisational stakeholders and then published so everyone in the organisation can see it (a simple sketch of such a role-to-level mapping follows this list).
  • Recruitment: Be very clear up front with external candidates about the screening required for a role. They need to understand what you will be doing as part of the recruitment process. You should also indicate the information you want, why you want it, how you will use it as part of selection, and how you will manage it afterwards. This gives candidates the choice of whether to apply and tells them what to expect if they do - they’ll appreciate the honesty.
  • Internal moves: Be clear with staff that some roles require screening at an identified level, and publish this information internally. Again, being transparent gives candidates the choice of whether to apply and tells them what to expect.
  • Data processing: Screening can be a heavy manual process that naturally collects a lot of sensitive personal information. The Data Protection Act 2018, which incorporates the General Data Protection Regulation (GDPR), clearly specifies how you need to store, process and retain personal data related to screening. You need to be very careful with it!
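As a simple illustration of the policy and data processing points above, the sketch below (in Python) records a hypothetical role-to-screening-level mapping together with a retention period for the screening records themselves. The role names, levels and retention periods are invented for the example; the real levels and retention rules must come from your own agreed policy and data protection advice.

    # Illustrative sketch only: a hypothetical mapping of roles to screening levels
    # and to how long the resulting screening records are retained. None of these
    # values represents real policy; they exist purely to show the shape of the idea.
    from datetime import date, timedelta

    SCREENING_POLICY = {
        # role:          (screening level, retention of screening records in days)
        "office_admin":  ("baseline", 365),
        "data_engineer": ("enhanced", 730),
        "dba_admin":     ("national_security_vetting", 1825),
    }

    def screening_for(role):
        """Look up the screening level and retention period agreed for a role."""
        return SCREENING_POLICY.get(role, ("baseline", 365))  # default: baseline

    def retention_expiry(role, completed_on):
        """Date by which screening records for this role should be reviewed or deleted."""
        _, retention_days = screening_for(role)
        return completed_on + timedelta(days=retention_days)

    # Example usage
    level, _ = screening_for("dba_admin")
    print(level, retention_expiry("dba_admin", date(2018, 6, 1)))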

About the author
Andy Wall FBCS CITP is the Chief Security Officer at the Office for National Statistics, where he is developing new approaches to secure the operations of leading-edge big data analytics that support the production of official UK government statistics.