Humans are not machines

We have long built our security functions and capabilities around the technical experts we employ to investigate, respond to and recover our networks from cyber-attacks. To do this at scale, we have built security operations centres (SOCs), tiered analysis, investigation playbooks and various other processes - yet these can never be entirely consistent or truly effective.

How many times during incidents or exercises have you heard “We didn’t have a use case for that activity”, or “We detected it, but it wasn’t escalated correctly by the relevant team”?

This is because our security teams are human - we cannot define every system and every event of interest, then triage and assess those events consistently a thousand times a day. We still need security experts - but we must recognise that to investigate at scale, we must augment our experts with machines.

Analyst in a box

What if you could employ an untiring machine to do this? A machine that can not only look at alerts consistently, but correlate those events over days, weeks and months, identifying if something is outside of normal - without becoming bored?

User behaviour analytics and machine learning: both are overused industry buzzwords for ‘blackbox’ solutions. However, these technologies can save a SOC by leveraging the best of both worlds: a skilled security analyst to train the machine and increase the fidelity of the alerts, and the machine to untiringly triage the events. Together, they become a powerful solution that enables us to modernise the SOC and reduce our dependence on static content and runbooks.

Why user behaviour analytics?

Whether it’s through red-team exercises, internal pen-testing, external attacks, or in the latest threat report, the commonality in security incidents is often legitimate credentials used in an unexpected or unusual way; whether for privilege escalation, enumeration, credential dumping or moving laterally within the environment. It is this use of credentials to facilitate attacks that allows user behaviour analytics to model activity and identify when we step outside of ‘normal’ use.

We have a pattern

Although we may not be consistent as humans, we do have a pattern. It doesn’t matter if that pattern is as regular as clockwork, starting our day at 9 and leaving at 5, or if our pattern is sporadic, working across time zones and all hours of the day - the important element is that this behaviour can be modelled over a period of days, weeks and months, allowing us to identify how accounts and identities are normally used.
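To make the idea concrete, here is a minimal sketch of such behavioural modelling - the accounts, hours and threshold are invented for illustration, and real UBA products model far richer features than login hour:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical history of login events: (account, hour_of_day). In a real
# deployment these would be parsed from weeks or months of authentication logs.
history = [
    ("alice", 9), ("alice", 10), ("alice", 9), ("alice", 11),
    ("alice", 10), ("alice", 9), ("alice", 10), ("alice", 11),
]

# Build a simple per-account baseline of typical login hours.
baseline = defaultdict(list)
for account, hour in history:
    baseline[account].append(hour)

def is_outside_normal(account, hour, threshold=2.0):
    """Flag an hour more than `threshold` standard deviations from the mean."""
    hours = baseline[account]
    if len(hours) < 2:
        return False  # not enough history to model 'normal'
    mu, sigma = mean(hours), stdev(hours)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

print(is_outside_normal("alice", 3))   # 03:00 login: outside the pattern
print(is_outside_normal("alice", 10))  # mid-morning login: normal
```

The point is not the statistics, which are deliberately crude here, but that the machine applies the same assessment on the first event and the ten-thousandth.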

A powerful element of UBA is its establishment of ‘peer groups’ - automatically grouping individuals and system identities that behave in the same way, based on user behaviours and resource access. This is more powerful than relying on static groups to define who is in marketing or working on a specific product or project. We now know what is normal for an identity through system interaction, rather than the definition of static rules.
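As an illustration of peer grouping (a sketch, not any particular product’s implementation), identities can be grouped by the overlap of the resources they access - here using Jaccard similarity over invented accounts and resources:

```python
# Hypothetical access logs: which resources each identity touched.
access = {
    "alice": {"crm", "mail", "wiki"},
    "bob":   {"crm", "mail"},
    "carol": {"build-server", "repo", "wiki"},
    "dan":   {"build-server", "repo"},
}

def jaccard(a, b):
    """Similarity of two resource sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def peer_groups(access, threshold=0.4):
    """Greedily group identities whose access patterns overlap enough."""
    groups = []
    for ident, resources in access.items():
        for group in groups:
            rep = access[group[0]]  # compare against the group's first member
            if jaccard(resources, rep) >= threshold:
                group.append(ident)
                break
        else:
            groups.append([ident])
    return groups

print(peer_groups(access))  # alice/bob cluster apart from carol/dan
```

No static group definitions are needed: the clusters fall out of observed behaviour, and an identity that drifts away from its peers becomes visible.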

From this, we can start risk scoring activities such as the first time a new resource is accessed; increased outbound internet activity for a user; system access outside of normal hours. Each one of these events is not necessarily a cause for concern on its own, but coupled together... UBA scores these activities, allowing the analyst to quickly identify activity for investigation.
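A sketch of how such scores might combine - the event names and weights are invented, and real UBA platforms weight, correlate and decay scores far more subtly:

```python
# Hypothetical per-event risk weights; a real deployment would tune these
# and decay scores over time.
WEIGHTS = {
    "first_resource_access": 20,
    "outbound_spike": 30,
    "out_of_hours_access": 25,
}

def score_user(events):
    """Sum the risk contributions of a user's recent events."""
    return sum(WEIGHTS.get(e, 0) for e in events)

users = {
    "alice": ["out_of_hours_access"],
    "bob": ["first_resource_access", "outbound_spike", "out_of_hours_access"],
}

# Rank users so the analyst sees the riskiest first.
ranked = sorted(users, key=lambda u: score_user(users[u]), reverse=True)
print(ranked)  # bob's combination of events outranks alice's single event
```

Each of bob’s events might be unremarkable alone, but their combination pushes him to the top of the queue - which is precisely the shift away from treating every alert identically.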

Through risk scoring activities, we move away from binary alerting and having to treat each alert in the same way; we can now put alerts in the context of all the other system activity automatically. This allows the SOC to focus analytical effort on the alerts that have the greatest impact, surfacing the users and events that require investigation. This not only increases the value of the SOC but makes better use of both human skills and the machine learning capabilities we leverage.

Augmenting our analysts

Bringing this together, UBA gives us the capability to better inform our analysts - to augment the triage of alerts and identify where investigation and expertise can be focused. It is through this augmentation that we will scale the SOC and maintain effectiveness in the dynamic and changing environments of the businesses we protect.

UBA is not a magic bullet, nor a replacement for skilled security analysts - but as we move forward, we must recognise that we cannot ask our security teams to triage machine data as effectively as machines. It is only through leveraging capabilities like UBA, machine learning and automation that we will improve our focus on the events that require analyst expertise.

This is, of course, a simplification of UBA and its capabilities, but it is a useful mechanism to demonstrate where we can augment our security analysts and improve the effectiveness of the SOC.

Our security operations centres require a blended workforce: highly skilled analysts focusing on high-value tasks that are interesting to them and the organisations they protect, coupled with machine learning and user behaviour analytics trawling through machine logs 24/7/365 to find the needles in the haystack.

The problem with tiered SOCs

When scaling a security function, it is often appealing to define a hierarchy or ordering to maximise the value of security teams and specialised disciplines, such as forensics and reverse engineering.

However, that can come at a cost:

  • Analysts on rails. Early life within a SOC starts with taking a role on the frontline - triaging the hundreds or thousands of alerts that come in and escalating relevant events to more senior analysts. The problem with this ‘frontline’ approach is that it is almost entirely governed by playbooks and set courses of action. This means we rely on process and past knowledge (sometimes flawed) to triage security events, blinkering us to the unknown, the undefined and new threats.
  • Skills and progression. By the time expertise or skills are developed in a ‘Tier 1’ team, analysts often move up and the ‘Tier 1’ position remains largely unskilled. The skills and knowledge needed to identify when an alert requires extra attention are lacking.
  • Binary alerting. SOC alerting is primarily binary: if you see event X at a given time, date or window, then raise alert X. It is then up to the analyst to put that in context and correlate it with the environment. However, we already know we cannot expect analysts to be aware of all the log sources, or to have the knowledge to pivot around a data point - so how do we correlate alerts consistently across differing skills and people within the SOC?
  • Alert fatigue. Arguably the most significant is alert fatigue. The first time an event is seen, the analyst gives it focused attention. By the 10th time, it becomes normal or repetitive - and by the 100th? It is very difficult for us as humans to triage machine alerts consistently and with the same level of assessment.
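The contrast between binary alerting and contextual correlation can be sketched briefly - the events, IP addresses and thresholds below are invented for illustration:

```python
# Hypothetical event stream for one account.
events = [
    {"type": "failed_login", "ip": "203.0.113.7"},
    {"type": "failed_login", "ip": "203.0.113.7"},
    {"type": "failed_login", "ip": "203.0.113.7"},
    {"type": "login", "ip": "203.0.113.7"},
]

KNOWN_IPS = {"198.51.100.2"}  # IPs this account normally logs in from

# Binary rule: fires identically on every failed login, leaving the
# analyst to supply all of the context.
binary_alerts = [e for e in events if e["type"] == "failed_login"]

def correlated_alert(events, known_ips, min_failures=3):
    """Alert only when repeated failures are followed by a successful
    login from an IP outside the account's normal set."""
    failures = sum(1 for e in events if e["type"] == "failed_login")
    success_from_new_ip = any(
        e["type"] == "login" and e["ip"] not in known_ips for e in events
    )
    return failures >= min_failures and success_from_new_ip

print(len(binary_alerts))                   # three identical alerts to triage
print(correlated_alert(events, KNOWN_IPS))  # one contextual finding
```

The binary rule produces three indistinguishable tickets; the correlated rule produces one finding that already carries its own context, which is the work we currently ask a tired human to do by hand.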

An option is to consider a less hierarchical approach and establish security ‘pods’, or cross-functional teams. These bring together multiple skillsets and experience levels, working as one team to triage security events. This not only enables much faster upskilling of junior analysts, but also ensures more experienced assessment of security events and less dependency on pre-defined runbooks and process.

Security careers in an AI age

Whether you are a security analyst, researcher, pen-tester or incident responder, the skillsets between roles are often similar, combining multiple disciplines from programming and analytics to deep system knowledge and forensics. These skills, and importantly the analytical judgement developed through investigations, cannot be replaced by AI.

AI will enable less focus on the activities that detract from analysis. The heavy lifting of sifting through alerts, running complex models across masses of data or identifying outliers in systems are all activities required before investigation or analysis can really begin. This brings two key roles to the surface:

  1. Training the machines. Modelling the behaviours we care about as analysts cannot be defined by the systems themselves. It requires skilled analysts to translate the behaviours into logic or models that can be defined within the technologies. As the threat landscape, operating environment and technologies evolve, these models will require constant feedback and development.
  2. Human and business context. As we increasingly leverage systems to do the drudgery, a greater focus will be placed on investigation skillsets and the ability to contextualise security events and data. The analyst’s role will evolve to focus more heavily on judgement and on leveraging knowledge that simply cannot be programmed into machines - from business context to the application of external threat intelligence.

AI will enable an increased focus on the actual application of analysis, rather than the ground work that enables it. In short, it will drive us to concentrate on analytical skills that cannot be programmed.