Concerns about IoT security are on the rise. Harsha Kalutarage, Bhargav Mitra and Robert McCausland, R&D Engineers at Queen’s University, Belfast, discuss how lightweight anomaly detection could provide an additional layer of armour for the internet of things (IoT).

There is a tsunami of digitisation: household and industrial devices are being network-enabled, many of which were never intended to be exposed to the web. Nowadays a toaster can be connected to a smartphone via Bluetooth or Wi-Fi, and controlled via an app. And, more disturbingly, an internet-connected doorbell can be enlisted into an army of bots controlled by a malicious actor intent on launching a distributed denial-of-service (DDoS) attack on websites or critical infrastructure.

These network-connected devices, with generally limited capability, make up the internet of things (IoT). They offer enhanced convenience and the promise of a seamless digitally-connected life. However, with this increased connectivity, each device extends the potential ‘attack surface’. As we will see later, it is generally easier to exploit vulnerabilities in IoT devices than in standard computer infrastructure. A vulnerability in an exposed service, multiplied across the sheer number of devices, can provide an easy route for an amplification attack.

At the other extreme, subtle alteration of data captured by IoT sensors could drive changes in behaviour that benefit the organisations engaging the malicious actors. Hackers at a recent DEF CON - the world’s largest hacking conference - demonstrated how the temperature settings of unsecured IoT boilers can be tampered with, and in 2015 security researchers Charlie Miller and Chris Valasek showed how cyber-criminals can remotely take control of a modern car away from its driver.

Security is best delivered in a layered approach; while efforts progress to develop preventative controls for secure IoT at the hardware level, additional controls are still required. For example, lightweight machine learning techniques are well suited to anomaly detection on data emerging from IoT devices, providing an additional detective security control.

What makes IoT security difficult?

There are a number of reasons why the security of low-cost IoT devices is harder to deal with than traditional IT security.

One reason is the environment these devices inhabit. Some IoT devices use wireless connections and may be accessible from outside the physical boundary (e.g. CCTV cameras and porch lights). Attackers can easily gain physical access to such devices and their signals, and from there to the internal network. Inside the security perimeter, it may not be entirely obvious that the new coffee maker has network capability, and an accidental (mis)configuration may put the internal controlled environment at risk.

Vulnerability patching is routinely carried out on standard IT infrastructure. This is not the case in the world of consumer IoT; patching mechanisms either do not exist or are poorly implemented.

Finally, size, power and computational restrictions leave little room to extend these devices with the security controls normally applied to traditional computer infrastructure. The marketing mantra of ‘easy to set up and use’ dominates over any sense of security. In this context, IoT devices remain soft targets for hackers.

The scale of the problem is best demonstrated by the capabilities of search engines like Shodan. Using such tools, attackers can easily locate vulnerable IoT devices connected to the internet.

Why use machine learning for analysing IoT data?

Estimates suggest that every household in the UK will own at least 15 internet-connected devices by 2020. A detective security control supporting the security of such a large number of devices needs to be automated and able to quickly distinguish between benign and malicious traffic. Machine learning (ML) and anomaly detection can provide this detective security control for IoT devices [1, 2].

IoT devices are usually designed to carry out specific functions with limited computational resources - memory, processing power and network connectivity. These constraints, however, make IoT devices less complex than general-purpose devices. This limited capability can be perceived as an opportunity from an ML perspective: it becomes possible to automate the analysis of IoT traffic data - establishing norms for benign and malicious behaviour - using ML.
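As a simple illustration of what such norm-building can look like in practice - a minimal sketch, not the analysis behind this article - raw traffic captures can be summarised into per-window features from which a model can learn a behavioural baseline. The packet log and feature names below are hypothetical, and the snippet uses the dplyr package in R:

library(dplyr)

# Hypothetical packet log exported from a capture tool: one row per
# packet, with a timestamp in seconds and a payload size in bytes.
set.seed(1)
packets <- data.frame(ts    = cumsum(rexp(1000, rate = 5)),
                      bytes = sample(60:600, 1000, replace = TRUE))

# Summarise into one-minute windows - a simple behavioural profile
# from which a model can learn a 'norm' for the device.
features <- packets %>%
  mutate(window = floor(ts / 60)) %>%
  group_by(window) %>%
  summarise(pkts        = n(),
            total_bytes = sum(bytes),
            mean_size   = mean(bytes))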

In traditional IT settings, using ML for security monitoring is a challenge due to the diversity of behaviours of computing devices. For example, on a typical enterprise network even the most basic traffic characteristics (such as bandwidth, connection duration and services) show huge variability. This makes it difficult to find a stable notion of ‘normality’ for these variables, because network behaviour can change unpredictably over short periods of time.

In contrast, imagine a scenario where a smart doorbell from a particular manufacturer becomes a popular product in the market. It can be assumed that all doorbells from that manufacturer will exhibit similar behavioural characteristics, as they run the same firmware and users are unlikely to extend them with third-party software or hardware. Using ML with IoT data, a general model can be developed to describe benign behaviour and hence spot malicious behaviour. Further, the limited capability of an IoT device implies less chance of ML model drift.

How to model normal device behaviour?

ML can be viewed as a mathematical model-building process in which the interplay between data and algorithm plays a crucial role. The performance of an ML model depends on selecting an appropriate algorithm from the ML toolbox. That selection, in turn, depends on the context and the characteristics of the data.

To build an ML model, training data can be collected from both benign and malicious behaviour of the target device. In security applications, however, it is usually difficult to gather examples of malicious device behaviour. Collecting such instances can be expensive or simply impossible - zero-day attacks being an obvious example.

Data collection is also often hindered by legal, ethical and privacy issues. Collecting instances of benign behaviour, on the other hand, is generally straightforward. For this reason, detective security controls usually rely on unsupervised or one-class ML approaches, where the single class captures the characteristics of normal device behaviour.

What is one-class based modelling?

The central idea in one-class modelling is to train a model using previously seen benign data instances from the IoT device. The trained model is then applied to live data to identify instances that are incongruous with the device’s benign behaviour. In other words, the one-class model decides whether each instance is an inlier or an outlier with respect to the trained class. Any non-conforming instance is treated as a malicious event.
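To make the idea concrete, the sketch below trains a one-class support vector machine (using the e1071 package in R) on simulated stand-ins for benign traffic features, then flags non-conforming instances. The data and feature names are illustrative assumptions, not the doorbell data used in the experiment described next:

library(e1071)

set.seed(42)
# Simulated benign traffic features (packets and bytes per minute); in
# practice these would come from captures of the device behaving normally.
benign <- data.frame(pkts  = rnorm(500, mean = 20,   sd = 3),
                     bytes = rnorm(500, mean = 6000, sd = 800))

# One-class formulation: the model is fitted to benign data only.
# nu bounds the fraction of training points allowed to fall outside
# the learned region.
model <- svm(benign, type = "one-classification",
             kernel = "radial", nu = 0.05, scale = TRUE)

# Score live traffic: predict() returns TRUE for inliers (consistent
# with benign behaviour) and FALSE for outliers.
live  <- rbind(benign[1:5, ],
               data.frame(pkts = c(400, 450), bytes = c(90000, 120000)))
flags <- predict(model, live)
live[!flags, ]   # non-conforming instances, treated as potential attacks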

To provide initial support for the hypothesis in this article, an experiment was conducted in which two detectors - a one-class support vector machine and an autoencoder - were trained on benign traffic data from an IoT doorbell and subsequently tested on a mixture of benign data and Mirai-infected traffic. Both models achieved a classification accuracy as high as 99.9 per cent.
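The autoencoder detector follows the same one-class principle: it learns to reconstruct benign feature vectors, and instances with a high reconstruction error are flagged as anomalous. The sketch below (using the keras package in R, and reusing the simulated benign and live data frames from the previous snippet) is a simplified illustration of that idea; the architecture and threshold are assumptions rather than the exact model behind the experiment:

library(keras)

# Scale the benign training data and apply the same scaling to the
# instances to be scored (benign and live come from the earlier sketch).
x_train <- scale(as.matrix(benign))
x_test  <- scale(as.matrix(live),
                 center = attr(x_train, "scaled:center"),
                 scale  = attr(x_train, "scaled:scale"))

# A small autoencoder: compress to a narrow bottleneck, then reconstruct.
autoencoder <- keras_model_sequential() %>%
  layer_dense(units = 8, activation = "relu", input_shape = ncol(x_train)) %>%
  layer_dense(units = 2, activation = "relu") %>%
  layer_dense(units = 8, activation = "relu") %>%
  layer_dense(units = ncol(x_train))

autoencoder %>% compile(optimizer = "adam", loss = "mse")
autoencoder %>% fit(x_train, x_train, epochs = 30, batch_size = 32, verbose = 0)

# Reconstruction error per instance; traffic consistent with benign
# behaviour should reconstruct well, anomalies should not.
recon_err <- function(m, x) rowMeans((x - predict(m, x))^2)
threshold <- quantile(recon_err(autoencoder, x_train), 0.99)
alerts    <- recon_err(autoencoder, x_test) > threshold   # flagged instances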

A detailed description of the analysis, including the R code snippet, can be found on GitHub - Putting data into action / IoT Security.

A long way to go

IoT is a fascinating, fast-growing and still-emerging field that will increasingly feature in digital transformation. However, it has also become much more lucrative for attackers and more difficult for security practitioners to defend. A range of preventative and detective security controls will have to be developed to support security practitioners in protecting IoT devices, and machine learning will play a critical role in this journey.