Steven Furnell, University of Plymouth UK, and Eugene H. Spafford, Purdue University USA, turn the clock back 30 years and show that, though bigger, the internet might not be much safer.

As we grow ever more concerned with cybersecurity, we don’t need to look far to find predictions of attacks and malware that could cripple our infrastructure. What some people forget is that it’s already happened. Before you start thinking you’ve missed a significant headline, know that it happened 30 years ago. Despite its age, the case still holds valuable lessons.

The Morris worm

The incident in question was the internet worm, which was written by Cornell University student Robert Tappan Morris and released on 2 November 1988. For those unfamiliar with the details, the motivation behind its creation and release has never been expressly stated by its author, but it is believed to have been exploratory, and the worm carried no explicitly destructive payload.

However, flaws in the code caused replication and reinfection to occur at a far higher rate than intended, bogging down the infected systems to the extent that they became unusable. Some estimates suggested the code infected around 10% of the systems on the internet of the time.

Of course, the internet in 1988 was significantly different from how it is now. Back then, it comprised around 60,000 machines, mostly in the US. Today’s internet will have added that many new devices by the time you finish reading this article!

The security landscape was also different; there were no firewalls to speak of, no commercial services, and most systems were research-oriented multi-user computers. Moreover, malware was not a widely recognised threat - Fred Cohen had described the ‘computer virus’ in a paper only a few years earlier, and just a handful of real examples were known.

It’s a very different world now. We see over 350,000 new malware instances identified every day, with numerous malicious and overtly criminal attacks, alongside technology exploitation by nation-state actors. The Internet Worm (also known as the Morris Worm) has the distinction of being the first large-scale cyber incident.

Highlighting weaknesses

Sadly, the Morris worm shone a light on issues that we still haven’t addressed - in particular, the vulnerability of the internet and the individual systems within it. To illustrate the point, let’s consider two mechanisms that it used to infect systems. The first was software vulnerability exploitation, specifically using known weaknesses in the Unix sendmail, finger, and rsh/rexec utilities.
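
To see how little the first route relied on exotic tricks, consider the finger hole: the original BSD fingerd read its request line into a fixed-size stack buffer with no bounds check (via gets()), so an over-long request could overwrite the return address and run code of the attacker’s choosing. The sketch below is illustrative rather than the original source - the names are ours, and an unbounded scanf() stands in for the obsolete gets() call - but it captures the pattern and its bounded fix.

#include <stdio.h>
#include <string.h>

#define LINE_LEN 512   /* fingerd used a fixed-size request buffer of this order */

/* Vulnerable pattern: the read is not bounded by the buffer size, so a
   request longer than LINE_LEN bytes overruns 'line' and can overwrite the
   saved return address, the classic stack smash the worm exploited. The
   original fingerd used gets(); the unbounded scanf() here is a stand-in
   that still compiles on modern systems. Shown for contrast only. */
void handle_request_unsafe(void)
{
    char line[LINE_LEN];
    if (scanf("%s", line) == 1)
        printf("finger request: %s\n", line);
}

/* Bounded fix: the read can never exceed the buffer. */
void handle_request_safe(void)
{
    char line[LINE_LEN];
    if (fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';   /* strip the trailing newline */
        printf("finger request: %s\n", line);
    }
}

int main(void)
{
    handle_request_safe();
    return 0;
}

The fix is a one-line change, which is rather the point: this class of vulnerability was avoidable in 1988 and remains avoidable now.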

The second route was password guessing, with the worm holding a dictionary of 432 words that were tried against user accounts on target systems in a random order. There is ample evidence to show that similar attacks have succeeded since, and still work today. Consider another internet-wide incident: The Slammer (or Sapphire) worm was released in early 2003 - halfway between the Morris worm and today.

There were some notable similarities between Slammer and the internet worm. Both exploited known vulnerabilities to infect systems (in this case within Microsoft’s SQL Server and Data Engine software). Slammer was similarly non-destructive - its only action was to find and spread to other vulnerable systems. However, the internet had grown significantly, and so Slammer affected a far larger and more diverse set of users. Its speed of replication generated massive amounts of network traffic, causing numerous systems to fail (including five of the internet’s thirteen root DNS servers).

The resulting disruption was estimated to have caused between US$950m and US$1.2bn in lost productivity in its first five days. What had changed was the level of use of the internet, not the security preparedness of those using it. However, as with the Morris worm, there had been ample opportunity to fix the exploited vulnerabilities: Microsoft had released patches for the affected software six months earlier.

Fast forward

Of course, Slammer was 15 years ago, but recent high-profile examples such as the WannaCry ransomware incident suggest that many of today’s systems remain unprotected even when patches are available. Now we see the rise of the internet of things (IoT), with most of its connected devices shipped in vulnerable default configurations, allowing malware such as Mirai to exploit vast numbers of them.

As we move further towards an internet of everything future, we are faced with devices that may be more difficult to patch, configure, and upgrade, and which users may not even realise are connected and vulnerable.

It would be timely to start ensuring that more vulnerabilities are eradicated at the source. With efforts such as California Senate Bill No. 327 and the UK NCSC guidelines, we are finally seeing some of the right steps to at least prevent devices being shipped with weak universal defaults.

What of the Morris worm’s other entry method, based on weak passwords? We don’t have to look far to see that things haven’t improved much there either. SplashData publishes an annual list showing that the most popular passwords are still time-worn choices such as ‘123456’ and ‘password’.

Moreover, even though Morris used a relatively obscure list of password candidates, eight of them (password, football, bailey, daniel, summer, george, jessica and ginger) could still be found within the top 50 of SplashData’s list of commonly used weak passwords for 2018. Added to this, we still have people reusing the same weak passwords on multiple sites, amplifying both their own exposure and the vulnerability of the whole system.
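
To make the guessing mechanism concrete, the short sketch below tries a handful of those candidates in a random order against a single account, much as the worm did. It is a minimal illustration under our own assumptions: check_guess() is a hypothetical stand-in (here it simply matches a hard-coded secret so the demo completes) for the worm’s real method of hashing each candidate with crypt() and comparing it against the entry copied from /etc/passwd.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Eight of the worm's 432 candidate words that still appear in today's
   most-common-password lists. */
static const char *dictionary[] = {
    "password", "football", "bailey", "daniel",
    "summer", "george", "jessica", "ginger"
};
#define DICT_SIZE (sizeof dictionary / sizeof dictionary[0])

/* Hypothetical stand-in for the worm's crypt-and-compare check; the
   'stored' secret is hard-coded purely so the demo finds a match. */
static int check_guess(const char *account, const char *guess)
{
    (void) account;                       /* unused in this sketch */
    return strcmp(guess, "jessica") == 0;
}

int main(void)
{
    size_t order[DICT_SIZE];

    /* Shuffle the candidate order, as the worm tried its words at random. */
    for (size_t i = 0; i < DICT_SIZE; i++)
        order[i] = i;
    srand((unsigned) time(NULL));
    for (size_t i = DICT_SIZE - 1; i > 0; i--) {
        size_t j = (size_t) rand() % (i + 1);
        size_t tmp = order[i];
        order[i] = order[j];
        order[j] = tmp;
    }

    /* Try each candidate until one matches. */
    for (size_t i = 0; i < DICT_SIZE; i++) {
        const char *guess = dictionary[order[i]];
        if (check_guess("target-account", guess)) {
            printf("account falls to dictionary word: %s\n", guess);
            return 0;
        }
    }
    printf("no dictionary word matched\n");
    return 1;
}

Thirty years on, the only meaningful change an attacker needs to make to this loop is a longer word list.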

Learnings

While it’s tempting to look back at the Morris worm as merely a historical example, history is of little use if we don’t learn from it. The era of the Morris worm seems like ancient history in internet terms. The technology was relatively new, security wasn’t prioritised as it should have been, and users generally had other concerns. Sadly, those things haven’t changed very much.

Each new generation of technology still seems to surprise us, as we have seen with mobile devices and SCADA, and are increasingly seeing with the internet of things. Moreover, it’s not as if we have things nailed down in traditional IT devices either. It doesn’t say much for security attitudes and practices when we realise that the same sort of techniques that were used to exploit systems three decades ago would still have a fair chance of working today.

So, with all this evidence, why do we keep sleeping through the wake-up calls? In part, it is down to inertia. We have huge sunk costs in hardware and software, as well as in data, schemas, user training, and interoperability, all of which make significant, needed changes harder and lead to more homogeneous targets.

Moreover, technologists have failed to understand economics, and legal and policy issues. Relying on ‘penetrate and patch’ for (the illusion of) security is not sufficient - especially in regulated or constrained environments.

Equally, licence terms that disclaim use of software in safety-critical environments neither prevent such use nor enhance its security. Additional complexity breeds more problems, and, to date, the focus of innovation has been on building additional layers (e.g., virtualisation, containers) instead of addressing fundamental issues.

Getting back to basics - including more rigorous design and development practices, more education for developers, better user awareness, and more usable security mechanisms - is the best (and perhaps only) way to improve the overall cybersecurity situation. If we don’t make these changes in our approach to security, in 30 years’ time you might be reading a similar (or the same) commentary on the state of cybersecurity.

Don’t say we didn’t warn you, then or now.