The need for strategic security

January 2017

Martyn Thomas CBE FREng looks at current vulnerabilities in our systems and compares cybercrime to a disease: what can be done at the project level?

Successful IT projects need to be cybersecure throughout their development and to deliver software that is secure against the inevitable cyberattacks in service. This is not as easy as it should be because the software on which we all depend has not been developed with cybersecurity as a top priority.

Although it is widely accepted that cyber attacks are a large and growing problem, companies have not given enough priority to making their products secure by design, and national cybersecurity strategies have focused on short-term palliatives rather than on addressing the heart of the problem. The result is that our commonly used software platforms and components expose us all to unacceptable risks.

In the short term we have to protect ourselves tactically as best we can but we cannot go on this way forever: we need a strategy that will make our important software as dependable as we expect other engineering components and structures to be.

If cybercrime were a disease, we would describe it as a pandemic and mobilise international resources to eradicate it at its root. Instead we merely advise the adoption of better cyber hygiene, treat major outbreaks when we have to, and accept the rest of the damage as part of the rent we pay for living in a cyber-enabled world. This is not sustainable. The problems we face with the systems that we currently have installed are just the beginning.

Legacy of vulnerability

Cybercrime is too easy to commit and too difficult to detect, disrupt and prosecute. Where there is money to be made, we shall always find unscrupulous, greedy and criminal behaviour. Already, hospitals have been attacked with ransomware and paid the ransom, but there are millions of other potential victims and hundreds of ways to attack them.

We have a legacy of vulnerable systems throughout our businesses, our infrastructure and our homes. We have built a society that depends on computer systems for defence, healthcare, leisure, manufacturing, transport and commerce, and most of these systems were designed and built before the threats from buffer overflows, injection attacks and other vulnerabilities were widely recognised.

Resistance is useless

These widespread vulnerabilities undermine other national strategies. Cyberspace is not geographic, so we cannot defend Britain at our borders, nor can we continue to project military power overseas if our national infrastructure (both “critical” and less critical) can be disrupted or disabled in response.

What would be the use of an aircraft carrier against an adversary that has demonstrated the power to disable our hospitals, GPS receivers, or network routers? What will we do when terrorists make determined use of cyber attacks?

The number of vulnerable systems grows every week and will continue to do so. We want the benefits of the Internet of Things, driverless cars, robots, intelligent control of our homes, and information and entertainment instantly and everywhere. We value features and convenience above safety and security and so that is what we are offered and that is what we buy. Many products that we are offered contain networked software that is written in countries that we regard as potential adversaries. Cybersecurity strategy cannot be divorced from industrial strategy and remain credible.

Inadequate protection

Current cybersecurity strategies are inadequate against current and future threats because they are largely based around two fallacies. The first fallacy is the belief that you can show that a system is adequately secure by testing it and fixing the vulnerabilities that you find.

Testing is by far the main way that software developers currently decide that their software is fit for purpose, even though we have known for decades that the vast majority of system states and input sequences would remain untested even after months or years of running tests.

As a trivial example, consider the integer expression c = 1/(a-b), which will fail with a divide-by-zero error whenever a equals b. It is not practical to eliminate this class of errors by testing the software. Penetration testing for cybersecurity vulnerabilities suffers from the same problem. As a way to ensure software correctness or cybersecurity, testing is a category mistake.
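To see why, here is a minimal Python sketch of the same example (the function name and test values are invented for illustration): a handful of hand-picked tests all pass, yet the whole class of inputs where a equals b remains untested and still crashes the program.

```python
def reciprocal_difference(a: int, b: int) -> float:
    # Raises ZeroDivisionError whenever a == b.
    return 1 / (a - b)

# A handful of hand-picked test cases all pass...
assert reciprocal_difference(3, 1) == 0.5
assert reciprocal_difference(6, 2) == 0.25
assert reciprocal_difference(-3, -5) == 0.5

# ...but almost all possible input pairs remain unexercised,
# and one entire class of them still crashes the function.
try:
    reciprocal_difference(7, 7)
except ZeroDivisionError:
    print("untested input class (a == b) crashes the program")
```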

The second fallacy is that computer users can be persuaded or trained to behave completely securely, when to do so would be inconvenient at best and might even be impractical. When commercial, official and private emails commonly have attachments and embedded links, people will continue to open attachments and follow internet links even if we tell them that this can be unsafe. It typically only takes a dozen or so targeted phishing attacks to penetrate a company.

Damage limitation vs proper strategy

If systems are designed to be used in a particular and convenient way, and we then tell the users of those systems that they should not use the facilities that have been provided but should do something less convenient instead, we can be certain that the advice will be widely ignored; both academic research and practical experience have demonstrated this beyond doubt.

If we want to ensure that attachments and websites cannot damage systems, then our attachment formats, email software and browsers must be designed so that they cannot expose users to risks. We need software and data formats that make it impossible for a text document or a picture file to execute malicious code. This will take time and strategic action, and if it means withdrawing some rarely used functionality, few users will notice and fewer still will care. Current cybersecurity strategies are really only short-term tactics to limit the damage.

A strategy worthy of the name must aim to create a future where the security of software-based systems can be guaranteed and where assurance is based on scientific principles that are as strong as those we expect to be used for other important engineering products.

Other engineers base their work on established science, and their methods accumulate the lessons from past mistakes. Many software developers, by contrast, still prefer tools and methods that have led previous generations into easily avoidable mistakes, such as buffer overflows and the misuse of data types that allows SQL injection and other command-injection attacks.
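To make the injection mistake concrete, here is a minimal sketch using Python's standard sqlite3 module; the table, the data and the attacker's string are all invented for illustration. Splicing untrusted text into a command lets data act as code, while a parameterised query keeps the two distinct and closes that route.

```python
import sqlite3

# Hypothetical in-memory table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # attacker-controlled string

# Misuse of data types: the input is spliced into the command, so the
# attacker's OR clause becomes part of the query and matches every row.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("string splicing leaks:", unsafe)      # [('s3cret',)]

# Keeping code and data distinct: the parameterised query treats the
# whole input as a value, so the same string matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterised query returns:", safe)  # []
```

The parameterised form is the kind of design that makes the error harder to commit in the first place, which is the point of the next section.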

The solution

We need software components that come with guarantees that they cannot be made to overflow or crash. We need browsers, email clients and apps that come with explicit claims about their security, and warranties that the claims are true. This is no more than other engineers expect from their components, tools and subsystems, and software engineers must work towards offering the same assurances. Software is intrinsically applied mathematics, so we are well placed to reason about the properties of the systems that we build.
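As a small illustration of what an explicit claim might look like (a Python sketch, not any particular company's method), a component can state its precondition in its interface so that a static analyser or proof tool, rather than testing, shows that every caller respects it; the divide-by-zero example from earlier then becomes a checkable guarantee instead of a latent crash.

```python
def reciprocal_difference(a: int, b: int) -> float:
    """Contract: requires a != b; returns 1 / (a - b).

    Stating the requirement in the interface lets a static analyser
    or proof tool check every call site, instead of relying on tests
    to stumble across the a == b case at run time.
    """
    assert a != b, "precondition violated: a == b"
    return 1 / (a - b)


def differences_from_zero(readings: list[int]) -> list[float]:
    # The caller discharges the precondition explicitly (r != 0),
    # which is exactly the obligation a proof tool would insist on.
    return [reciprocal_difference(r, 0) for r in readings if r != 0]
```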

Some companies do this already: Altran UK, in their latest air traffic control software for NATS; Siemens Transportation, in their signalling systems for underground railways; Microsoft and Facebook for critical parts of their software development processes. The tools and methods exist and they become more powerful and more usable every year.

Rigorous methods and static analysis tools can be more productive than traditional test-and-fix ways of working, because so much of the current cost of software comes from the time taken to find and correct the errors that developers have made. When these errors are harder to make and easier to find, much time is saved and the resulting software quality is so much better that it becomes practical to offer warranties, rather than hiding behind contract terms that try to deny all responsibility for errors and failures.

Software is already a critical part of our wealth creation and quality of life and we must ensure that it remains so, despite the increasing demands that we must place on our engineering skills if we are to deliver an Internet of Things, autonomous vehicles and robotics without exposing ourselves and society to unacceptable risks. It will take time to make the necessary transition from software development as a craft skill to a profession that fully deserves to be called engineering, but the rewards will be more than worth the effort.

We need a strategy for our profession that will greatly accelerate the progress we are already making, and deliver truly successful IT project management.

About the author
Martyn Thomas CBE FREng is Livery Company Professor of Information Technology at Gresham College and a software engineer specialising in safety and cybersecurity. He has worked in the software industry for over 40 years, as founder Chairman of Praxis plc, as a partner in Deloitte Consulting, and as an independent consultant and expert witness. He is a Director of the Health and Safety Executive and an advisor to several Government departments.
 

Comments

John Sherwood wrote on 2nd Feb 2017

Whilst Martyn makes some good points about the legacy, the real problem is that cyberspace is an almost infinitely nested system of systems of systems, with no overall design authority and no overall governance. What we are witnessing is that phenomenon of systems engineering known as 'emergence'. Emergent properties of systems are unwanted, unplanned outcomes that are to do with the unexpected interaction of system components. All the components may be working perfectly according to their design, but the system can still exhibit dangerous emergent properties that present vulnerabilities to be exploited. Clifford Stoll's Cuckoo's Egg is one of the earliest examples of this type of system behaviour.

So, while I sympathise with Martyn's message, fixing things at component level is not going to help all that much. We need some completely new thinking focused on emergence as the characteristic of complex systems that we are trying to eliminate.

Nope - I don't know how to do it either - but our research should go in that direction.

