In the rush to extend the reach of IT to more and more new fields of application, it seems we may be neglecting to ask some fundamental questions, says Don Southey MBCS.

Exciting concepts such as the smart home - or at least smart white and brown goods, remotely programmable security systems and home heating - are already extensively marketed, and AI-assisted independent living for the infirm is in pre-market development. Meanwhile, autonomous drones for police and military use, and even robot soldiers in limited roles, are probably a lot nearer realisation than most of us suppose.

But with every new field of operation comes a new attack surface. Smart TVs caused a jolt last summer when it was demonstrated just how easy it was to hack and take control of not only the TV, but also the webcam built into it. The phenomenon of 'ratting' - spying on unsuspecting victims through their own webcams - is no longer confined to Hollywood films like The Shard.

In late 2013, ABC News reported an incident where a European hacker took over a home baby-cam in Texas. Last November, E&T magazine1 reported on testing carried out by Trustwave on two common home automation gateways; both were found to be highly vulnerable, giving an e-intruder access to control cameras, locks, garage doors and alarm systems, as well as relatively low-key systems like thermostats and lights.

Smart meters, which rely on wireless transmission, are an obvious target for malicious hacking; anything from denial of supply (or astronomical bills) for the householder to massive loss of revenue for the supplier would be achievable.
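
To make the point concrete, consider a hypothetical meter that broadcasts readings with no authentication at all: anyone in radio range could forge them. The sketch below - the protocol, key and meter ID are all invented for illustration - shows the minimum defence, a shared-key HMAC that lets the supplier detect a tampered reading.

```python
import hashlib
import hmac

# Assumption: each meter is provisioned with its own secret key at manufacture.
SHARED_KEY = b"provisioned-per-meter-secret"

def sign_reading(meter_id: str, kwh: float) -> tuple[str, str]:
    """Return (message, tag) for an authenticated meter reading."""
    message = f"{meter_id}:{kwh:.3f}"
    tag = hmac.new(SHARED_KEY, message.encode(), hashlib.sha256).hexdigest()
    return message, tag

def verify_reading(message: str, tag: str) -> bool:
    expected = hmac.new(SHARED_KEY, message.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

# A genuine reading verifies; a forged 'astronomical bill' does not.
msg, tag = sign_reading("METER-0042", 12.5)
print(verify_reading(msg, tag))                      # True
print(verify_reading("METER-0042:99999.000", tag))   # False - forgery detected
```

Even this sketch leaves key distribution unsolved - exactly the kind of detail a mass-market rollout can get wrong.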

Even an internet equivalent of the famous Soweto pre-payment card2 would be a major headache. The smart fridge, which can order more perishables for you when they run low, will need to store your debit or credit card details for use over the internet. How well will it secure them?
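
At the very least, one would hope such details are encrypted at rest rather than sitting in plain text in the appliance's flash memory. A minimal sketch of what that might look like, using the third-party Python 'cryptography' package (the card number and storage arrangement are invented for illustration):

```python
# Minimal sketch of encryption at rest, using the third-party
# 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

# Assumption: in a real appliance the key would come from a hardware-backed
# secure store, never from the same flash memory as the data it protects.
key = Fernet.generate_key()
vault = Fernet(key)

card_details = b"4111 1111 1111 1111;12/29"   # invented test card number
token = vault.encrypt(card_details)           # ciphertext, safe to persist

# Only a holder of the key can recover the plaintext.
assert vault.decrypt(token) == card_details
```

Even then, everything turns on where the key lives - precisely the sort of corner a cost-driven consumer product is tempted to cut.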

One of the weakest points with all domestic connected devices is the very fact that they are domestic. Jack and Jill Average simply have neither the awareness nor the know-how to secure their devices. The IT provider must do it all for them, which I see as an ethical necessity.

This is particularly acute for the less digitally literate - most elderly people, for example - who are already a vulnerable group.

However, with some new horizons the stakes are higher still. We have barely begun to consider the possibilities unleashed by the driverless car. And the prospect of deploying autonomous drones for civil police and military customers - machines capable of identifying targets and launching stealth attacks without human intervention - presents an attack surface that should give us real cause for concern.

This is not because the IT hardening is inadequate. Indeed, most organisations and developers in the new IT technologies are acutely aware of the need for cybersecurity. I think we are looking at things from the wrong angle.

If the last 20 years - never mind the Heartbleed vulnerability in OpenSSL - have taught us anything, it should be that internet-connected (or wireless-connected) systems will, eventually, be hacked by someone. Rather than focusing only on ways to stop or contain that, we should also be looking at what consequences could conceivably be induced once the system is compromised.
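
Heartbleed itself illustrates how mundane the fatal flaw can be: OpenSSL's heartbeat code echoed back as many bytes as the client claimed to have sent, without checking the claim. The Python below is a deliberately simplified simulation of that class of bug - the real code was C, and the buffer contents here are invented - not the actual exploit.

```python
# Simplified simulation of the Heartbleed class of bug: trusting an
# attacker-supplied length. Buffer contents invented for illustration.
SERVER_MEMORY = bytearray(b"HELLO" + b"\x00" * 3 + b"secret-session-key")

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    SERVER_MEMORY[:len(payload)] = payload
    return bytes(SERVER_MEMORY[:claimed_len])   # BUG: claimed_len unchecked

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    if claimed_len > len(payload):              # FIX: discard bogus requests
        return b""
    return payload[:claimed_len]

# The attacker sends 4 bytes but claims 26; adjacent memory leaks out.
print(heartbeat_vulnerable(b"ping", 26))   # ends in b'secret-session-key'
print(heartbeat_fixed(b"ping", 26))        # b''
```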

With the driverless car, for example, scenarios are not limited to a car crash or the death of a few pedestrians. The possibilities extend, for instance, to the total paralysis of a major city for several days, with every road blocked and citizens fleeing (or rioting) to get food.

But all the conventional IT security techniques are of no use when the attack vector comes from outside the IT space altogether. An obvious point of attack is the scenario-interpretation rulebase: fool the pattern recognition, and anything is possible.
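
The machine-learning literature on adversarial examples shows how little 'fooling' can take. The toy below - the classifier and every number in it are invented - applies the standard fast-gradient trick to a linear model: a per-feature nudge far too small for a human to notice flips the machine's verdict outright.

```python
import numpy as np

# Toy illustration of an adversarial input against a linear classifier.
rng = np.random.default_rng(0)
w = rng.normal(size=100)    # weights of a hypothetical 'threat' detector
x = -0.05 * w               # an input the model scores firmly as benign

def classify(v: np.ndarray) -> str:
    return "threat" if float(w @ v) > 0 else "benign"

print(classify(x))          # benign

# FGSM-style perturbation: nudge every feature slightly in the direction
# the model is most sensitive to - just enough to cross the boundary.
eps = 1.5 * float(-(w @ x)) / float(np.abs(w).sum())
x_adv = x + eps * np.sign(w)

print(round(eps, 3))        # a tiny per-feature change (roughly 0.1)
print(classify(x_adv))      # threat - yet almost identical to a human eye
```

An adversary does not need to breach the network if it can lie to the sensors.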

A light-hearted take on this appears in the film Men In Black, where a rookie recruit is the only one to pass the urban warfare simulation; he ignores the obvious alien monsters and shoots the little schoolgirl, on the grounds that the book she is clutching is Einstein’s General Theory of Relativity.

If drones can be confused by camouflage and dummies, it is not at all inconceivable that an adversary could find ways to implicate civilians as combatants, or schools and hospitals as valid military targets.

Once two or three such 'computer error' incidents got out into the media, the public outrage would make the use of autonomous combat machines politically indefensible, and render the technology unusable.

The attacks on 11 September 2001 should have taught us that often the simplest and most successful attacks come from a direction that is unexpected because it is unthinkable. Every new field of development, then, faces us with a threefold proposition.

  1. because the field is new, we will not be able to predict every possible attack vector;
  2. therefore we cannot devise a security health check to cover every direction of attack;
  3. the only valid field test is extended live use.

The ethical dilemma that faces us is therefore: can I justify unleashing this development, knowing that I do not know the extent of its safety? Have I even come close to imagining the worst that could happen?

Of course, we can argue, the IT profession is not regulated like law or medicine; BCS has a voice but, unfortunately, no real clout. If we refused to work on robot soldiers, someone else would do it.

However, Martin Webster reminded us recently that ‘those who think they have little power can make a difference’3. In the 1970s, prominent researchers in recombinant DNA across the world united in a call for a moratorium on their own research, to give the ethics time to catch up.

With this lead, research on genetic manipulation was widely suspended for several years, even in the teeth of huge commercial pressure. Should we perhaps step back and call a time-out on some of our own new horizons?