Robosoldier

Can software be more ethical than a human? The use of technology in warfare is often controversial, just like war itself. David Evans explores the issues.

Ever since the invention of the crossbow there have been public outcries against weapons technology that was perceived to go beyond what was moral and humane. Yet for soldiers fighting an enemy face to face, the very idea of humane warfare may not sit well with their experiences.

The key question then is how to meet the pragmatic needs of warfare without sacrificing our morality. Principles of war ('jus in bello' - justice in war) have been developed over time through international rules of warfare such as the Geneva Convention. These have broadly stood the test of time, and with each new generation of technology have had to be reinterpreted into a new context.

However, the introduction of new technology usually comes before the interpretation of law, and occasionally with dire consequences. Mustard gas and landmines are two technologies that spring to mind.

It is with these issues in mind that the Royal United Services Institute and the BCS held a seminar on the ethics and legality of autonomous and automated weapons systems. This brought together computer scientists, engineers, lawyers, ethicists and those in the military to share perspectives on a new generation of weapons systems being developed, and in some cases already in use.

Several nations across the globe are in various stages of deploying weapon systems that are in a new class of remote operation. Unmanned combat aerial vehicles (UCAV) have been with us for some time, although we are still coming to terms with the impact of killing someone half a world away while sat at a PC in Nevada sipping coffee.

Israel and South Korea are also deploying border defences that can automatically detect, target and kill people without human intervention when they enter a restricted area. The USA is committed to spending something in the region of a quarter of a trillion dollars to develop unmanned combat vehicles with the intention that they will make up a third of their operational vehicles.

Some of the issues are similar to existing and well-worn ethics-in-warfare debates. What, if any, are the differences between sitting in Nevada operating a UAV, launching a cruise missile, or being in an Apache firing at people who can't even see you are there? What is the difference between an automated machine gun and a landmine? Perhaps it is only a matter of degree, but are there limits?

One of the speakers made a point about remote operation by showing scenes from a US Army video 'Apache Rules the Night' where a gunner viewing via infra-red clearly identifies a soldier as wounded, but is ordered to 'hit him'.

Under the Geneva Convention a wounded person unable to fight is 'hors de combat' and cannot be attacked, so that killing was probably illegal - a war crime. The same action face to face would have been more obviously dubious. It seems that the more removed a soldier is from their action, the more likely they are to take action that is morally questionable.

When it comes to autonomy, there is a continuum between remote control, through remote operation, to supervised operation and ultimately full autonomy. The latter category is where the landmine sits, it could be argued, as a landmine makes a 'kill decision' based on a simple algorithm along the lines of 'have I been stepped on?'
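
To make that point concrete, the landmine's entire 'decision' can be written in a couple of lines. The sketch below is purely illustrative (the class and threshold are invented); what matters is that there is no discrimination step at all between sensing and killing.

```python
# Illustrative sketch only: the 'full autonomy' end of the continuum at
# its crudest - a landmine. The trigger threshold is invented.
class Landmine:
    TRIGGER_PRESSURE_KG = 5.0  # hypothetical trigger weight

    def should_detonate(self, pressure_kg: float) -> bool:
        # The entire 'kill decision': have I been stepped on?
        # There is no attempt to ask who, or what, applied the pressure.
        return pressure_kg >= self.TRIGGER_PRESSURE_KG
```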

There is little doubt that we are on a road to more and more automation and autonomy, and that the supposed moral protection of having a human in the loop on decisions may be illusory. If a system identifies and selects targets and then requires an OK/Cancel click from the human operator who has little clue what they are approving, is the human really taking responsibility? To quote the film 'Lost in Space': 'And the monkey flips the switch'.
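
A minimal sketch of how that loop can look is below. All of the names and structure are invented for illustration, and it assumes the system has already done the classification: the operator's contribution reduces to a confirmation prompt on an assessment they have little independent means to check.

```python
# Illustrative sketch only (all names are hypothetical): a supervised
# engagement loop in which the human's role has shrunk to an OK/Cancel
# prompt on an assessment the system has already made.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    classification: str   # e.g. 'combatant', 'unknown', 'non-combatant'
    confidence: float     # the system's own confidence in that label

def operator_confirms(track: Track) -> bool:
    # In practice the operator may have seconds, a low-resolution feed
    # and no independent information - so this easily becomes a rubber stamp.
    answer = input(f"Engage track {track.track_id} "
                   f"({track.classification}, {track.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def supervised_engagement(tracks: list[Track]) -> list[int]:
    engaged = []
    for track in tracks:
        if track.classification == "combatant" and operator_confirms(track):
            engaged.append(track.track_id)   # 'the monkey flips the switch'
    return engaged
```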

The surprise of the day came from Ron Arkin, a leading roboticist based at the Georgia Institute of Technology. He suggested that autonomous robots could be more ethical than their human counterparts. The joke is that military training and discipline are all about turning humans into robots, so perhaps using robots to begin with could be better.

The bar for a robot to be more ethical than the average human soldier may not be all that high. Research done by US Army Medical Command on attitudes of soldiers coming back from operations gave worrying results. For example, less than half thought that non-combatants should be treated with respect, and more than a third thought torture was OK if it saves the lives of their comrades.

An autonomous or supervised weapon could have ethical behaviour designed in. For example, it may be able to recognise medical symbols on vehicles, or detect that an individual is wounded or surrendering. Robots can suffer from poor information or recognition just as humans can, but are not encumbered by the psychological aspects of the 'fog of war' that can lead to poor decision making.

Even more than that, a machine does not have an automatic right of self-defence, so it can tolerate a higher level of doubt. A soldier who feels under threat can defend themselves, even in uncertain circumstances.
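
One way to picture such designed-in ethical behaviour is as a veto layer sitting between recognition and any decision to fire, one that resolves doubt by holding fire because the machine has nothing of its own to defend. The sketch below is a hedged illustration using invented labels and thresholds, not a description of Professor Arkin's architecture or of any fielded system.

```python
# Illustrative sketch (not any fielded system): an ethical veto layer
# between recognition and engagement. Because the machine has no right
# of self-defence, doubt is resolved by refusing to fire.
from dataclasses import dataclass

PROTECTED_LABELS = {"wounded", "surrendering", "medical", "non-combatant"}
MIN_CONFIDENCE_TO_ENGAGE = 0.95   # hypothetical, deliberately conservative

@dataclass
class Recognition:
    label: str         # e.g. 'combatant', 'wounded', 'medical'
    confidence: float  # recognition confidence, 0.0 - 1.0

def permit_engagement(recognitions: list[Recognition]) -> bool:
    """Return True only if engagement is clearly permissible."""
    for r in recognitions:
        # Any plausible sign of protected status vetoes engagement.
        if r.label in PROTECTED_LABELS and r.confidence > 0.05:
            return False
    # Require a positive, high-confidence identification of a combatant;
    # otherwise hold fire - the machine can afford to wait, a human under
    # threat often cannot.
    return any(r.label == "combatant" and r.confidence >= MIN_CONFIDENCE_TO_ENGAGE
               for r in recognitions)
```

The design choice doing the ethical work here is the asymmetry: weak evidence of protected status blocks engagement, while only strong, positive evidence of a lawful target permits it.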

With the current capability and sophistication of recognition software, this may seem something of an aspiration rather than a reality. Professor Arkin and others openly acknowledge the gaps in capability, but are attempting to bridge them in their research.

Crucially, unless engineers can meet these recognition and ethical requirements, it may well be illegal to make use of an autonomous system. If a system cannot tell the difference between combatants and non-combatants, between an aggressor and someone injured or surrendering, or between an ambulance and a tank, then the circumstances of use may be severely limited to the point of uselessness. Where the rubber of this argument hits the road is in two places:

Firstly, if you are a commander looking to delegate fire-authority to a system that does not meet these ethical tests, you may (or may not) be committing a war crime by doing so. If, in a spirit of inquiry, you are willing to test your decision in court, fine, but a duty of care to soldiers in the field means they should not be put in the role of legal guinea pig.

The second area is in reputational damage. There is little doubt that incidents such as those that occurred in the Abu Ghraib prison and the questions of international law and morality surrounding detention at Guantanamo Bay have done huge damage to the international standing of the United States.

Where there is a question mark and a huge emotional charge around the use of a weapon, it has the potential to inflict collateral damage on the side that uses it. In the era of 'effects-based' warfare focused on holistic outcomes rather than purely military objectives, thought must be given to what the long-term effects truly are.

Finally, the most chilling topic of the day was proliferation. For most nations three things are required to deploy these weapons. The first is the existence of and access to the science and technology. The second is the ability to engineer the system. The final one is the will to use them, which is impeded by law, ethics and public opinion.

As technology marches on, the first two requirements will move from an implied requirement for a multi-billion dollar research capability to a mail order catalogue and skills akin to Airfix model assembly. At that point, all that will stand in the way is the will to use them.

There will be those for whom will is a simple matter unencumbered by ethical or reputational issues. There will be those who could manipulate perceptions of another nation's justifiable and legal use to create doubt and confusion to cover their own immoral and illegal actions. There are many examples of this in nuclear proliferation.

Despite clear commitments from the major military powers to develop and use weapons across the range from remote to fully autonomous operation, there is still much that remains unclear. This event brought together the full range of people needed to start answering these questions for the UK, but it is just a start.

For the military, and for those with family members on operations, this technology is clearly and almost unequivocally beneficial to the UK. The goal is to work out now what, in future, we will wish we had done to mitigate the potential negative side-effects. Ideally, our soldiers, our nation, our standing and our values all need to be protected at the same time.

July 2008

This article first appeared in the May 2008 issue of ITNOW.