...or they might. But is it legal, ethical or moral? Important questions to ask if you're the designer, purchaser, commander or soldier deploying one.

The RUSI event that I mentioned in a previous post was indeed fascinating. What was clear to me was that experts from the many and varied sides of the subject had not previously met and discussed the issues in any detail. Very exciting!

Noel Sharkey, familiar to many as a judge on the TV series 'Robot Wars', managed to generate a lot of press coverage - and it is not every day that The Telegraph front page and The Register have so much in common, even if their takes were slightly different. The really annoying thing, of course, was that none of this coverage mentioned how roundly sensible and heroic the BCS was for helping to initiate this debate. Never mind...

Ron Arkin gave, for me, some of the most unexpected and interesting insights. In a technical report, he suggests (surprisingly, at least to me) that a robot soldier could potentially be more ethical than a human one. That certainly made me do a double-take. His theory is very logical, but he admits that it is predicated on a few things that are as yet unproven.

What we do know is that, thanks to the average human soldier, the ethical bar is not that high. For example, fewer than half think that non-combatants should be treated with respect, and more than a third think torture is OK if it saves the lives of their buddies.

There are also some key differences between a person and a robot. A person has the right to defend themselves, but a robot does not (as yet - and don't get me started on that one!). That means a robot can endure a level of 'personal' risk before engaging a target that a human could not be expected to. There are several other attributes that could similarly make a robot a more humane soldier, as outlined in Ron Arkin's presentation and paper.

Ron's hypothesis is that a robot could have an 'ethical governor' that checks every proposed action against programmed rules of engagement grounded in international law. In fact, it may well be that an autonomous robot programmed to take life deliberately without such a governor would be de facto illegal under the Geneva Conventions. Whether such a governor could be built is the open question, and he is approaching it in a very empirical manner: by trying to build one!
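To make the idea a little more concrete, here is a deliberately toy Python sketch of the shape such a governor might take. To be clear, this is my own illustration rather than Arkin's actual architecture: the `ProposedAction` fields, the rules and the thresholds are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """A lethal action proposed by the robot's planner (fields invented for illustration)."""
    target_is_combatant: bool      # discrimination: is the target a lawful one?
    protected_site_nearby: bool    # e.g. a hospital or place of worship in the blast radius
    expected_civilian_harm: float  # proportionality input (arbitrary units)
    military_advantage: float      # proportionality input (arbitrary units)

def ethical_governor(action: ProposedAction) -> bool:
    """Return True only if the proposed action passes every encoded constraint.

    A toy stand-in for the governor idea: the planner proposes, this check
    must pass before anything is executed, and it can only veto, never fire.
    """
    if not action.target_is_combatant:      # discrimination principle
        return False
    if action.protected_site_nearby:        # protection of civilian objects
        return False
    if action.expected_civilian_harm > action.military_advantage:  # proportionality
        return False
    return True

# A proposal that fails the (made-up) proportionality test and is vetoed:
proposal = ProposedAction(
    target_is_combatant=True,
    protected_site_nearby=False,
    expected_civilian_harm=5.0,
    military_advantage=1.0,
)
print(ethical_governor(proposal))  # False
```

The design point the sketch is meant to convey, as I understand Ron's proposal, is that the governor sits between the planner and the actuators: it can suppress a lethal action but never initiate one. The hard part, of course, is everything the toy version assumes away - reliably populating those inputs from sensor data in a war zone.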

Of course, the technology currently in use by UK forces is very much tied to human decisions. However, that may well change. Autonomy is a continuum, so it is very much creeping up on us. It's coming by design, too: the stated policy of the US Department of Defense is to progress towards autonomous systems capable of making kill decisions.

There is a huge amount more to say, and much more to discuss... but not right now. RUSI will be writing up a full report on it, and I will also try to produce a full article in the next few weeks to help with any insomnia you may be experiencing.

In the meantime, the last thing I wanted to mention was a question that keeps coming up on this topic: why is the BCS interested in this?

The answer is that ethics is perhaps one of the most important issues for the BCS. In this case, it is software engineers and computer scientists who will be instrumental in turning this from fantasy into reality. Many of the debates need to take place in the political/military/legal domains, but the results will have an impact on the social conscience of our community.

Scientists and engineers like Ron and Noel are also well placed to predict what is possible with technology, and what kinds of issues may arise. In some ways, this is our Manhattan Project, and the echoes of that are still being felt by the physics community...