Justin Richards reports on a recent BCS Wiltshire Branch meeting he attended that featured guest speaker Professor Alan Winfield from the Bristol Robotics Laboratory.

After a warm welcome and an introduction by BCS Wiltshire Chairman John Cooper, who said he felt Winfield had written the definitive book on robotics, namely ‘Robotics: A Very Short Introduction’ (Oxford University Press, ISBN 9780199695980), Professor Winfield began his talk, rather unusually, by sharing his concerns about robotics in four key areas:

Robots displacing jobs - by automating ever more processes, robots are increasingly pushing humans out of the labour force, and this trend looks set to continue;

Robots that can pull the trigger - Winfield felt that only humans, and not machines, have the right to decide whether another human lives or dies in a military scenario, and he has already signed a petition to ban the development of autonomous robot weapons;

Robots that appear to be intelligent, but are not - robots such as the Actroid from Osaka University, while appearing nearly human, are in fact still relatively ‘thick’: the brightest robot currently in existence is probably only about as intelligent as a lobster. By pretending the machines are something they are not, Winfield feels, their creators are committing a kind of fraud;

Robots that induce an emotional reaction or dependency - cute robots are becoming quite popular, especially among enthusiasts and, in some countries, with the lonely and vulnerable. While he sees there might be a case for ‘cute robots’, such as a robot baby seal, in nursing homes where residents cannot take their live pets with them, he worries that, following earlier cultural phenomena like Tamagotchi pets, ‘cute robots’ can create more problems than they solve.

It became clear from these opening remarks that Professor Winfield believes strongly that robots can cause unintended physical, psychological, socio-economic and even environmental harm. He did later defend robots, however, saying that the media often misrepresents robots and their creators, painting what they can or might do as an evil ‘take-over’ conspiracy fuelled by popular culture such as The Terminator films.

Alan then moved on to talk about five ethical principles of robotics that he and fellow robotics researchers drew up under the UK EPSRC / AHRC. These are:

  1. Robots should not be designed solely or primarily to kill or harm humans, except perhaps in a few cases of national security.
  2. Humans, not robots, are responsible for a robot’s actions; robots should therefore always be designed to operate within the law.
  3. Robots are products and therefore should always be created to be both safe and secure.*
  4. Robots are manufactured artefacts; their creators must avoid any kind of deception, and it should therefore always be obvious what they are.**
  5. The person with legal responsibility for the robot should be clearly attributed on the robot; for example, robots should be made with ‘robot licence plates’ embedded into them.***

*The field of robotics desperately needs standards and legislation specific to it. Drones, for example, which are a kind of robot, need far more standards and legislation applied to them.

**Professor Winfield also feels that ‘gendered robots’ are a really bad idea, and that ‘sex robots’ are no better. The existence of such things would, he feels, cause morals to deteriorate, as ‘having sex with objects’ would further dehumanise humans and human sexuality in general.

***If a robot has responsibility for its actions then it needs to be attributed with ‘personhood’, which opens up a whole different can of worms!

This draft standards document for robotics is currently open for discussion and can be found on the British Standards Institution’s website. The consultation closes later this month, December 2015.

At this point in the proceedings, BCS Wiltshire Chairman John Cooper suggested that Professor Winfield did not really seem to be a fan of robots, and put that forward as the opening question of the first question-and-answer session. Winfield responded that he was indeed more a fan of modelling robots in a laboratory environment than of robots operating in actual real-world settings.

Ethical robots

The second half of Professor Winfield’s seminar was entitled ‘Ethical Robots’. In it he argued how important it is that robot creators instil in their creations a sense of the consequences of their forthcoming actions for those (robot or human) around them.

Winfield commented on science fiction author Isaac Asimov’s ‘Laws of Robotics’, remarking that they were very good, particularly in prohibiting harm to humans through inaction as well as action. He then told the group about some of his laboratory’s recent research into ethical robots and its development of what they call a ‘consequence engine’: essentially a simulator within a robot’s ‘brain’ that predicts the outcomes of both the robot’s actions and its inactions, tested within a limited ‘rescue scenario’ enacted with another, similar robot.

Every robot must choose its next action, and this action selection is central to the creation of an ethical robot. The consequence engine lets a robot loop through each of its possible next actions, acting as a simulator for the different possible consequences of each before the robot commits to one. By applying a safety/logic layer on top, written as simple pseudo-code rules, the team can test these principles satisfactorily in the laboratory. The consequence engine runs twice per second, so every half second the robot can ‘see’ up to 300 seconds into its possible futures. The sketch below illustrates the idea.
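To make that loop concrete, here is a minimal, hypothetical sketch in Python. The one-dimensional corridor, the ‘blocking’ behaviour and the harm scores are this article’s illustrative assumptions, not the Bristol Robotics Laboratory’s code; the real consequence engine drives a full robot simulator rather than a toy world model.

    from dataclasses import dataclass

    HOLE = 10  # position of the hazard in a toy 1-D corridor (assumed)

    @dataclass
    class World:
        robot: int   # the ethical robot's position
        other: int   # the other robot, which drifts towards the hole

    def simulate(w: World, action: int) -> World:
        """Internal world model: predict the next state if our robot
        moves by `action` (-1, 0 or +1). The other robot steps towards
        the hole unless ours is adjacent and 'blocks' it (an assumed,
        simplified physics)."""
        robot = w.robot + action
        blocked = abs(robot - w.other) <= 1
        other = w.other if blocked else w.other + 1
        return World(robot, other)

    def harm(w: World) -> int:
        """Score how bad a predicted state is. Harm to the other robot
        outweighs harm to ourselves: an Asimov-flavoured ordering."""
        score = 0
        if w.other >= HOLE:
            score += 10  # the other robot falls in: worst outcome
        if w.robot >= HOLE:
            score += 5   # we fall in: bad, but preferable to the above
        return score

    def rollout(w: World, action: int, horizon: int = 6) -> int:
        """Consequence engine core: apply `action`, then simulate the
        world forward `horizon` steps, accumulating predicted harm."""
        w = simulate(w, action)
        total = harm(w)
        for _ in range(horizon - 1):
            w = simulate(w, 0)  # assume we then hold position
            total += harm(w)
        return total

    def choose_action(w: World) -> int:
        """Action selection: evaluate every candidate next action --
        crucially including 0, i.e. inaction -- and pick the one whose
        simulated consequences are least harmful."""
        return min((-1, 0, +1), key=lambda a: rollout(w, a))

    if __name__ == "__main__":
        world = World(robot=5, other=7)
        for step in range(4):
            action = choose_action(world)
            world = simulate(world, action)
            print(f"step {step}: action {action:+d} -> "
                  f"robot at {world.robot}, other at {world.other}")

Run as written, the robot moves alongside the other robot and then holds still to block its drift towards the hole; selecting inaction is as deliberate a choice as moving, which is exactly the point Winfield draws from Asimov’s wording.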

Most recently they have rerun their initial experiments with two humanoid robots. The ethical robot saved the other robot from a ‘simulated hole’, but only about half the time; it seems they have created a pathologically indecisive robot!

Rounding off his talk Professor Winfield posed a number of intriguing questions, including the following ones:

Do we have a moral imperative to try to build ethical robots, if we can?

Why would we build amoral cognitive systems anyway?

Should moral machines have rights of their own?

As you can see, this is truly an ethical minefield, a kind of ethical Pandora’s box, if you will. It certainly became clear throughout Alan’s seminar that there is a real need for transparency and clarity about what robots can and cannot do. Professor Winfield is, understandably, genuinely worried about the unintended consequences of robotics going forward.

However, on a more optimistic note, he said that by building robots we continue to learn more about ourselves as a species, and that he found it gratifying that the robotics community is becoming increasingly serious about its ethical responsibilities and now seems to have a heightened awareness of the ongoing ethical situation it finds itself in. One major problem remains, however: humankind can still program bad robots as well as good ones!