One phrase I see in writing and hear at conferences is the idea that 'when a human makes a mistake, one person learns; when an AI makes a mistake, all future AIs can learn.'
The question this raises is whether it is always true and, if not, under what circumstances it holds.
Developments in AI are certainly impressive. The recent example of a film script written by an AI pushes the limits of the imagination. The cynic might point out the lack of creativity in much formulaic film-making, but for me at least this is genuinely groundbreaking.
I would argue that for games such as Chess or Go the phrase above will be true, or close enough to be regarded as robust. However, it is important to consider where the limits might lie.
It is often argued that even where machine learning matches or exceeds human performance, as it now does in an increasing variety of fields, this does not mean it is doing the task in a way comparable to human intelligence.
The central challenge, to my mind, arises when we look at an AI's response to risk. The algorithms behind the Google driverless car initially made it too cautious; once it was made to behave more like a human driver, it caused its first accident.
Around 20 years ago, I was waiting to turn right at a T-junction. A car was slowing down and signalling to turn right onto the road I was on. The car flashed me and the road was clear, so I pulled out. The other car hit me, although only at around 5 mph.
It turned out that the driver was French: he understood flashing his headlights to mean that he had right of way, whereas I took it to mean he was letting me out. How should an autonomous vehicle have responded?
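As a thought experiment, here is a minimal sketch of what locale-dependent signal interpretation might look like. Everything in it, the region codes, the FlashMeaning categories and the interpret_flash and should_pull_out functions, is my own illustrative invention, not any real vehicle's API.

```python
# Hypothetical sketch only: the same physical signal (a headlight flash)
# carries different conventional meanings in different regions.
# None of these names come from a real autonomous-vehicle system.

from enum import Enum, auto

class FlashMeaning(Enum):
    YIELDING = auto()            # "after you" (a common informal UK reading)
    ASSERTING_PRIORITY = auto()  # "I have right of way" (the French driver's reading)
    UNKNOWN = auto()

# Conventional (not legal) readings of a headlight flash, by region.
FLASH_MEANING = {
    "UK": FlashMeaning.YIELDING,
    "FR": FlashMeaning.ASSERTING_PRIORITY,
}

def interpret_flash(region: str) -> FlashMeaning:
    """Return the locally conventional reading of a headlight flash."""
    return FLASH_MEANING.get(region, FlashMeaning.UNKNOWN)

def should_pull_out(region: str, road_clear: bool) -> bool:
    """Pull out only if the flash locally means 'yielding' and the road is
    clear; treat UNKNOWN or ASSERTING_PRIORITY as 'wait'."""
    return road_clear and interpret_flash(region) == FlashMeaning.YIELDING

# The T-junction above: a UK reading says go, a French reading says wait.
print(should_pull_out("UK", road_clear=True))  # True
print(should_pull_out("FR", road_clear=True))  # False
```

Even this toy exposes the deeper problem: at the T-junction the relevant convention was the other driver's, not the local one, and no lookup table tells you which convention a stranger on the road is using.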
Once we ask an AI to make decisions in a social environment, to what extent can it avoid cultural issues?
Would the same algorithms for safe driving work in Paris, Shanghai, Hanoi, Boston and London? There are very different cultural norms at work here.
Last year, in a traffic jam in Xi'an, I saw cars move onto the pavement and dodge pedestrians and cyclists before re-joining the carriageway. I have not seen that regarded as acceptable behaviour in the UK.
Different cultural norms also apply in areas such as privacy and security. Could an AI respect the differences and work with them, or would it need to be programmed differently in different locations? How should an AI on a UK tourist’s mobile device behave in Paris or Berlin?
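Purely as an illustration of 'programmed differently in different locations', one could imagine per-jurisdiction policy profiles that a device loads as it travels. The profile fields and the values below are hypothetical assumptions of mine, not statements of actual law anywhere.

```python
# Hypothetical sketch only: per-jurisdiction policy profiles for a travelling
# device. Field names and values are illustrative, not actual law.

from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacyProfile:
    share_location_by_default: bool
    retain_history_days: int
    voice_recording_allowed: bool

# Illustrative values only.
PROFILES = {
    "UK": PrivacyProfile(False, 30, True),
    "FR": PrivacyProfile(False, 14, True),
    "DE": PrivacyProfile(False, 7, False),
}

# Fall back to the most restrictive settings when the locale is unknown.
STRICTEST = PrivacyProfile(False, 0, False)

def active_profile(jurisdiction: str) -> PrivacyProfile:
    """Load the local profile, defaulting to the strictest settings."""
    return PROFILES.get(jurisdiction, STRICTEST)

# A UK tourist's device arriving in Berlin would switch profiles on entry.
print(active_profile("DE"))
```

The awkward part, as the next paragraphs argue, is that any such table is a snapshot of norms that drift over time, so whoever maintains it is encoding a culture at one particular moment.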
I cannot imagine, in the foreseeable future, a world in which these differences vanish to the point where a universal algorithm could work even in principle. It may suit some tech companies to envisage such a world, but I suspect they will learn some hard lessons.
The more awkward problem is that cultures change over time.
One example from my early adult life: juries rejected the use of the Official Secrets Act against civilians in a number of high-profile cases, even though the law had not changed. The jurors found the heavy hand of the state unacceptable.
The evolution of cultural norms may have generational aspects, with both cohort effects and subsequent backlashes. How can we best make use of advances in AI without coming into conflict with desirable shifts in societal acceptability?
Back in the early 1980s I had a long conversation with a colleague working in Expert Systems and asked him whether it would be possible to create an expert physicist in 1895 and then update its capability to 1915, spanning the emergence of Quantum Theory and Relativity. The question is whether a system encoding one paradigm could ever generate the shift to the next; I remain unconvinced.
So, for me at least, we need to establish a framework within which to debate the optimum use of AI to enhance both society and the economy, rather than either worshipping or fearing it.
Michael Frayn wrote a very funny book, The Tin Men, back in 1965, in which he explored the automation of the press. The idea was not to replace the printers but the journalists: invent a headline and the story writes itself. We seem to be close to what was once a comedic fantasy.
My final challenge, though, relates to social intelligence itself. It is one thing for an AI to outperform a human at Chess or Go, but when will an AI be capable of inventing a better game than Chess?