Harry Collins, sociologist and author of Artifictional Intelligence, spoke to Justin Richards MBCS about the quest for true AI. Harry gave his thoughts on what is limiting current progress and on the necessary shift in focus: away from the race to create a perfect reproduction of the brain, and towards the influence of society on the ability to learn.

Please could you provide a brief summary of the book?

The book is about the success of deep learning - and about why that success still doesn't mean that artificial intelligence reproduces human intelligence. The reason is that human intelligence is essentially collective, whereas the builders of artificially intelligent devices are trying to reproduce the human brain. But the human brain is not the locus of human intelligence; human society is.

What do you think the process of computer socialisation would look like then?

In chapter 7 of the book I have a couple of diagrams. Figure 7.1 on page 132 is a little cartoon of a person called Sam. Sam's brain, as sketched there, is designed the way Ray Kurzweil says the human brain works: as a hierarchical arrangement of pattern recognisers. That's its basic, crucial design. Other people who know about human brains have told me that it's far too simple and naïve, but I don't mind.

For argument’s sake I’m ready to accept it. There is feedback up and down the pattern recognisers in the brain and that’s how we come to see things or not see things. And that seems a fair enough description to me - nice and handy and acceptable to people who like thinking about computers. But the trouble is it’s wrong.

The correct model of the brain is like figure 7.2 on page 134 of the book. The difference is that in the correct model there's one extra level of neurons right at the top of the hierarchy, and that extra level is the neurons of the pattern recognisers in all the brains of all the other people in the social groups you associate with.

So, go back to the first brain: where does the top level get the notion of a cricket bat from? It must have got it from the society you live in, and if you live in the wrong kind of society you won't have a cricket bat to recognise. You're bound to veto it, because you haven't got anything that matches the cricket bat at the top level. The whole argument of the book can be summed up in these two diagrams. You've got to be embedded, and you've got to take account of that upper level of neurons, if you're going to understand human intelligence. And you've got to understand human intelligence not only as a bottom-up recognition process. The top-down recognition process comes from society, not from the brain itself, and the whole book is really about where artificial intelligence neglects the contribution of society.
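The contrast between the two diagrams can be sketched in code. This is a toy illustration, not Kurzweil's actual architecture or anything from the book: a pattern recogniser that matches features bottom-up, plus an optional top level of concepts supplied by the surrounding society. The feature sets and the "cricket bat" pattern here are invented for the example.

```python
# Toy sketch contrasting figure 7.1 (brain alone) with figure 7.2
# (brain capped by concepts drawn from society). Invented, illustrative.

def recognise(features, learned_patterns, social_concepts=None):
    """Bottom-up: match observed features against learned patterns.
    Top-down: the match only counts if society supplies the concept;
    otherwise it is vetoed at the top level."""
    for concept, pattern in learned_patterns.items():
        if pattern <= features:  # all the pattern's features are present
            if social_concepts is None or concept in social_concepts:
                return concept
            return None  # vetoed: no matching concept at the top level
    return None

patterns = {"cricket bat": {"long", "flat", "wooden handle"}}
seen = {"long", "flat", "wooden handle", "worn grip"}

# Figure 7.1: the brain alone - the pattern matches.
print(recognise(seen, patterns))  # cricket bat

# Figure 7.2: embedded in a society without cricket - the match is vetoed.
print(recognise(seen, patterns, social_concepts={"baseball bat"}))  # None
```

The same bottom-up match succeeds or fails depending entirely on the extra top layer, which is the point of the two diagrams.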

You mentioned the singularity in the book. Is the singularity ever going to happen in your opinion?

Well, we could, I presume, allow computers to manufacture other computers that would get cleverer and cleverer, but foreseeably there’s no reason to think that they would become cleverer than us because we don’t know how to build computers that would build other computers that would have the same level of sensitivity for context that we have. We just don’t know how to do it. We don’t know how we do it. How can we know how to build machines that can do it?

People also think that as computers get cleverer we’ll reach the point where we’ll be lucky if the computers treat us as pets or slaves or just wipe us all out. But why should they do that? The idea of very clever people being power mad actually comes from society. In this particular case it comes from James Bond. It’s Spectre and so on and so forth. But there are other human beings who aren’t power mad at all - take the monks of Tibet who live in monasteries and are very calm and contemplative - maybe the very clever computers will turn out like that.

To know whether the singularity is a dangerous thing or not you’ve got to know what kind of society the computers will have. And actually, it’s us who’ll feed in the society. So, if we don’t teach the computers to be power mad and vicious there’s no particular reason to think that they should be.

You mentioned Ray Kurzweil a couple of times. He says that he doesn’t think the Turing test will be passed until at least 2029. Are you thinking that he might have a point there?

He realises that it hasn't been passed yet, and I think he's dead right - I admire him for realising that it won't be passed for quite a while. The question is: will it be passed in 2029? I can't foresee a way that it will be, if it's a good enough Turing test.

I have an example that I advise my readers to try out on their mobile phones or computers. The basic idea actually goes all the way back to Terry Winograd. I'm going to write in an English sentence as follows: 'The trophy would not fit into the suitcase because it was too big' - and then I'm going to get Google Translate to translate it into French. Now I'm going to change one word: 'The trophy would not fit into the suitcase because it was too small.'

Now, in Google's translation 'it' comes out referring to the trophy, which is masculine [in French]. That's right for the first sentence, but in the second sentence 'it' should refer to the suitcase, which is feminine [in French]. The AI hasn't picked up on what's going on in the world here, whereas a human being would understand immediately that if you say it's too small it means the suitcase, and if you say it's too big it means the trophy. This is called a Winograd schema. So, the test will involve Winograd schemas.
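The structure of the test can be sketched without any translation service at all. This is a hedged illustration: the one-rule 'knowledge base' below (things that are too big are the contents, things that are too small are the container) is invented for the example, and the naive resolver is a stand-in for any system that binds the pronoun by grammatical salience rather than world knowledge.

```python
# Illustrative sketch of a Winograd schema: changing one word flips the
# pronoun's referent, and only world knowledge tells you which way.

def resolve_pronoun(sentence):
    """Resolve 'it' using one piece of world knowledge: the thing that
    is too big is the trophy (the contents); the thing that is too
    small is the suitcase (the container)."""
    if "too big" in sentence:
        return "trophy"
    if "too small" in sentence:
        return "suitcase"
    return None

def naive_resolve(sentence):
    """What a system without world knowledge might do: always bind the
    pronoun to the first, grammatically salient noun."""
    return "trophy"

s1 = "The trophy would not fit into the suitcase because it was too big"
s2 = "The trophy would not fit into the suitcase because it was too small"

print(resolve_pronoun(s1), naive_resolve(s1))  # trophy trophy
print(resolve_pronoun(s2), naive_resolve(s2))  # suitcase trophy
```

The two resolvers agree on the first sentence and diverge on the second, which is exactly what makes the schema a good test: grammar alone cannot separate the cases.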

My book has got various tests like this that show people, ‘Look, don’t be fooled by computers. Here are some tests you can do.’ I call them disenchantment devices; they stop you being enchanted by the apparent intelligence of computers.

Do you think we’re giving AI a bit too much credit at the moment, and the abilities of those working in AI development?

Yeah, I think so. Although I will say this: in my hanging around with members of the AI community, I've been reassured to find that the people who are really hyping it are quite a small number. The majority of people involved in the development of AI see themselves essentially as doing a good engineering job, and don't get involved with the hype. So, I think it would be wrong to tar the whole AI community with that label. Most of the AI people I know are very modest about its capabilities.

You describe different levels of AI in the book, which is very interesting and takes a while to get your head around. Do you think that AI will traverse to level 2 and beyond?

We're waiting to see level 3 and, as I said, I don't think it will get there - not by the current means, anyway. Somebody has to make a breakthrough, and I don't think that breakthrough will come about unless people understand the problem, and understand that the problem is not reproducing the brain but making social machines.

What was your personal reaction to IBM’s Watson beating the champions of the TV game Jeopardy?

Well, at the time this happened I wasn't thinking too much about AI, so I didn't really notice it, to tell you the truth. It's a pretty impressive achievement, and Kurzweil explains how it's done - I've explained how Kurzweil explains how it's done in the book. And when you see how it's done, you realise it's not how humans work. It's done by brute force, in the same way as Deep Blue beat Kasparov at chess. So, it's pretty impressive, but it's not humanlike intelligence.

One of the great ethical conundrums that’s been raised quite a bit is the fact that just because we can do something it doesn’t mean that we should. What are your thoughts on that?

I think a lot of people who are really obsessed with ethics are scoundrels. They haven’t got anything better to do so they get really obsessed with ethics. As a sociologist I now have to jump through so many ethical hurdles in order to do my work. So, I’m fed up with all this ethical stuff and I think it’s over-exaggerated. There are villains about and you need some sort of ethical oversight but there’s no need to turn it into such a huge shibboleth.

Do you have any closing comments on the future of AI?

Artificial intelligence is not about building a brain; it's about building a society. The sooner people realise that, the sooner they'll understand both the deficiencies of artificial intelligence and, if there is going to be a breakthrough, where to look for it.