Kevin Warwick, Professor of Cybernetics at the University of Reading, recently spoke to BCS Multimedia Editor, Justin Richards, about his work on cybernetics and artificial intelligence.

Can you tell us a little bit about what you and your team do here at the University of Reading?

We’re interested in artificial intelligence, robotics and biomedical systems; basically all the areas where humans and technology, or brain cells and technology, come together. We look at the whole system, making it work as though it’s a life form in itself.

Software can amplify human intelligence, but there are limits. What do you think these limits might be?

I don’t know that I do see limits when we look at enhancing human intelligence, as long as that intelligence is linked to the technology. If I plug my brain into a computer then we’re really looking at the combined intelligence, the artificial intelligence and the human intelligence, and it is difficult there to see limits. So it’s improving human intelligence not just with software but potentially with hardware, which is quite a powerful combination.

You’re famous for having had technological implants. How did that come about, and did you achieve your aims?

I have had a couple of implants myself. The first was a radio frequency identification device (RFID), which was implanted in my left arm. It worked fine; it wasn’t technically a major step forward, but I think, philosophically, at the time it was a shift, because I didn’t have a medical problem. What the implanted technology did was identify me to the computer in the building here at Reading, so as I moved around, the computer opened doors for me, switched on lights, opened the lab and so on. It said ‘hello’ when I came through the front door. So it worked very well, partly to show what you could do with an intelligent building: the computer can track a person, hold some information about them, and do things for them automatically.
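The identify-and-react loop he describes could be sketched roughly as follows. This is a toy illustration only; the tag ID, the name and the list of actions are invented, not details of the actual Reading system.

```python
# Hypothetical sketch of an intelligent building reacting to an implanted
# RFID tag. Tag IDs, names and actions are invented for illustration.

KNOWN_TAGS = {
    "0xA1B2C3": "Kevin",   # ID assumed to be burned into the implanted chip
}

def on_tag_read(tag_id, location):
    """Called by a door-frame reader whenever a tag passes by."""
    person = KNOWN_TAGS.get(tag_id)
    if person is None:
        return ["access denied"]        # unknown tag: do nothing for them
    actions = [f"unlock door at {location}",
               f"switch on lights at {location}"]
    if location == "front door":
        actions.append(f"say hello to {person}")
    return actions
```

The point of the sketch is that the building's behaviour is driven entirely by passive identification: the person carries no interface, the reader does the work.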

The last implant that I had was an array of 100 electrodes, called BrainGate, which was fired into my nervous system. It involved two hours of surgery. With my nervous system not only linked directly to a computer but also plugged into the internet, I could have extra sensory input: I could sense the world like a bat, with ultrasonic senses. As I moved around, even with a blindfold on, my brain was receiving pulses that gave me an indication of how far away objects were.
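The ultrasonic sense amounts to mapping a range reading onto a stimulation rate, so that nearer objects produce faster pulses. A minimal sketch, with invented ranges and rates rather than the actual experimental values:

```python
# Illustrative only: convert an ultrasonic range reading into a stimulation
# pulse rate, so closer objects pulse faster. Numbers are invented.

def distance_to_pulse_rate(distance_m, max_range_m=3.0,
                           min_hz=1.0, max_hz=50.0):
    """Map a measured distance (metres) onto a pulse frequency (Hz)."""
    d = min(max(distance_m, 0.0), max_range_m)   # clamp to sensor range
    nearness = 1.0 - d / max_range_m             # 1.0 = touching, 0.0 = far
    return min_hz + nearness * (max_hz - min_hz)
```

With a mapping like this, the brain only has to learn one regularity: faster pulses mean something is close.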

But one of the most exciting things from the last implant was that my wife also had electrodes implanted in her nervous system. With electrodes in both our nervous systems, we essentially communicated telegraphically, nervous system to nervous system. I think that’s where we’re going in the future. It won’t just be speaking to each other, which is a pretty poor way of communicating; it will be communicating via a network, brain-to-brain.

You mentioned communicating through your wife’s implants. How does that manifest itself; how does that feel?

With the implant that I had, it took me about six weeks to learn to recognise electrical pulses. Literally, we were putting in what are called bi-phasic pulses, positive and negative, so you don’t get any build-up of charge in the body; it’s a standard way of stimulating brain cells and neural tissue.
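The charge-balancing idea behind a bi-phasic pulse can be shown in a few lines: the positive and negative phases carry equal and opposite charge, so the net charge injected into the tissue is zero. Amplitudes and timings here are arbitrary illustration values, not the actual stimulation parameters.

```python
# Sketch of a charge-balanced bi-phasic pulse: an equal positive and
# negative phase, so no net charge accumulates in the tissue.
# Amplitude and phase length are invented illustration values.

def biphasic_pulse(amplitude_ua=100.0, phase_samples=5):
    """Return one bi-phasic pulse as a list of current samples (microamps)."""
    positive = [amplitude_ua] * phase_samples    # stimulating phase
    negative = [-amplitude_ua] * phase_samples   # charge-recovery phase
    return positive + negative

pulse = biphasic_pulse()
assert sum(pulse) == 0    # net injected charge is zero
```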

So my brain had to learn to recognise electric pulses, and when it had done that, after a month and a half, I linked up to my wife, who had electrodes in her nervous system. Literally what happened was, when she closed her hand, my brain received the pulse and was stimulated, so I knew every time my wife closed her hand. It’s a telegraphic, very simple signal, but essentially my brain knew what was happening, so it didn’t feel painful or hot or anything like that.
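Functionally this is a one-bit channel: a hand-close event on one side becomes a stimulation on the other. A toy sketch of that relay, with invented names (the real link ran over the internet between nervous systems):

```python
# Toy illustration of the one-bit "telegraphic" channel described above:
# each hand-close event on the sender's side triggers one stimulation
# pulse on the receiver's side. Event names are invented.

def relay_hand_close(sender_events, stimulate):
    """Forward each 'hand closed' event as one call to stimulate()."""
    delivered = 0
    for event in sender_events:
        if event == "hand closed":
            stimulate()
            delivered += 1
    return delivered
```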

What is fantastic about your brain is that it can adapt, and it adapts to new signals. Just as now, if somebody is looking at something, their brain is receiving electrical pulses. It’s not painful in any way; the brain understands the pulses. That’s how it sees things, that’s how it hears things, and what we have done is grab hold of some nerve fibres and use them for our own purposes, essentially communication.

The brain is very good at adapting to new things, which means it can perhaps be fooled. It might have been possible for us to put pulses into my brain for other reasons, say because an object was close, and my brain would have thought that was my wife sending me signals. But that wasn’t part of what we were trying to do, so when my nervous system was linked up to my wife’s, I knew that every time my brain received a pulse, that was my wife sending me a signal.

What are your thoughts on nanotechnology and its applications for health?

I think nanotechnology has potential uses; the mini robot, or micro robot, as we’re hearing about. There is still some way to go practically. The possibility of ingesting them, the robot moving around the body and carrying out an operation: I think there are possibilities there, but I think before nano we need to look at the micro level.

If we’re interacting with the brain or the nervous system, the micro level is fine. Neurons, for example, are typically maybe about 10 micrometres in diameter, so if we’re trying to communicate with brain cells, to pick up signals and stimulate them, we don’t really need to go down to the nano level. It has also been found that with certain materials in the body, such as silicon, if you build them at the nano scale the body can tend to eat them; it can dissolve them. That wouldn’t be very good, as a silicon nano robot wouldn’t last very long in the body, whereas a silicon micro robot, or a micro-scale piece of silicon technology, might last some time.

So there are all sorts of issues with nanobots, not just mechanically, as to what they can do, how they can move and where they get their energy from, but also the materials they’re made of. I think there’s still a lot we can do at the micro level when we’re trying to send signals into the body and pick up signals from the body. Maybe microbots in the body might be a little bit too big, but perhaps we need to investigate that to some extent before taking the step to nanobots.

You’ve stated in the past that humans can be improved on through technology. Do you see our interactions with technology as being an integrated part of our evolutionary arc or as a kind of add-on?

I think we can see already that people with disabilities can be helped considerably by technology. Many people now have artificial hips, or cochlear implants to help hearing; artificial retinas are starting to come in; heart pacemakers; the list goes on.

And that’s even before we get to deep brain stimulation, people having electrodes right in the brain. So I think it’s very much there for helping people with disabilities, and we can push it forward, but the possibility is there not just to use it for therapy but to use it for enhancement, and I think it is potentially an evolutionary step.

We’ve moved forward with technology; we’ve seen incredible progress with computers, particularly technology to do with communication (social networking very recently), and now I think we’re ready for the next step, to fuse much more closely, because we still have an enormous problem with the interface: getting signals from the brain into a computer and from the computer into the brain.

There’s an interface blockage, a problem there: you have to communicate by converting the electro-chemical signals in your brain into mechanical signals, either pressure waves or literal movements. Whereas if you can just send signals straight from your brain to your computer, you improve the interface, and improve the bandwidth of the interface, enormously; that will move things forward, and you become one with the technology in an integrated and evolutionary sense. So whilst we have the human there and the computer here, there are issues with the interface; but when they come together, the human and the computer merge. I see that’s the way we’re going; we have to go that way.

Technology quickly becomes outdated, so if you were to insert parts of technology into your body or brain, you would have the problem of having technology taken out of your brain and put back in every decade; that doesn’t sound very easy to me.

I don’t know that it’s necessary to put pieces of technology into your body. The main thing, if you like, is that the main device is a port, an input/output port, particularly to the brain. It’s a bit like an electric plug and socket, which you can change a little, but which has been pretty standard for a long, long time.

So, if you remember Keanu Reeves in The Matrix, he had a big port in the back of his head. I don’t know that we need anything that big in practice, but the port is that sort of thing. You might not update the port, but you can certainly update the technology you’re communicating and interacting with, effectively plugging it into the brain: not physically into the brain, but via the port, which communicates with it.

So I don’t know that I see technology improving as a problem. It’s going to improve, it’s going to develop, it’s going to become much more powerful, it’s going to become smaller and so on and so forth, but as long as you have a good port connection that is bi-directional, that’s fine. You might need to upgrade that connection occasionally, but the technology can improve and you benefit from it.

You’ve mentioned before, in articles, the idea of humans forgoing speech and communicating through their minds by way of thought signals. Can you explain this in more detail?

Communication, for humans, is critically important to our intelligence. People have compared us with other creatures, but the key thing about our intelligence is how we communicate. Certainly if you look at the military domain, weapons can advance, but communication is the important thing.

The same thing applies in the financial sector, in business and so on. Communication is the issue, and when we look at communication for humans it’s still a problem; it’s still in the dark ages, if you like. We take highly complex electro-chemical signals, which contain thoughts, images, emotions, ideas, feelings, all sorts of things, and if we want to communicate with another human we convert those complex signals in our brain into a trivial coded pressure wave, which is what I am doing now, and which bears almost no relation to the original thoughts.

So, if we can be serious about this, we should take a hint from how technology communicates: you don’t convert signals from one computer into a pressure wave and then convert them back again; that would be stupid. What we need to do is take the signals and plug straight into the technology, so that ultimately we will be able to communicate with it not only in terms of some sort of language, as we do anyway, but in terms of thoughts and feelings and emotions. If you are frustrated that the computer is not responding, it will know that you’re frustrated that it’s not responding. At the moment it doesn’t have a clue whether you are frustrated or happy. But it will know you’re happy when it’s doing something right, and it can learn from that; or if you’re angry, it will know.

I think the advantages of not only a computer knowing, but another person knowing, are enormous. It will improve the way we communicate tremendously; we’ve got to go there. I can’t see how we can stay in this old-fashioned way much longer.