Hello and welcome to this BCS, The Chartered Institute for IT, interview. Today I’m with Grady Booch, Chief Scientist for Software Engineering at IBM Research.

Later this evening you are giving this year’s Lovelace Lecture. Could I ask what you will be talking about?

Well, in my professional journey I’ve been moving from software architecture and software engineering towards opening the curtain of computing to the general public, and the lecture you are going to hear tonight is where that journey has taken me. My wife and I have been developing, for the last five years, a major documentary on computing and the human experience. One of the themes we’re attending to is the end consequences of computing: where does it lead us, relative to computing and humanity? We even ask the question: ‘Is the mind computable?’ If we get to that point, it has enormous consequences for us, personally and societally.

You gave the Turing Lecture in 2007 and you spoke to the BCS then. Obviously software has moved on quite significantly since then, so what are some of the biggest changes that you have seen over these six years?

Well, I’m going to question your premise: has software really moved on that far? The way I would characterise it is that software has woven its way into the interstitial spaces of the world far more in the last six years than ever before; with that I would agree. I would also observe the internet of things and the movement of software to the edges of society. Now that we have the explosion of mobile devices - your tablet, my phone, all these things that push software out to the individual human - we’re seeing the increasing movement of software out into society. That’s perhaps the biggest change in the last six years.

What has not changed, I think, is that the art of creating software is still very difficult. Software has always moved its way into virtually every part of society, although more so now. The difference in those six years, perhaps, is that computing is more intimate than it used to be. There’s a phrase one of my colleagues uses: we’re moving from an era of corporate computing and personal computing to an age of ‘ambient computing’, where the software that surrounds us is very much in the atmosphere.

So has the take-up of apps surprised you in any way?

Very little surprises me in this business. People often ask me, ‘Is this a revolution?’, but I’ve always viewed it as an evolution. As an engineering process, we build systems that respond to the societal, technical and economic forces around us, and what’s happened, I think, is that those forces have led us to a particular place where software and hardware are very much in the atmosphere.

You’re the Chief Scientist for IBM Research. What sort of things do you work on, day to day?

Well, there’s no day that’s typical, but there are three things I tend to spend my time on. First, the documentary - IBM gives me tremendous degrees of freedom to pursue that, and it makes sense for both IBM and society at large, because we are trying to open the curtain of computing to society and begin a dialogue, as insiders, about what we know of the beauty and fragility of software intensive systems. That’s one of the things I work on. I also continue to work in the areas of software engineering and software architecture, where I work with customers, in particular on engineering very large software intensive systems. These days it’s more a question of how one transforms existing systems to embrace innovative new things. The moment you write a line of code it becomes legacy, so this becomes a problem not just for the large existing banks, but for the Facebooks and Googles of the world; even those organisations begin to have legacy issues.

The third space where I’m spending my time is the area of cognitive computing. I had the delightful opportunity to work with the Watson team a few years ago. As they had just finished their competition, I was asked to come in and codify their architecture: taking what I knew of that space and looking at their million-plus lines of code, capturing it in such a way that, as it was passed over to the commercialisation teams, they had a blueprint with which to move on.

My work has continued further in the cognitive space because we have developed a non-von Neumann machine that is massively parallel - we’re talking a number of devices mirroring the number of neurones in the human brain. So the question there is ‘how does one architect a mind?’, and that is the question I am pursuing. No small problems.

No, that’s rather a big problem. One of the biggest things in computing is the Turing Test, and whether computers, robots and AI can ever be that convincing. You’ve spoken on it in the past, but have your feelings changed as to whether it could ever be possible?

I believe very much so. I’m a reductionist, in the sense that I believe there is no magic that happens up here [gestures to head], and yet we still don’t quite understand the mechanisms that make it work. Now, this is indeed a theme of my lecture tonight. I do respect that there are alternative views - my wife, particularly, takes the contrary view that there is something special. Sir Roger Penrose takes a similar view; he believes there is something happening down at the quantum level. But everything in my experience tells me that there is no magic, and the mind is indeed computable. I’m not of the ilk of Ray Kurzweil, however, who believes we’re just around the corner; I think we are still a generation or two away from really understanding that.

So it’s not going to happen in our lifetimes?

Unless I can prove to you that I’m actually a machine talking to you.

You are very convincing.

I am, I’ve passed the Turing Test.

I’m not sure I’d be the benchmark, but there we go! As you’ve said, software is going into every aspect of our daily lives - you’ve talked in the past about software that has helped with your medical history, and obviously there’s the software operating planes and so on. Do you think that, with this spread of software, there should be a greater focus on security? Or is there enough of a focus?

There’s a phrase I sometimes use: if builders built buildings the way that software engineers build software, the first woodpecker to come along would destroy civilisation. Software is exquisitely fragile, and the difference in the past few years is that we’ve moved software intensive systems into places where individual lives depend upon them, and so the risks are far higher. So you ask, ‘Do we need to turn up the dial with regard to our worry about security?’ Well, obviously, because as the risk increases we also need to attend to increasing security. When we speak of security, we use the term in a very broad sense, so it’s not just ‘we don’t want software to break’ - if I am flying, it would be very bad for one of these software intensive planes to fall out of the sky - but there are also vulnerabilities that we, the humans in the loop, help introduce, because the public don’t understand the inner workings of these things, and so we as software engineers have to build things that are relatively idiot-proof. We have a responsibility to build systems that are difficult to break; that has not been an economic concern in the past, but it is so now.

Equally, there is the balance of making software easy and intuitive to use.

It’s one of those engineering tensions we spoke of. We want things to be incredibly functional, simple, to cost nothing and to be secure, and yet we are the master illusionists, because we’re building these systems with hundreds of millions of lines of code below the surface, and there is an intrinsic complexity that one simply cannot escape.

Apart from that, what other challenges do you see for the software development industry?

Three come to mind. Firstly, there are hardly enough people to build the software systems we’d like to have. No matter what kind of future you envisage, it relies on software that has not yet been written. I don’t know the situation in England, but certainly in the US we are not graduating enough students to build the software intensive systems we anticipate needing. This is particularly true of women and minorities in computing: as you look around a class, or even at organisations in industry, you will find far fewer women in our field than in comparable engineering fields. That’s something we certainly need to attend to, and it’s going to impact future generations.

The second area is that there are economic costs we are just beginning to understand. I don’t know the exact figure, but the energy cost of a single Google search is said to be about equivalent to boiling a single cup of tea. Now multiply that by the billions upon billions of searches that go on, and that is real energy. For the individuals who do those searches it’s free, but the aggregate costs are very expensive indeed. Someone told me that Google’s data farms consume as much energy as a certain small country. These issues are not inconsequential by any means, and that’s an economic implication we have to deal with.
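As a rough sketch of the scale being described - taking the cup-of-tea equivalence above at face value, and assuming an illustrative ballpark of 3.5 billion searches a day (neither figure is a sourced measurement):

```python
# Back-of-the-envelope for the aggregate-energy point above.
# Boiling a cup of tea: Q = m * c * dT, for 250 ml of water heated
# from roughly 20 C to 100 C.
CUP_GRAMS = 250          # 250 ml of water is about 250 g
SPECIFIC_HEAT = 4.186    # joules per gram per degree C
DELTA_T = 80             # degrees C, from ~20 C to boiling

joules_per_cup = CUP_GRAMS * SPECIFIC_HEAT * DELTA_T   # ~84 kJ

SEARCHES_PER_DAY = 3.5e9   # assumed ballpark, not a sourced figure

# If each search really cost as much as boiling one cup of tea:
daily_joules = joules_per_cup * SEARCHES_PER_DAY
daily_gwh = daily_joules / 3.6e12                      # 1 GWh = 3.6e12 J

print(f"One cup of tea: {joules_per_cup / 1000:.0f} kJ")
print(f"Hypothetical daily aggregate: {daily_gwh:.0f} GWh")
```

Tens of gigawatt-hours a day works out to small-country scale over a year, which is the order of magnitude the comparison above is gesturing at.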

The third is an individual issue. Sherry Turkle speaks about it in her book Alone Together. As we, both as individuals and as a society, increasingly surrender ourselves to computing, it raises the question: ‘What does it mean to be human?’ I live in Maui, and invariably we’ll see a beautiful sunset, whales dancing and, off to the side, a family buried in their iPhones getting their iTans. What have they done with their lives? They have chosen to bury themselves in this technology, as opposed to participating in the world. That’s a change that software has led us to, and in the coming era we need to come to grips with that.

That sounds like a real negative.

It’s not necessarily a negative. One could have said the same thing about the introduction of the car, the radio or, for that matter, the book. The church was not very happy about the introduction of Gutenberg’s press by any means, and if you had looked at the things the pope was ‘tweeting’ back then, you would have seen ‘this is evil’ or ‘this is the spawn of Satan’, and people are now saying the same things about our iPhones. The reality is, or at least my opinion is, that I have great confidence in the resilience of the human spirit, and we will find a way. We will change, and the process will change computing along the way. It’s not a negative, but a reality we have to cope with.

As you’ve said, all new technology is getting embedded in human society.

Yes, and there is no law that’s going to get rid of it. It’s inevitable and irreversible, unless you’re in a certain country where you try to ban these things, but the problem is that it’s really difficult to do so. Look at the results of the Arab Spring, for example, where you find nations trying to ban technology, but it leaks out into the world.

You’ve touched on the lack of people coming into computer science and programming, and this is an issue in the UK as well. Have you heard of the Raspberry Pi?

Yes, it’s wonderful, it’s awesome and it rocks! I remember, growing up, I built my first computer at the age of 12 from little individual parts, and later on, prior to the personal computer, there were lots of devices like that you could just play with. One of the challenges now is that if I look at a device like your tablet, there is nothing behind the screen that I can fiddle with, so the introduction of things like the Raspberry Pi gives you an opportunity to play and hack and try things. I am delighted that the Raspberry Pi exists, because it gives the new generation an opportunity to play, and one should be playful in the presence of this technology.

You are heavily involved in the arts - you play various musical instruments. You talk of programming as an art, but you are a scientist. So is programming truly an art?

It’s truly both, as with any engineering topic. If I look at this building, for example, there is a particular architectural style associated with it, and the architect who designed it would have had a range of possibilities - it could have had a modern French bordello look, or maybe there could have been Grecian urns all over the place. So there is always, in the presence of engineering, the opportunity for degrees of freedom like that. The same is true of software; there is an irreducible kernel of complexity in every software intensive system that demands that kind of innovation. It is inescapable. There is still a lot of software that is purely mechanistic, and one must have the technical chops to carry that out, but it really is the two playing together.

You’ve said you are a big proponent of all things Apple. Is that still the case or have you mixed your ecosystem up at all?

I have some Linux boxes now.

What’s next for you, aside from building the human brain out of software?

That’ll keep me busy for the rest of the week, at least, but I’m not sure what will happen after that. Clearly, though, the work we are doing on the documentary is going to consume me for a while. This is a journey my wife and I started about five years ago, in conjunction with the Computer History Museum. Really, it’s a global story - we hope to tell the stories from the UK that have shaped computing today. We have some great connections with the good folks at Bletchley Park, and it’s not just that; there are many other stories from here that will be in it. In addition to the five one-hour episodes we are planning, we have a book series, an app and a website - basically trying to open it up for the general public. That’ll consume me for a while; that’ll take care of the following week.

You also mentioned the internet of things. That’s going to take computing out even further into the general society, so what are your feelings on that idea?

I’ll address it from a societal and a technical perspective. From a technical perspective it produces some interesting challenges, because now we’re talking about billions upon billions of low-power devices that have little bits of software in them. Programming one of them is like working on an island that’s connected to lots of other landscapes: these devices must be built in the presence of a larger ecosystem, because by themselves they’re not that interesting; it’s where they fit in the ecosystem that makes them make sense. We’re going to see a class of developers - really, already emerging - who have the skills to build for very, very small devices with both memory and power constraints. These are skills quite different to those of the average web developer.

From the societal perspective it really goes back to the notion of ambient computing: these devices will be all around us, in our doorknobs or in individual lights. In fact, there already exists a line of light bulbs that are ‘internet ready’, each individually addressable. This is one reason we have moved from IPv4 to IPv6 addressing: when the internet was first devised, nobody ever imagined we would have so many devices, but now we have as many as grains of sand on the earth. This will last us for a while.
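For a sense of the address-space arithmetic behind that move, here is a small sketch; the grains-of-sand count is a commonly quoted estimate, included only to echo the comparison above:

```python
# Scale comparison behind the IPv4 -> IPv6 move described above.
IPV4_ADDRESSES = 2 ** 32    # ~4.3 billion; exhausted by today's device counts
IPV6_ADDRESSES = 2 ** 128   # ~3.4e38

# Commonly quoted (illustrative) estimate of grains of sand on Earth:
GRAINS_OF_SAND = 7.5e18

print(f"IPv4 addresses: {IPV4_ADDRESSES:.3e}")
print(f"IPv6 addresses: {IPV6_ADDRESSES:.3e}")
print(f"IPv6 addresses per grain of sand: {IPV6_ADDRESSES / GRAINS_OF_SAND:.1e}")
```

Even at one address per grain of sand, IPv6 leaves more than 10^19 addresses per grain to spare, which is the sense in which it will last us for a while.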

Off the back of that, all of these devices will be creating data. How do we deal with the relevant issues: who has the data? How do we keep track of the statistics emanating from our digital toasters? We talk about big data now - terabytes and petabytes - but it will surely grow even bigger, so how do we deal with it?

There’s an American aphorism: 80 per cent of everything is crap, including this statement. So even though we’re generating lots of data, a lot of it is noise. The NSA may care about when I use my toaster, because they find patterns in my life that are important to them, but, by and large, those small things do generate data that may not be that interesting to us. Still, it is the case that we are in an era where we are generating lots and lots of information, and the challenge is not just crunching it, but being wise about the information we select. That’s the challenge of big data analytics these days.

The world has always been complex. The societal implication is: ‘what does this mean for our privacy?’ One could make the case - I do sometimes - that the privacy we have experienced in our generation has been an aberration. If you look at the kind of privacy one had in a small town back in the Middle Ages - really, for most of the duration of human history - we have had very little privacy. Your neighbours would know what you were doing and could hear you through the windows; you would live in large groups with little privacy. It’s really only in the last generation or two that we’ve had the extreme privacy we have now. So that may be an aberration.

It’s also the case that there are degrees to which an individual deserves privacy, and yet we are at the point now, because of ambient computing and the sheer amount of information we shed, where there is friction between what the individual desires to keep private and what we broadcast to the world. We will work it out, but this is a period of friction between the two.

Some say we should move towards transparency - the data’s out there and you know who does what with it.

Perfect transparency would be wonderful, but governments tend to prefer asymmetric transparency, meaning ‘we want you to be transparent, but we won’t be’. It’s also the case that transparency is not always good. For example, my wife and I volunteer to watch over monk seals in Maui, a highly endangered species - in all the world there are maybe 1,400 of them. We know, through public agencies, where certain seals are at a given moment - whether they are giving birth or moulting; that’s visible information - and yet, for protecting the seals, there is value in hiding that information, so they can go off and do what they need to do, as opposed to having lots of individuals crowd around and bother them. So you can make the case that 100 per cent transparency doesn’t work; you have to find the right middle ground.
