For the first in a series of interviews with key figures in the IT industry for BCS's 50th anniversary, Brian Runciman spoke to Vint Cerf, Google's Chief Internet Evangelist. This interview also appears in the ebook Leaders in Computing.

What developments in computing do you think were the most groundbreaking or memorable in the last 50 years?

We've gone from simple switching in the phone system to vacuum tubes to transistors to integrated circuits, and that sequence has had a profound effect. It has produced powerful, small devices that use very little power.

Sometimes people think Moore's Law has run its course and pursue new technologies like gallium arsenide, but we keep finding more ways to tweak CMOS technology to make it run faster with less power, and its potential isn't exhausted yet. We will eventually run out of headroom with that technology, of course.

An interesting example with regard to capacity: In 1979 I got some rotating magnetic memory, 10 MB for $1,000, but recently I bought a terabyte for the same cost. To buy that in 1979 would have cost $100 million. I can assure you that I didn't have that kind of money then and, if I had, my wife wouldn't have let me spend it on a terabyte of memory anyway.
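The arithmetic behind that comparison can be sketched quickly (a minimal illustration using only the figures quoted above, with 1 TB taken as one million megabytes):

```python
# Figures as quoted in the interview.
price_per_mb_1979 = 1000 / 10    # dollars per MB in 1979 ($1,000 for 10 MB)
terabyte_in_mb = 1_000_000       # 1 TB expressed in MB (decimal units)

cost_of_tb_in_1979 = price_per_mb_1979 * terabyte_in_mb
print(f"1 TB at 1979 prices: ${cost_of_tb_in_1979:,.0f}")  # $100,000,000
```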

The other dramatic change is in widespread high capacity networks. Computers used to be standalone, but today we have computers in our pockets, embedded in cars, in the house and so on. The internet now has 400 million machines connected to it - not including laptops, PDAs, and the 2.5 billion internet-enabled mobile phones.

So we have, conservatively, two billion devices on the net but only one billion users. If we extrapolate that, we will soon have billions of devices interacting with an unpredictable effect on the network because they'll need to communicate with each other more directly than they do now.

What recent developments by others have impressed you most?

Other than radio as an alternative to hard wiring, I think it's very high-speed broadband - delivered via optical fibre, digital subscriber loops or cable - at consumer prices.

With regard to computers, one of the most interesting developments is dual-core processors, because they make it possible for one processor to watch another in a single computer. This is not just a matter of doubling computing capacity: one processor can raise an alarm if it detects anomalies in the other, which has security benefits.

Another important area of progress is that we now have a heightened understanding of the threat that bugs pose in the online environment. The fact is that we can't write bug-free code, and we don't even know how to predict how many bugs there may be in a given number of lines of code, let alone test for them. This is a high-risk area for networks.

We've even taken language from biology to analogize threats - worms and viruses - and also mythology - Trojan horses. But we've not done a great job of responding to these security vulnerabilities.

It's not enough to take one approach and assume it will work. For instance, we could encrypt everything at the internet level, but then a virus in an email would be encrypted before it was sent and decrypted, intact, at the other end - and the virus would still do its damage. Security is a many-layered problem in IT architecture.

I'm disappointed by our inability to produce more effective software; it certainly hasn't kept up with hardware development. Even things like Python and Java offer only a two- to five-fold, at best ten-fold, improvement in code production.

For example, two laptops might communicate with each other and a server without our knowing the configurations of the machines or the software versions they are running, so it's as though we run an experiment every time we connect. Sometimes it fails.

Another security issue is physical. Someone can easily bypass firewalls by simply walking into an office with a memory stick. It's a very fragile world, very dependent on software working correctly.

We should be asking the research community - and BCS can help here - to do a better job of improving software productivity and to establish disciplines that improve the end product.

Why isn't software as reliable as hardware? It's down to complexity. Hardware is constrained: although registers can hold large values, the transformations are limited by the computer's instruction set. Software programs are vastly more complex.

BCS is pursuing professionalism in IT - what are your thoughts on this?

It's a very laudable position. The mechanical engineers here in the US are licensed, so if a bridge falls down they accept liability - their reputations are at stake too, of course. Something similar is needed in software, although not necessarily professional liability because of its complexity as we mentioned earlier.

In mechanical engineering, software is used to model and test designs, but in software engineering we don't have comparable empirical testing tools. Still, software engineers should be asking 'what can I do to ensure this works?'

I'm not sure how it would work, but taking responsibility is what it's about.

The UK has a problem attracting students onto IT courses often due to a geeky image. What should the IT industry be doing to improve this?

It's interesting that you should ask because I was at a lunch meeting a few months ago, hosted by Cisco CEO John Chambers, where Tony Blair was talking about this. We have the same problem here in science generally.

I don't think we celebrate successful scientists and engineers enough. Children are natural scientists - they explore, experiment and try things out - but our rote learning approach seems to drive this out of them.

We need to rethink how we teach and celebrate science and technology and make successful scientists and engineers more visible to the public. We often don't know who invented so many things. This is not just about PR but about a national dedication to finding ways to make these subjects interesting and important.

I can give you a personal example. In 1957 the Soviet Union got Sputnik into orbit, which provoked a massive reaction here in the US. Eisenhower asked how they got there first. This galvanized the country, launching the Advanced Research Projects Agency and NASA.

The National Science Foundation started a huge educational programme for kindergarten to 12th grade students - I was affected directly. I thought I was going to be a nuclear physicist for a while, but I hit a brick wall with some of Einstein's stuff and ended up in computing.

So in 2006, what's our Sputnik?

I think it could be global warming. It's a threat to literally hundreds of millions of lives and it could happen rapidly. Ice sheets the size of Rhode Island are breaking off Antarctica. In Greenland the permafrost is thawing. The glaciers in the Himalayas that supply fresh water to 2.5 billion people in China and India are threatened.

We need to respond to this like we did to Sputnik, with a massive programme rooted in science and technology. We could address the use of petrol in cars, look at alternative power generation and so on. In France, for instance, 78 per cent of their power is generated by nuclear power stations, but the US hasn't built one for decades.

If we don't do this the increased heat will release CO2 from the oceans, which will exacerbate the greenhouse situation, producing more heat, and we'll end up in a vicious circle.

Looking back on your long career in IT, is there anything you would do differently given the chance?

With the design of the internet, there are several things I would change if I could. The 32-bit address design was insufficient. We are now addressing this with the 128-bit address system, which will last until I'm dead - then it's somebody else's problem.
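The 32-bit design he refers to is IPv4's address space, and the 128-bit successor is IPv6; a quick sketch of the difference in scale:

```python
# IPv4 addresses are 32 bits wide, IPv6 addresses are 128 bits wide.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")    # 4,294,967,296 (about 4.3 billion)
print(f"IPv6: {ipv6_addresses:.2e} addresses")  # about 3.40e+38
```

At roughly 3.4 x 10^38 addresses, exhausting IPv6 really is somebody else's problem.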

Authenticity as a principle would have been nice from the outset of the design. In 1978 public key cryptography wasn't generally available but that would have helped.

As industry and business took to the internet so quickly, much clearer notions for virtual private networks would also have helped. And with the mobile internet world being so popular we could have done a better job of enabling mobile access.

Does the web still need evangelists? How do you see your role?

Very much yes! In the beginning I asked for the title Archduke, but someone pointed out to me that the last one was assassinated, so perhaps it's just as well that it didn't fit with Google's approach!

But we definitely need net evangelism. Even though there are a billion web users, that means that there are still many billions that aren't online. We have a long way to go for everyone to have access.

BCS has a disability specialist group, and Wikipedia says you have a hearing impairment. What should the industry be doing to improve social inclusion?

There are many kinds of disability: physical, cognitive, many levels of visual, hearing and motor impairment, so no single trick will do it. To run a web page in audible form, for example, is very difficult as they are designed to be visual. Audio is serial, vision is parallel, so what should an audio output read first, and then next?

We need to re-examine the design of information systems and develop standards so that tools can help us create fully accessible content. At the beginning of the web, when people first put magazines online they presented them like they appear in print, but it just didn't work - that illustrates the problem, because ways of using information differ so much.

To address this we need a deeper understanding of the full range of impairments and a very deep understanding of how graphical user interfaces work. The Bobby standard is a good example of something going in the right direction.

Which past discovery would you have liked to have made yourself?

That's a very interesting question. I don't really have discovery envy, but I was quite impressed with the man who figured out how short-term memory works and what changes happen in the brain with the passage of ions in and out of the cell membrane, which makes a permanent change in the configuration of the surrounding molecules.

He said to me that he realized what was happening at three o'clock in the morning and was amazed to think that he was the only person in the world at that point who knew this. I wish I could say that. We don't really understand the brain and I'd like to know what the mind is.

Who in the IT industry inspired you or was a role model for you?

It's no one person. It was more a question of being immersed in an environment where science and technology were interesting and important.

We need to distinguish between inspiration and encouragement. I had endless encouragement from teachers. Once I complained to my fifth grade teacher that maths was too boring; he responded by giving me a seventh grade textbook to work through. I was very grateful for his willingness to challenge me.

You're called the father of the internet, Sir Tim Berners-Lee the father of the web - so what's your relationship to the web? Father-in-law? Uncle?

That's almost like saying that in academic circles students are children of their professors and their students are grandchildren.

The web is one application of the internet. A rough analogy would be to say that Bob Kahn and I have built a road system and Tim and his colleagues have built the cars.

Both have contributed to the infrastructure at different layers of a multi-layered cake.

Saying 'father' risks not crediting others for their work - like Larry Roberts' and Bob Taylor's work on the ARPAnet. There's lots of credit to go round. I wish that more of the people who worked so hard had received more credit.

What's your favourite website, other than the obvious?

It is the obvious - not because I work at Google but because it's the tool I, and many others, use. I continue to be amazed by the amount of information on the net. It's proof that people get satisfaction from knowing that others have benefited from information they've posted.

And the fact that some have monetized it interests me, because that allows for sustainability. Some of my favourite websites are the Smithsonian's and NASA's, and, as I'm a science fiction fan, I download e-books too.

What are the future problems for the net?

Imagine it's 3000 AD and, while surfing, you come across a PowerPoint 97 file. Does Windows 3000 know how to interpret it? This is not a dig at Microsoft - support for a lot of software is eventually retired.

So what can we do with unsupported formats? How do we get permission to run software on the internet? What if we need a different operating system to run it? We have an intellectual property challenge to preserve the interpretability of the bits themselves and we have the problem of the digital storage medium.

What advice would you give to a budding IT professional to get the best out of their career?

There are enormous research problems to be addressed, and then plenty of work designing applications for these systems. The thing I found most attractive about software development is that you can create your own universe. Software is a malleable clay for creating and inventing.

Is the phrase 'a career in IT' the correct formulation? What drove me was making computers and remote computers do things via programs or a network. It was fascinating to me to know that you could invent things that have a consequence for other machines or tens of millions of people.

Creative people are generally not driven by a desire to change the world, but are intrigued by a particular problem - what if I did this? what if I connected these? It's a curiosity itch.

If you think IT is the career for you, you need to ask 'what is it I love to do?' Interest should be the prime driver.

I think software has the highest potential for a career because it's an endless frontier for invention and we need an increasing amount - particularly for mobile devices. It is also the most difficult area - who was it that said 'opportunity lies on the edge of chaos'? (Mr. Spock? Ed)

What is Google getting by buying YouTube? Silly home movies or something web-changing?

The internet is shifting video production from the entertainment industry to consumers, but the quality varies greatly. There are more editing tools now, though, so that will improve.

For Google the YouTube acquisition is an opportunity because it's a medium with a substantial clientele. Advertising revenue will work well in this medium and that will grow our footprint, but it was also a defensive move to keep competitors at bay.

Do you think development of the semantic web is the next big thing for the net?

This is a conundrum. Sir Tim is very eloquent on the utility of semantic tags and I agree with him. Google could do a better job of presenting relevant search results with them, but where will the tags come from? What vocabularies shall we use? Who supplies the tags and will it be done manually, which has a scaling problem, or automatically? Hopefully the latter.

Today HTML and XHTML are usually generated automatically, so we can imagine the same for semantic tags. But we don't have them right now.

Meta-tagging has been abused by web page publishers to draw people to sites under false pretences, which has meant that some search engines ignore meta-tags. This is about getting authenticity into the web - digital signatures would be a partial help there.

Wikipedia mentions a rumour that the term 'surfing the net' originated from the first data sent across the network by you. Is that true?

It is not true. It comes from the idea of surfing an ocean of information.

Back in 1989 the San Diego academic community network was going to be called SURFnet, but a company in the Netherlands already had the name, so they couldn't use it. They called to ask if I minded them using CERF, for the California Educational Research Foundation. I wondered whether it could be embarrassing if they messed up, but parents name their kids after people, so I thought 'why not?'

The epilogue is that the company was eventually acquired by AT&T, and when I asked if I could have my name back they said no - although a little more strongly than that.

Perhaps now I'm at Google they'll say yes, or maybe just quote me a big number.

As if the internet weren't big enough, I believe you're involved with the interplanetary internet?

This has gone very well. We started this in 1998 with engineers from the Jet Propulsion Laboratory so we could support any space exploration going on around the world, not just US projects. At the moment space communication is done via point to point links and is very slow, so we wanted to introduce the flexibility of an internet-style approach.

However, the internet's standard protocols are not a good tool for interplanetary communication - when you consider that it takes a signal, even at the speed of light, 20 minutes to get to Mars, for example, and then another 20 minutes back. You could go to lunch after clicking a link.
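The delay he cites follows directly from the Earth-Mars distance and the speed of light; a quick sketch (the distance figures below are approximate astronomical values, not from the interview):

```python
SPEED_OF_LIGHT_KM_S = 299_792  # km/s, rounded

# Approximate Earth-Mars separations in km; the distance varies
# continuously as the two planets orbit.
closest_km = 54_600_000
farthest_km = 401_000_000

def one_way_delay_minutes(distance_km: float) -> float:
    """One-way light-travel time in minutes."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

print(f"Closest:  {one_way_delay_minutes(closest_km):.1f} min")   # about 3 min
print(f"Farthest: {one_way_delay_minutes(farthest_km):.1f} min")  # about 22 min
```

At the widest separation a round trip takes over 40 minutes, which is what rules out the internet's ordinary chatty, low-latency protocols for deep space.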

We can use standard internet on the spacecraft and planetary surfaces, but we have had to design a new set of protocols to link these independent internets together across interplanetary distances.

NASA is now planning a new architecture for deep space communications with new standards to get interoperability between old and new mission assets, with the interplanetary internet underpinning it. That means we can accrete an interplanetary backbone.

Thank you very much for speaking to us.

Quick Q's

Open source (OS) or proprietary?

Generally open source - it doesn't necessarily mean free, it's just that you can see the content of the software. I will say that OS is not the solution to everything if it isn't being maintained. It needs a support mechanism.

Blackberry or PDA?

I'm using a Blackberry, although I consider it to be a PDA. I also have a mobile phone.

The latest PDAs have very high-resolution screens but the type is very small - maybe anyone over 26 needs a magnifying glass. It does lead one to think about voice interfaces.

Apple or PC?

Both. I have Macs at home and an IBM ThinkPad - although I am a fan of Macs.