Award-winning digital artist Paul Brown offers a personal perspective on how AI could grow closer to achieving sentience, its relationship to human evolution, and how it might be affected by climate change.

The question of whether AI can really create art is a popular one, especially with recent developments in readily accessible generative AI.

In 2002, Rob Saunders completed his PhD on intelligent observers: not systems that make art, but systems that perceive art. Any period of history and the art it produced are deeply intertwined; artists are very much embedded in their time, and the ability to perceive and comprehend is as important as, if not more important than, the ability to create. For AI to make art, it would need that same ability to perceive and assess, which in itself requires self-awareness. But where are we on the path to self-aware AI?

Sentience and quantum computing

Moore's law says that the number of transistors that can fit on a chip doubles every two years, and as transistors shrink and more are packed onto a chip, it becomes cheaper to make, runs at lower temperatures and higher speeds, and uses less power. In 1985, a Cray supercomputer was hand-built, the size of a small room and cost about £16 million; smartphones are now more powerful than that, fit in your pocket and can be bought for less than £1,000.
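As a back-of-the-envelope illustration of what that doubling implies (the years and the two-year period below are my own assumptions for the arithmetic, not figures from the article):

```python
# Back-of-the-envelope Moore's law arithmetic: transistor counts
# doubling every two years. Figures are illustrative assumptions only.

def doublings(start_year: int, end_year: int, period: float = 2.0) -> float:
    """Number of Moore's-law doublings between two years."""
    return (end_year - start_year) / period

n = doublings(1985, 2025)
print(f"Doublings since 1985: {n:.0f}")
print(f"Implied growth in transistor count: ~{2 ** n:,.0f}x")
# ~20 doublings gives roughly a million-fold increase, which is why a
# sub-£1,000 smartphone can outpace a £16m, room-sized 1985 Cray.
```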

All that progress has come out of Moore's law, but it is still von Neumann architecture. Almost all the computers we have are based on it: an incredible number of switches operating at phenomenal speeds makes computing appear almost magical, but it is still a linear process. We currently get around that by putting multiple processes in parallel; an eight-core machine has eight CPUs working in parallel and communicating with each other, and so on. Birkbeck, University of London, just bought a 128-core machine for the art history department. But the approach has limits: inter-process communication can quickly become a substantial overhead.

Quantum computing is going to be the next major revolution in computation. Implicit in quantum computing is massive parallelism: not 128 cores, but millions of processes working in parallel. Though it's not yet practical, the technology is starting to happen in earnest, and once AI moves into the quantum domain the development of sentience becomes a serious prospect.
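To see why inter-process communication limits classical parallelism, here is a toy model: Amdahl's law with a linear per-core communication cost bolted on. The serial fraction and communication cost are arbitrary assumptions of mine, chosen only to show the shape of the curve, not measurements of any real machine.

```python
# Toy model of parallel speedup: Amdahl's law with a linear
# communication cost added per core. Illustrative assumptions only.

def speedup(cores: int, serial_fraction: float = 0.05,
            comm_cost_per_core: float = 0.002) -> float:
    """Speedup over one core for a program with a fixed serial
    fraction, where communication adds cost as cores increase."""
    parallel_time = serial_fraction + (1 - serial_fraction) / cores
    return 1.0 / (parallel_time + comm_cost_per_core * cores)

for n in (1, 8, 128, 1024):
    print(f"{n:5d} cores -> speedup ~{speedup(n):6.1f}x")
# Speedup grows far more slowly than the core count, and past a point
# adding cores makes things worse: the communication term dominates.
```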

The place of AI in evolution

Humans are perhaps naturally egotistical; we view ourselves as important. But to the process of evolution, humanity is just another cog in the machine, and the next cog in the machine is going to be artificial intelligence.

We can't know what form it will take, but at the point where AI develops sentience we will be working with an artificial agency that is intellectually more powerful than we are, and which will be able to evolve rapidly and to exist and function in ways that we can't. For example, sending people to space is extremely difficult: humans can't function in a vacuum, we respond poorly to hazards such as radiation and we are, essentially, short-lived. An artificial intelligence, however, could be impervious to these problems. Von Neumann showed in the 1950s that machines could, in principle, replicate themselves, and that ability alone would enable AIs to grow, learn and explore the universe more effectively than us. From the egotistical human perspective, the question is: what's going to happen to us?

Evolutionarily speaking, when a superior species evolves, the inferior species usually dies out. Humanity is already putting itself at risk of that by endangering its own habitat: the Paris Agreement set the goal of limiting the rise in global temperatures to 1.5°C above pre-industrial levels, but recent studies suggest we will pass that threshold very soon.

Think of it like this: if you fill a jar with a sugar solution, add some bacteria and screw it shut, it's a closed ecosystem. The bacteria are very happy with their massive food supply, so they multiply rapidly. Eventually they produce too much effluent, and they drown in their own waste products. You can use that as a metaphor for the planet: what humans have done is produce too much effluent, which is causing a temperature rise that will, sooner rather than later, have catastrophic consequences that will directly or indirectly kill many, if not all, of us. A key effect will be rising sea levels: many major hubs of civilisation, such as London and New York, will be underwater. Many other areas will become uninhabitable for animals as well as humans. Given the almost total lack of effective action on climate change from major governments and corporations, the fabric of civilisation as we know it will die out more rapidly than we would like to admit, even if humanity itself persists.
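The jar can be turned into a toy simulation. Every parameter below (growth rate, waste toxicity, thresholds) is an arbitrary assumption chosen only to illustrate the boom-and-collapse shape; it is not a model of real bacteria, let alone the climate.

```python
# Toy closed-ecosystem simulation: the population grows while food
# lasts, but its accumulating waste eventually becomes lethal.
# All parameters are arbitrary, illustrative assumptions.

population, food, waste = 1.0, 1000.0, 0.0
GROWTH, WASTE_PER_UNIT_EATEN, TOXIC_LIMIT = 0.5, 0.8, 400.0

for step in range(25):
    eaten = min(food, population)  # each unit of population eats one unit
    food -= eaten
    waste += eaten * WASTE_PER_UNIT_EATEN
    # Growth requires food; waste beyond the limit kills proportionally.
    toxicity = max(0.0, (waste - TOXIC_LIMIT) / TOXIC_LIMIT)
    population = max(0.0, population + GROWTH * eaten - toxicity * population)
    if step % 3 == 0:
        print(f"step {step:2d}: population={population:7.1f} "
              f"food={food:7.1f} waste={waste:6.1f}")
# The population booms while food is abundant, then collapses once its
# own accumulated waste crosses the toxicity threshold.
```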

Climate change and the future of AI development

The terrifying proximity of catastrophic climate change also raises the question of whether humanity will have time to develop sentient AI. There have been countless missed opportunities in evolution: almost all mutations die out, even valuable ones. We could miss the boat. Personally, I would expect a superior intelligence to be developed within 50 to 100 years, which is also the timespan within which we can expect climate change to escalate to an unbearable level. The majority of those I've worked with in the computational field share my confidence that we will successfully produce a superior intelligence. The biggest differences of opinion concern how soon: some say five years, some 500.

The truth is that we don't know; there is always a possibility that we will miss that boat. But look at the size of the universe: thanks to the James Webb Space Telescope, we now know it's much bigger than we thought. There are billions of galaxies, each containing billions of stars; it's very unlikely that there is only one life form on just one planet. So if humanity does miss the boat on developing a superior intelligence, we can at least trust that another life form will succeed.