Global IT entrepreneur, workplace revolutionary, philanthropist, and the first female President of BCS, Dame Stephanie Shirley CH DBE FREng FBCS, reflects on her time in the industry and considers the origins and implications of Artificial Intelligence.

Age brings few benefits, but one is perspective. I have seen a lot in my time.

I can’t claim to have been a pioneer of computing like Alan Turing, but I was a pioneer of the computing industry. My fascination with computers dates back to 1954, the year Turing died. I was then a lowly scientific assistant at the Post Office Research Station at Dollis Hill. It might sound rather humdrum now, but at the time it was at the cutting edge of technological development.

One summer, I spent my annual leave working unpaid at a rival research centre run by the General Electric Company, where they were developing an early computer called the HEC4. It was a huge multi-part machine that to the modern eye would look more like a fitted kitchen than a forerunner of the personal computer, but I sensed it had immense potential if properly programmed. In this I was unconsciously echoing Turing and his conceptualisation of the computer as a universal machine capable of tackling any task.

In due course I struck out on my own, founding a computer programming company which, with great imagination, I called Freelance Programmers. Eventually I became one of Britain’s wealthiest women, and subsequently a venture philanthropist dedicated to learning how to give money away wisely.

The human foundations of Artificial Intelligence

Back in 1950 Turing famously said: “We can only see a short distance ahead, but we can see plenty that needs to be done.”

Today we may perhaps be able to see further: we certainly have a better idea of what the enormous field he helped to found may be capable of. But, unquestionably, there is still much to be done.

Turing laid many of the foundations of today’s technology-driven world. Friends, colleagues and teachers described him, often in tones of wonderment and awe, as a genius. He reputedly had an IQ of 185; he was a mathematician, a computer scientist, a logician, a cryptographer and a philosopher, as well as a theoretical biologist. He was a genuine polymath.

Like many geniuses, Turing focused on ideas whether they were obviously useful or not - but he also had a strong interest in applying those ideas in practical innovation. Though he was comfortable with his own company, he also valued teamwork, especially in the codebreaking for which he is mainly remembered today: work that helped break the Nazi Enigma code, gave the Allies an edge in World War Two and laid the foundations for the modern computer. Today I sponsor a project at the National Museum of Computing at Bletchley Park, where Turing worked with Gordon Welchman, the head of Hut 6. That hut is still there - and still just a hut.

Geniuses are reputed to be highly adaptable, insatiably curious and open-minded – as well as, in many cases, mildly eccentric. They know what they don’t know. They have a high degree of self-control. Turing fitted the bill on all counts.

Two other traits often associated with genius – being funny and being sensitive to other people’s experiences – he definitely did not have. What he did possess, however – intellectually if not intuitively – was a grasp of the importance of empathy. What is the Turing Test, after all, if not a test of a machine’s capacity for empathy – of its ability to put itself into the mind of a human being?

The gift of human uniqueness

Artificial Intelligence was a theoretical concept when Turing first articulated this famous test. Today, AI has entered the mainstream; it has applications in fields as diverse as health and data science, the military and gaming, autonomous vehicles and social media. However, though the very definition of AI is intelligence demonstrated by machines as opposed to the natural intelligence of animals (including humans), even today there is no obvious route to creating machines that can participate in human culture sufficiently deeply to pass the Turing Test.

Of course, Turing is also remembered for his presumed suicide following the chemical castration intended to ‘cure’ his homosexuality. Today we recoil in horror at his treatment. He received a Royal pardon in 2013 - but let me remind you that 69 of the world’s 195 countries still consider homosexuality a criminal offence.

It is vital to remember and to cherish the lives of people who are different, who are marginalised by society - which may sometimes include the highly gifted. Failure to develop the talents of gifted children deprives us of future innovators, creative thinkers, leaders and outstanding performers. As Turing himself said: ‘Sometimes it is the people no one can imagine anything of, who do the things no one can imagine.’

What role does Artificial Intelligence play in our future?

What stage have we reached in the development of Artificial Intelligence? Where might it go next? What are the opportunities – and what are the potential pitfalls?

We don’t yet know whether we will ever be able to build robots that are perceptually indistinguishable from humans. Is our own intelligence unique, impossible to replicate? Or are we just machines of a different kind? Could everything that we know or do be replaced by a mega computer programmed to drive a sufficiently complicated robot? Interest in these questions goes back to the very earliest days of computing. Turing was the first to pose some of them.

The power of AI is growing exponentially – and there is no finish line. It is exciting, but we must not let our excitement cloud our judgment or cause us to lose sight of the wider issues at hand - some of which pose important legal, ethical and social questions.

Is it fulfilling its promise to increase efficiency and change the world? If so, is it also raising the spectre of mass unemployment as human input is increasingly rendered redundant? Have we outsourced too much decision-making to machines, allowing them to dictate the way we live our lives and the information we consume in ways that may not be entirely healthy?

AI and the bottom line

As a businesswoman, I can understand the potential of AI for companies and entrepreneurs of all kinds. It offers an immense competitive advantage. It can help us build more efficient and productive organisations, and contribute to our bottom line.

But there’s more than one bottom line. There is the financial bottom line, which can be measured by improvements in productivity. But there’s also a social bottom line – and here, it is not always clear that AI delivers only benefits.

Increasingly, there’s also a third bottom line - the environmental impact of an organisation. Given the tremendous demands AI makes on energy and computing power, here too the jury is still out.

In business the focus will always be the bottom line – but as we march forward, we must ensure all three kinds are considered: financial, social and environmental.

Progress and Pygmalion

As machines have become more capable, the nature of the Turing test has also changed. In 1950 the question was: ‘Can a computer convince a human being that it is not a computer but a real person?’ Now, the question is often reversed: ‘Can a human being convince a computer that he or she is a real person, and not a computer?’ I have lost count of the number of times I have been asked to affirm ‘I am not a robot’.

If Turing were alive today, he would surely be astonished at the way his brainchild has developed. Perhaps he would be proud. But he might also sound a note of caution. He might recall the classic story of Pygmalion, the misogynistic sculptor who fell in love with the statue he had carved - a statue that turned into a woman when he kissed it.

In the original narrative, Pygmalion and his statue apparently lived happily together - but in the early 20th century George Bernard Shaw updated the story. In Shaw’s Pygmalion, the statue becomes a Cockney flower girl – Eliza Doolittle - and the sculptor a privileged professor of phonetics, Henry Higgins, who sets out to make a new woman of her by teaching her to Speak Proper. He succeeds - only to discover that his creation has assumed a life of her own, and is far from the biddable creature he imagined. Shaw’s Higgins learns a valuable lesson.

Perhaps, in embracing the Artificial Intelligence we have moulded, we run the risk of repeating Higgins’ mistake. And perhaps one moral of the story is this: the more we seek to develop machines that can think like us, the more we learn about ourselves – and, possibly, the more we may wish to rein them back in.