The road ahead: navigating AI's pitfalls

April 2018

Chris Yapp FBCS reflects on the uneven history of artificial intelligence, and looks ahead with ever-hopeful eyes.

Back in the early 1980s, a leading international authority on artificial intelligence (AI) came to my workplace and gave a lunchtime lecture in which he declared:

‘Within five years there will be an expert system in the boardroom of every FTSE 100 company.’ It didn’t happen.

It was early proof for me that while progress in the technology field can feel rapid, deployment lags behind: potential users struggle to grasp the utility, the benefits are hard to explain, and the critical mass of skills needed to realise those benefits takes time to build.

Yet, at the same time, I can look back at the 1980s and be amazed at how far we have come since that time across the IT sector.

Amara’s law, named after the late Roy Amara, can be expressed as: ‘The history of technology is the overestimation of the short-term and the underestimation of the long-term consequences.’

This matters because the industry often, in my experience, falls short of its full potential: it fails to grasp the meaning of this ‘law’ and lacks an understanding of its own history.

Two examples from other areas illustrate my point. In 2012, at a conference I was attending, a speaker was introduced as a pioneer in the ‘internet of things’ (IoT). He had graduated around 2010. Yet I can remember discussions and demonstrations in the 1990s around IPv6, Universal Plug and Play, wearable computing and other fields which have now been subsumed within the IoT sphere.

Similarly, a speaker at a conference more recently dated the start of social media to 2005. Hold on! The WELL (Whole Earth ‘Lectronic Link) dates back to the 1980s, Howard Rheingold was talking about virtual communities in the early 1990s, and the first online communities in California emerged in the late 1960s, before the home modem!

The danger of overpromising

Now, for me, the recent advances in artificial intelligence and machine learning (ML) are very impressive. Unfortunately, there is still too much hype and overpromising of what can be achieved, which is why detailed examples are so important in realistically grounding the art of the possible, both in the short-term and medium-term future.

Inevitably, as the professional body for IT, BCS is concerned with the developments in hardware and software that underpin the potential of AI and ML heading into the future.

For me, it is important to see that, as technology becomes more pervasive, we grow increasingly aware of the societal challenges that emerge from IT developments. Legal frameworks, such as the GDPR and intellectual property rules, are central to realising the economic and social benefits of AI and ML. The privacy and security impacts are well established within our discipline.

More broadly than that, the ethical issues of autonomous machines, such as self-driving cars, require IT professionals to move beyond the technology domain. If an autonomous vehicle can only avoid a child in the road by putting the passenger’s life at risk, what should it do?

If AI-driven algorithms discriminate on grounds of gender, for instance, because the data suggests that a woman will pay a higher price than a man for the same goods, the reputational risk for any company deploying such algorithms could be fatal.

It’s clear that, if the UK is to maintain and develop a leadership position in AI and ML, investment in technology researchers is urgent. If we are to be leaders in deploying AI and ML, to tackle domains with real benefits to society and the economy, it is important that we take a broader perspective on the skills needed. I have mentioned the legal and ethical skill sets already.

Big data and data science

In the field of ‘big data’, I have seen too many projects fail to thrive because of a lack of ‘data scientists’. I’m still not clear that we have a coherent and robust definition of a role that is evolving in the light of AI and ML developments. For instance, my experience has been that domain-relevant data science skills are the real challenge.

In health apps, for instance, understanding health data does require medical insight, not just technical understanding of data. Putting together multi- and interdisciplinary teams has been a challenge over my whole career.

We have a responsibility to reach out to the professions, such as law and medicine, to ensure that the future professionals, in their domains, have the underpinnings within their initial and CPD qualifications to exploit the potential we create. It is important that they help us shape the technology developments rather than become passive users of what we create.

For me, however, the most significant challenge is going to be energy security. It is argued that, in 2018, the bitcoin network will use as much electricity as Argentina.

When I read articles about AI in 2030, what concerns me is a lack of understanding of the power requirements to scale the applications to their promise.

The industry has an honourable track record in reducing its carbon footprint through progress in the technologies themselves. Yet, at the same time, the increasing deployment of technologies increases our share of the world’s electricity consumption.

Try this as a thought experiment: If we could build a single artificial general intelligence (AGI) to match a single human brain, to the best of our current understanding of the human brain, with the most advanced components we have, how many gigawatts would it require to run it? When AlphaGo beat a champion at Go, how much energy did each require?
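The arithmetic behind this thought experiment can be sketched with rough, commonly cited figures. These are order-of-magnitude assumptions, not measurements: the human brain is usually put at around 20 watts, and press estimates for the distributed AlphaGo system in its 2016 match ranged up to roughly a megawatt.

```python
# Back-of-envelope comparison, using commonly cited (and debated) figures.
# Both numbers are assumptions for illustration, not authoritative data.

BRAIN_WATTS = 20            # approximate human brain power draw
ALPHAGO_WATTS = 1_000_000   # rough upper-end press estimate, ~1 MW

# How many times more power did the machine draw than its human opponent?
ratio = ALPHAGO_WATTS / BRAIN_WATTS
print(f"AlphaGo estimate vs. brain: {ratio:,.0f}x")  # 50,000x

# Scaling the thought experiment: if an AGI needed that much power per
# 'brain', a million such agents would draw:
gigawatts = (ALPHAGO_WATTS * 1_000_000) / 1e9
print(f"One million agents: {gigawatts:,.0f} GW")  # 1,000 GW
```

Even if the AlphaGo estimate is off by an order of magnitude, the gap of tens of thousands to one makes the point: intelligence at brain-like energy efficiency remains far beyond current hardware.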

Powering the future

While our dependence on non-renewable sources of energy is diminishing over time, can we make better progress in reducing our energy needs? What do you think we need to do to fully deliver on the potential? Would it be 50 per cent of our current unit energy requirements, 10 per cent or one per cent? I simply don’t know.

None of this should be thought of as pessimism. It is in tackling these wicked problems that the UK can create the leading-edge skills not just in the building of AI systems, but also in deploying them to benefit society and business.

My experience is that change can be a long time coming, but when it comes it can be more rapid than anyone can imagine.

I’ve wanted to talk to HAL, the computer in 2001: A Space Odyssey since I saw the film in 1969. It’s been a long time coming, though there have been times when I’ve seen demonstrations that made it look closer than it turned out to be.

After false dawns and disappointments, it does feel that this time it may be different. The articles in this magazine demonstrate real progress at the technology level, as well as a wider engagement with the real-world challenges. As an industry we have come a long way in 60 years.

So, here’s the challenge for everyone working in, or associated with, AI and ML. What are the big challenges we need to address now, to deliver for the BCS 70th anniversary?

What alliances do we need to make with ethicists, philosophers, lawyers, psychologists, politicians, universities, and many others to build a highly skilled and rounded workforce in both the supply and usage of AI-based systems?

Let’s make it happen.