As AI’s adoption grows, BCS’ February 2023 Policy Jam questions the implications for jobs, ethics, professionalism and regulation. Claire Penketh reports.
The latest AI sensation, ChatGPT, has gripped the public imagination. ChatGPT's ability to respond to questions in a seemingly conversational way and produce human-like responses is an apparent game-changer.
ChatGPT was released for public testing in November 2022 by OpenAI. The firm is also behind DALL-E 2, an AI that produces generative art, and Whisper, an open-sourced neural net which focuses on English speech recognition.
By January 2023, over 100 million people were using ChatGPT. Not to be outdone, Google is piloting Bard, its rival to ChatGPT.
Essays, emails, articles, love letters, poems, job applications, debugging code – you name it, and someone, somewhere, is harnessing these new AI tools. The only major drawback seems to be that they aren’t always factually accurate – yet.
AI is everywhere
AI affects all aspects of our lives. Algorithms in our social media and music streaming apps give us more of what we like; in healthcare, AI is used as a diagnostic tool. In the workplace, it's becoming more popular in recruitment and in streamlining administrative tasks.
On the BCS Policy Jam Panel to discuss this were:
- Sarah Burnett, FBCS, Intelligent Automation Industry analyst and author
- Andrew Pakes, Deputy General Secretary at the trade union, Prospect
- Sam Sharps, Executive Director of Policy for the Tony Blair Institute for Global Change
- Hema Purohit FBCS, EMEA CTO, Healthcare and Public Sector, Microsoft
BCS CEO Rashik Parmar FBCS was present, and Dan Aldridge, BCS Head of Policy, hosted the webinar.
Guiding the role of AI
Rashik set the scene about what BCS, as the professional body for IT, can do: ‘We have to provide careful guidance about how we use these technologies.
‘These are exciting times. But, as these major changes happen, we must consider how we offer ethical, appropriate, and structured advice. The impact of generative technology is affecting many spaces. This allows us to address some of the systemic challenges in the workplace.
‘Trust is important. So, workers' professionalism and competence are essential. What does competence look like? How do we understand competence as this technology moves forward?
‘The second part is around the ethical use of AI itself. Just because you can use AI doesn't mean you should. How do you select where to apply this technology and ensure its correct usage?
‘The third priority is recognising that a wide range of AI technology will significantly impact society. It carries risk, including, with some of these technologies, risk to human life.’
Rashik highlighted the example of autonomous cars, where technological errors could have fatal consequences: ‘As professionals, how do we build systems in the right way that serve humanity and address the needs of society?’ he asked.
Managing AI risks
Hema Purohit FBCS is a CTO whose remit is mainly in healthcare, looking at strategy, innovation, and delivering business outcomes. She said: ‘The emerging technology areas are hugely exciting; there's a lot of innovation and creative talent. We need to be careful, making measured, informed decisions regarding how we progress.
‘Make sure you've tested it. Make sure you've validated it right, and only when you trust it do you move on to the next scenario.
‘The impact of getting it wrong is that you could take a life in, for instance, health care.’
One of the challenges she highlights is inbuilt bias: ‘Machine learning pushes the human to the back. And I mean that in the nicest way. We don't see the teams of people behind these technologies. It's not another machine that creates AI or machine learning, it's individuals.
‘The bias can start from the design point. Is there a representative group of people creating the AI? Is the testing correct and has it been validated? An iterative cycle of continuous testing should be in place.
‘We need to ensure the right feedback is going in, listened to and acted upon.’
Sarah Burnett FBCS has published a BCS book called The Autonomous Enterprise – Powered by AI. The book explores the potential consequences of automation and examines how the world of work is changing. Repetitive, transactional processes and basic decision-making are becoming more automated.
She argues people will still run the enterprises of the future, but the style and the nature of their work will change and be augmented by AI.
Regarding the bigger picture around the rapid introduction of AI, Sarah said there could be unexpected consequences: ‘We've seen what can happen when we get caught unaware of the potential harm that could come from things that aren't managed correctly.’
‘Take the genie of data-sharing over the internet and apps. That was out of the bottle before we knew companies were using our data. We shouldn't let that happen in the field of AI.
‘But the problem is that we are at this early learning stage of AI and won't necessarily know what will happen. That's why it's important to implement good ethical practice at this early stage.’
The role of government
As the technology surges ahead, Sam Sharps, speaking as a representative of a policy think tank, asked whether government ministers understood AI’s potentially transformational powers: ‘Broadly speaking,’ he said, ‘the tech and the policy world have broken down to a degree. There are some positive signs in the recent restructuring of the government's machinery and departments, but the problem remains.
‘I'm not one of these people who lays all the blame at the politician's door. But I go to conferences and hear people say, “well, they just don't understand what they're talking about”. There are many examples of slightly foolish things that politicians and ministers have said over the years. There's a responsibility on both sides to find a common language and understanding.’
He said sometimes the appetite to look at the longer-term implications fades following the initial fun of playing with the shiny new tech toy. Take ChatGPT, for example. He said: ‘It's a fascinating example of the hype-cycle moving faster and faster. There was a lot of anticipation and giddy excitement around its launch. Everyone has been having a bit of fun with it. But almost within weeks, people were saying, “well, it's not quite that great, is it?”’
Beyond ChatGPT’s short-term impact, Sam said there are medium-term questions to consider, such as the thorny issue of regulation: ‘There's a knee-jerk assumption that regulation is kind of bad, and government should get out of the way and let people innovate.
‘In reality, there are plenty of people working in this space who would quite welcome regulation. It sets some standards, it allows them to invest and gives them some certainty about where they can operate or get insurance.
‘It can lead to a sensible conversation around guidelines about how to prevent abuse and what steps to take to bring products onto the market.
‘A longer-term conversation could also consider how it could enhance, for instance, public sector services and how jobs are affected.’
Will robots steal our jobs?
How jobs will be affected by AI was a lively topic during the webinar, with many of the ninety-plus viewers offering questions on the subject.
Andrew Pakes is one of the deputy general secretaries of the Prospect union, which has around 150,000 members, mainly engineers, plus tech and creative workers.
Prospect, he said, has a unique perspective as it represents workers who are the designers and creators of AI, as well as those in engineering and other professions who apply the new technology.
He was clear that two considerations had to be in place: trust and change management. However, he said that the UK didn't have a good record on either when it came to industrial relations: ‘If you look at this historically, the UK has been really bad at managing economic and technological change. We've not been very good at transitions.’
He said fundamental issues, such as accountability regarding risks and harms, had to be discussed, along with consultation around procuring new AI systems and their impact on workers.
He said: ‘There's a great phrase, often used in the disability movement: “nothing about us, without us”.
‘The people who use technology should be involved in its application and development. That's a wake-up call for British capitalism rather than just the technology itself.’
The BCS Policy Jam will be back in mid-March looking at Net-Zero.