An exclusive BCS CIO Network event saw attendees explore the importance of ethics, professionalism and trust within IT. The meeting took place against the backdrop of the Post Office Horizon IT scandal and AI’s ever-quickening pace of development.

The latest CIO Network event saw tech leaders across industries explore the importance of ethics, professionalism and trust in IT.

The meeting took place against the backdrop of the ongoing Post Office Horizon IT scandal and on the day OpenAI released its latest model, GPT-4o, which can better understand and process speech, images and video.

The generative AI’s ever-growing capabilities are a vivid sign of the AI industry’s galloping technical progress. Of course, the flawed Horizon IT system is an equally indelible reminder of the terrible human consequences of critical software going wrong.

The CIO Network meeting sat very much at the point where these two contexts overlap. Fittingly, it explored the importance of:

  • Professionalism: ensuring that the people who design and engineer the technologies upon which the public relies are suitably skilled, knowledgeable, and up-to-date
  • Professional registration: under what circumstances should a Chartered IT Professional (CITP) oversee highly critical IT projects? CITP is a public register of professionals who have achieved the highest standard of technical competence and professionalism
  • Ethics: it is essential to equip IT professionals with the skills and knowledge to judge right and wrong and to back those workers up with whistleblowing channels
  • Trust: if IT projects fail and people are harmed, the public may withdraw its trust in the IT industry

The group offered the following key insights:

  • How people and, ultimately, organisations behave when confronted with ethical challenges often begins with culture. Does the business encourage people to speak up, and what are the consequences of making mistakes — even when it comes to ethical decisions?
  • Many businesses lack the processes needed to manage ethical challenges correctly and in an agile manner. This leads to a need for further planning: preparing for what happens when ethical challenges become a business and reputational reality
  • Organisations must consider how managers are incentivised, and ask whether those metrics have an embedded ethical component
  • With some effort and access to free platforms, almost anybody can learn to code and potentially get a software job. But in high-stakes scenarios like health and social care, we should be wary of enthusiastic amateurs
  • We must reconsider education and how we can empower everyone — not just young people — to flourish in an AI world
  • Organisations should be required to publish ethical policies on using AI and other high-stakes technologies
  • There is a critical need to build whistleblowing channels independent of and external to businesses

Summing up the CIO Network, it was felt that the active presence of ethics, professionalism, professional standards and legislation is key to ensuring that everybody in society benefits from AI. To make IT good for society, we need all four pillars working together.

BCS Report: Living with AI and emerging technologies

Professor Bernd Stahl FBCS was interviewed by Claire Penketh FRSA MBCS, BCS’ Senior Policy and Public Affairs Manager. They discussed AI ethics, drawing many ideas from BCS’ AI ethics report.

Claire Penketh chats with Professor Bernd Stahl

Speaking at the CIO Network event and turning to the need for IT professionals working on high-stakes AI projects to be registered, Bernd said, ‘Everybody can take an aspirin and go to the pharmacy and buy some over-the-counter drugs. But if you want open heart surgery, you must have a high level professional, and I think the same should be true for IT. Everybody can become a programmer. But if you go into AI development, that could change the nature of society. If that happens, you should have a higher level of qualification. We should be thinking about regulation.’ We should, in short, be wary of enthusiastic amateurs, particularly in high-stakes scenarios.

On regulation leading to red tape, Bernd said: ‘I think we can probably all agree that we don't want bureaucracy. It doesn't make anybody happy, but at the same time, bureaucracy can be significant and beneficial. We're all happy that we have fire alarms, and even though we are annoyed when they go off, we know that if there's a fire, we’ll be safe. I think the same principle needs to be applied to IT. Where necessary, and insofar as possible and reasonable, we must consider what steps can be taken and which models can be applied… How we manage the process.’

Professor Bernd Stahl offered other recommendations and insights:

  • Despite its seemingly endless popularity and the surrounding hype, we struggle to provide a single, coherent and agreed-upon definition of AI
  • The public is aware of and concerned about AI’s potential negatives
  • From central government to companies and professionals, there is a need to raise awareness of AI ethics and best practices, and to understand them. Education, standardisation, and regulation all have a part to play
  • There is a strong case for professional registration among practitioners working in fields with a potentially high risk of consequences for the general public. The key is to consider how we motivate practitioners to participate — this may be through legislation
  • Having an independent whistleblower’s office is critically important. For trust to exist, there needs to be accountability for people who betray their responsibilities to the public. There needs to be ‘a line from doing something to consequences’. However, we must be careful to ensure this doesn’t become a means of scapegoating professionals who have made an honest mistake

Background on ethics and AI

The report that Claire and Bernd discussed is called Living with AI and emerging technologies: meeting ethical challenges through professional standards.


It was produced ahead of the AI Safety Summit, held at Bletchley Park in November 2023. The report was prepared by Professor Bernd Stahl FBCS and Gillian Arnold FBCS from BCS’ Ethics specialist group, with support from BCS’ Fellows Technical Advisory Group (F-TAG).

To create the report, BCS’ Ethics specialist group surveyed IT professionals to identify the AI and ethical challenges that practitioners face.

The findings show that AI ethics is a topic that BCS members see as a priority, that many have encountered personally, and that is problematic in many ways.

Key findings included:

  • 88% of participants believe it is essential that the UK government takes a lead in shaping global ethical standards in AI and other high-stakes technologies
  • 82% think that UK organisations should be required to publish ethical policies on their use of AI and other high-stakes technologies
  • 19% of those questioned have faced an ethical challenge in their work over the past year

Panel discussion

Julie Bailey MBCS, Director of Corporate Engagement at BCS, chaired a panel discussion that explored the meeting’s core concerns: ethics, professionalism and trust in IT. Speakers included:

  • Dai David FBCS, Group CTO, Arup
  • Simone Steel FBCS
  • Somayeh Aghnia FBCS, Co-Founder and Chair, Geeks Ltd


The CIO Network panel discussion

The panel felt that CIOs often play a central part in defining, calibrating and maintaining an organisation’s ethical compass. In many ways, this feels like a natural part of the CIO role, which sits between the technology industry and the business it serves. A CIO needs to know about technology’s latest trends, understand how their business works, and negotiate pragmatically between the two. Often, the CIO is seen as a consultant who can speak to what’s technically possible and what’s strategically necessary from a balance sheet perspective.

This consultative process around ethics often begins with conversations about risk and mitigation — ‘if we do this, what will happen?’ is the cornerstone of such meetings. Usually, these discussions default to calling in the legal team and asking whether what the organisation is considering would break any laws. These questions are undoubtedly necessary — poor business decisions often make for the best newspaper headlines.

When deciding right from wrong, however, actions can be legal but unethical. For example, by looking at data, a financial organisation might quickly spot customers who are in financial distress and, consequently, more likely to default on a loan. The question is: should the organisation, at that point, stop lending and withdraw credit?

Far from being just a thought experiment, this real-world example shows how ethical decisions are often far from clear-cut. Halting lending would be legal, and it may well protect the lender from losses. But what if the borrower isn’t behaving irresponsibly and is, instead, suffering from mental health issues? What would be the real-life consequences of withdrawing lending support?

Laws keep you out of jail but don’t always help people choose correctly when faced with the most challenging decisions. Ensuring people and organisations make the best ethical decisions often starts with culture.

It’s hard to achieve, but businesses need to create a culture where people can speak up and challenge technology-based decisions that could lead to harm. It’s also worth considering this question: what are the consequences of making a mistake — particularly one based on ethics? Will accountability lead to team-wide learning or to scapegoating?

Like laws, ethics also vary from country to country. This makes deciding right from wrong even more challenging for international businesses.

The CIO Network panel felt that AI makes these legal and ethical issues even more pressing because the technology is such a force multiplier. AIs work so quickly and at such scale that they can amplify bias and dysfunction.

Creating an ethical framework

Given the criticality of ensuring ethical correctness, many organisations strive to create an ethical framework. The panel felt this was an excellent idea — a framework is a good way to establish a standard baseline that can be shared and understood across an organisation.


But frameworks aren’t without their faults and challenges. Firstly, there is a proliferation of frameworks: organisations either build their own independently or adopt one of the many standards available.

The group felt that this process of building new frameworks, or adopting and adapting existing ones, was somewhat inefficient. If organisations talked to each other, they could share experiences and avoid duplication and wasted resources.

Mention of other organisations led the panel to discuss competitors — and a point where frameworks might start to falter. What happens if a competitor pays lip service to ethics, or disregards it entirely, and in doing so steals market share and boosts its bottom line? This led to a critical observation: organisations must consider how they incentivise managers. Do managers get a bonus for beating financial targets and delivery deadlines? Are they, by implication, being measured using metrics that themselves have no ethical component?

Moving on, the panel explored what it means to be a director and how directors’ responsibilities might need to be updated in light of AI’s rise. It was observed that being declared bankrupt can bar people from being company directors — raising the question of whether serious ethical failures should carry comparable consequences.

Education

Education throughout one’s lifetime is essential to cementing good ethical practices within the IT industry. The panel felt there is a strong case for beginning this in the earliest years of school-based education, focusing on the question: ‘what are the dos and don’ts of living in this digital world?’

Teaching children to code is very common today, and it was felt that an introduction to ethics should accompany this. If this happens, ethical thinking and questioning skills will become a natural part of a career.

The panel also discussed the importance of reverse mentoring. Often, young people arrive in an organisation bursting with technical understanding and ambition, but they need to gain more experience of business, context and consequence. These factors take time to develop.

Reverse mentoring can provide space for leadership to learn new technology from early career professionals. In return, recruits can benefit from their seasoned colleagues’ experience of technology’s broader impacts.

The panel felt that education must do more than teach people how to use computers. It must equip them to be good digital citizens who can temper technical skills with critical thinking and ethical knowledge.

What is the CIO Network?

Vibrant, animated and practical, the CIO Network aims to provide leaders with a space to form ideas and ultimately shape tomorrow’s technology agenda.

Want to connect with like-minded CIOs and discuss your challenges and triumphs? Find out how you can join the next event by getting in touch with our team today. Please note the CIO Network is non-commercial.

Taking things further

BCS’ Ethical Build of AI certification is an exclusive learning programme that supports digital professionals in developing AI technologies responsibly and navigating ethical concerns from security to human rights.

BCS March Policy Jam posed the following question: how can ethics and professionalism address the risks and challenges of emerging technologies?