Equipping future AI practitioners with the right ethical skills
AI is becoming ubiquitous. More than fifty sets of ethical AI principles have now been published around the world, and new institutes for ethical AI are springing up everywhere. All of this shows that society wants robust AI safeguards and AI practitioners who know how to behave ethically. Yet, despite these lofty aspirations, getting the ethics of AI right in practice is proving hard.
The Centre for Data Ethics and Innovation, the government’s independent advisory body, has started a study in partnership with BCS to determine why ethical AI is proving so challenging in practice, and how we can make sure the next generation of AI practitioners graduating from universities is fully capable of facing these challenges. Having spent many years at IBM looking at these kinds of problems, I’m pleased to be supporting this effort by chairing the project strategy board.
The study has just started in earnest, with our kick-off meeting at the end of November at the Royal Academy of Engineering, where the project strategy board took a hard look at the scope of the problem and what the study needs to do to reach conclusions that will have real impact. This blog summarises what we hope to do in the study and how we’ll go about our work; at the end I’ve listed the members of the board.
The study’s emphasis is on uncovering the practical mechanisms practitioners need in order to embed ethical principles thoughtfully throughout the product lifecycle, from R&D to deployment, and across business management processes.
In the first instance, the study’s output is intended to help universities understand what ethical practice they will need to embed in the new machine learning and data science MSc courses being established through government funding [1] and, in particular, how industrial placements can be structured to best support the development of ethical practice in ways that can be rigorously assessed.
The problem of ethical AI practice
One reason the problem of ethical AI practice is hard is that it is much more than just understanding ethical principles, and more than just learning how to code technical solutions to ethical challenges into systems. Ethical practice includes being able to interact thoughtfully with business managers, engineers, legal departments, marketing departments and so on, to help them understand how ethical concerns need to be addressed as a business.
It also requires embedding ethical behaviours within an organisation, including within organisational governance as well as within technical solutions as features of products and services. Another reason this problem is hard is that AI is being applied in real-time business contexts in novel ways that make it difficult to uncover how and why poor decision-making models are automatically generated and how to put in place effective governance to remedy issues as soon as they occur.
To get to grips with the unique ethical issues that AI can cause, we need to identify what sort of AI we’re talking about. There are many definitions of AI, most of which are so broad as to be of little use for the purpose of our study. We’ve settled on characterising AI as any automated system that relies on statistically based best judgement learnt from data to make decisions in real time that can have serious consequences for people.
This excludes systems that make decisions based solely on deterministic rules that can be formally analysed (even if in practice that may be virtually impossible), or where decisions are based on statistical analysis that can be scrutinised off-line by people to validate them before they are implemented (which historically has been the case in business analysis systems). Importantly, the adopted definition includes machine learning systems that are able to learn latent variables [2] from large data sets, that are beyond human scrutiny, and that are used to create decision-making models that discriminate between people because of those latent variables.
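To make the idea of latent variables concrete, here is a minimal sketch using a toy dataset and principal component analysis. It is purely illustrative (the data is random and the variable names are assumptions, not anything from the study): the point is that the two latent variables below are combinations of observed attributes learnt from the data, which no designer ever specified, yet a downstream model could use them to score and discriminate between people.

```python
import numpy as np

# Toy dataset: rows are people, columns are observed attributes
# (both the shape and the data here are illustrative only).
rng = np.random.default_rng(0)
observed = rng.normal(size=(100, 6))

# A simple latent-variable technique: principal component analysis
# via singular value decomposition. The "components" returned are
# combinations of attributes learnt from the data -- nobody
# manually specified them, and they appear nowhere as explicit
# inputs or outputs of the original data.
centred = observed - observed.mean(axis=0)
_, _, components = np.linalg.svd(centred, full_matrices=False)

# Project each person onto the first two learnt components: their
# positions on two latent variables a model could discriminate on.
latent_scores = centred @ components[:2].T

print(latent_scores.shape)  # one row per person, one column per latent variable
```

In a real deployed system the analogue of `components` may be millions of learnt parameters, which is what puts such variables beyond human scrutiny.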
More precisely then, for our study we will restrict our attention to AI systems that are:
- automated systems that must process data streams in real time;
- that use probabilistic self-learning algorithms to inform decisions with significant consequences for people;
- that are used where it is difficult to uncover how decisions are derived;
- that are used where the contestability of decisions is not deterministic; and
- whose decisions ultimately rely on best judgement requiring understanding of the broader context.
We think this characterisation means we will be focused on the kinds of AI systems already being developed and deployed that are most likely to cause significant ethical challenges when they are let loose on people.
Working across business boundaries
As well as a thorough understanding of how AI itself can lead to ethical issues, properly examining ethical concerns requires understanding an organisation’s business practices and how interdisciplinary teams work together across business boundaries.
Figure 1 illustrates overlapping business constraints that can lead to difficult ethical choices [3]. Looking at the interactions of interdisciplinary teams will require MSc programmes to examine the role of business processes in effective AI system design, development and adoption. For MSc students to gain a genuinely in-depth understanding of these processes, they should ideally be exposed to them through industrial placements.
Figure 1: overlapping constraints from IBM’s Ethics of Big Data and Analytics report by Mandy Chessell.
Industry-funded MSc programmes that are part of the AI Sector Deal will have such industry placements. For those placements to be of real value, business skills will need to be integrated into MSc curricula so that students are properly prepared before undertaking a placement.
Placements should also help students understand the impact of their work in the real world beyond their MSc course. That will require placements to be properly structured to ensure students explicitly see that impact. One of the most important aspects of our study will be looking at how these placements can be structured, implemented and assessed to best develop ethical practice.
The strategy board
I’m pleased that the project strategy board brings together a really diverse range of people with a strong ethical skill set, which is essential for a study looking at ethical practice. On the strategy board are:
- Chair: Mandy Chessell CBE FREng, Distinguished Engineer and Master Inventor at IBM
- Roger Taylor, Chair of CDEI, Chair of Ofqual
- Kriti Sharma, member of the government’s AI Council, founder of AI for Good, VP of AI and Ethics at SAGE, United Nations Young Leader for the Sustainable Development Goals
- Sherif Elsayed-Ali, Director, AI for Climate at Element AI
- Anthony Woolcock, Lead Data Scientist at the Energy Systems Catapult
- Emma Uprichard, University of Warwick, Fellow of Alan Turing Institute
- Caryn Tan, Digital Strategy Manager, Responsible AI, Accenture Applied Intelligence
- Helen Mayhew at QuantumBlack
- Roger Burkhardt at QuantumBlack
- Bill Mitchell OBE, BCS Policy Director, coordinator for the study
- Tim Cook, as observer for Office for AI
That covers what we hope to achieve through this study and who’s on our project strategy board. So, what’s next? None of our work will make much difference unless it has the backing of employers, government and universities. Our next step will be reaching out to key stakeholders through roundtable consultations to gather good practice and synthesise recommendations and guidance that will have the biggest impact at a national scale.
From there, we will extend our consultations to the wider AI academic and business community to get their input, so that our final report will include a comprehensive and well-tested set of conclusions and practical guidance that we can be sure will work and be widely supported.
1. Gov.uk press release: Government backs next generation of scientists to transform healthcare and tackle climate change
2. That is, variables that are not manually specified by designers and are not explicitly present in the model, nor present as inputs or outputs of the model.
3. TCG study report: Ethics for big data and analytics