In this blog, Mandy Chessell CBE FREng FBCS, Distinguished Engineer and Master Inventor at IBM, discusses a new joint study between the CDEI and BCS to develop practical guidance on embedding ethical practice in MSc courses in Machine Learning and Data Science.

Equipping future AI practitioners with the right ethical skills

AI is becoming ubiquitous. Around the world, more than fifty sets of ethical AI principles have now been published, and new institutes for ethical AI are springing up everywhere. All of this shows that society wants robust AI safeguards and AI practitioners who know how to behave ethically. Yet, despite these lofty aspirations, getting the ethics of AI right in practice is proving hard.

The government’s independent advisory body, the Centre for Data Ethics and Innovation, has started a study in partnership with BCS to determine why ethical AI is proving so challenging in practice, and how to make sure that the next generation of AI practitioners graduating from universities is fully capable of facing these challenges. Having spent many years at IBM looking at these kinds of problems, I’m pleased to be supporting this effort by chairing the project steering board.

The study has just started in earnest, with our kick-off meeting at the end of November at the Royal Academy of Engineering, where the project steering board took a hard look at the scope of the problem and at what the study needs to do to reach valuable, impactful conclusions. This blog summarises what we hope to do in the study and how we’ll go about our work; at the end of the blog I’ve listed the members of the board.

Aims

The emphasis of the study is on uncovering the practical mechanisms practitioners need in order to embed ethical principles thoughtfully throughout the product lifecycle, from R&D to deployment, and across business management processes.

In the first instance, the output of the study is intended to help universities understand what ethical practice they will need to embed in the new machine learning and data science MSc courses being established through government1 funding and, in particular, how industrial placements can be structured to best support the development of ethical practice in ways that can be rigorously assessed.

The problem of ethical AI practice

One reason ethical AI practice is hard is that it involves much more than understanding ethical principles, and more than learning how to code technical solutions to ethical challenges into systems. Ethical practice includes being able to interact thoughtfully with business managers, engineers, legal departments, marketing departments and so on, to help them understand how ethical concerns need to be addressed as a business.

It also requires embedding ethical behaviours within an organisation: within organisational governance as well as within technical solutions as features of products and services. Another reason the problem is hard is that AI is being applied in real-time business contexts in novel ways, making it difficult to uncover how and why poor decision-making models are generated automatically, and to put effective governance in place to remedy issues as soon as they occur.

Defining AI

To get to grips with the unique ethical issues that AI can cause, we need to identify what sort of AI we’re talking about. There are many definitions of AI, most of them so broad as to be of little use for the purposes of our study. We have therefore settled on characterising AI as any automated system that relies on statistically based best judgement, learnt from data, to make real-time decisions that can have serious consequences for people.

This excludes systems that make decisions based solely on deterministic rules that can be formally analysed (even if, in practice, that may be virtually impossible), and systems whose decisions are based on statistical analysis that people can scrutinise offline and validate before implementation (as has historically been the case in business analysis systems). Importantly, the adopted definition includes machine learning systems that can learn latent variables2 from large data sets, beyond human scrutiny, and that are used to create decision-making models that discriminate between people because of those latent variables.
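To make the latent variable concern concrete, here is a minimal sketch. It is purely illustrative: the data, the group and postcode_area variables, and the biased outcomes are all invented for this example, not drawn from the study. It shows how a model that is never given a protected attribute can still learn a proxy for it through a correlated feature.

```python
# Hypothetical illustration: a model never sees the protected attribute,
# yet learns a latent proxy for it via a correlated feature (postcode).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g. membership of a demographic group).
# The model is never given this column.
group = rng.integers(0, 2, n)

# A 'neutral' feature (postcode area) that happens to correlate
# strongly with group membership: it matches 'group' 90% of the time.
postcode_area = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical outcomes in this toy data are biased against group 1.
favourable = np.where(group == 0, rng.random(n) < 0.7, rng.random(n) < 0.3)

# Train on the neutral feature only.
X = postcode_area.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, favourable.astype(int))

# The model's decisions track the protected group via the latent proxy.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted favourable rate = {pred[group == g].mean():.2f}")
```

Nothing in the training data names the protected group, yet the model’s decisions track it closely. Spotting this kind of behaviour, and putting governance around it, is exactly the sort of practical skill the study is concerned with.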

More precisely, then, for our study we will restrict our attention to AI systems that:

  • are automated systems that must process data streams in real time;
  • use probabilistic self-learning algorithms to inform decisions that will have significant consequences for people;
  • are used where it is difficult to uncover how decisions are derived;
  • are used where decisions cannot be contested through deterministic reasoning; and
  • ultimately rely on best judgement that requires an understanding of the broader context.

We think this characterisation means we will be focused on the kinds of AI systems already being developed and deployed that are most likely to cause significant ethical challenges when they are let loose on people.

Working across business boundaries

Properly examining ethical concerns requires not only a thorough understanding of how AI itself can lead to ethical issues, but also an understanding of an organisation’s business practices and of how interdisciplinary teams work together across business boundaries.

Figure 1 illustrates overlapping business constraints that can lead to difficult ethical choices3. Examining the interactions of interdisciplinary teams will require MSc programmes to consider the role of business processes in effective AI system design, development and adoption. For MSc students to gain a genuinely in-depth understanding of these processes, they should ideally be exposed to them through industrial placements.