Theo Knott, Policy Programmes Manager at BCS, outlines how AI is being used in UK policing and reports on a recent roundtable event in Parliament that discussed the issue.

In few areas are the ethical implications of AI so vast, and the potential controversies so inflammatory, as in policing. Equality under the law is one of the bedrocks on which our liberal democracy is built and, while it is not always perfectly adhered to, few would disagree that great care needs to be taken around any decision that could impinge on it. As a result, the introduction of AI into policing is a particularly worrisome prospect for many. While its potential to transform policing has been compared to that of fingerprint analysis and DNA profiling, if its rollout is handled poorly it could entrench and expand the very biases and inaccuracies it should be helping to stamp out.

The All-Party Parliamentary Group (APPG) on Data Analytics is running an inquiry on technology and data ethics to look at these questions in policing and other contentious areas, and to provide recommendations to change policy for the better. With BCS championing the importance of ethics in AI and in technology more generally, we were invited to participate in the roundtable discussion in Westminster.

Before delving into the details of the event itself, it is worth looking at how AI technologies are currently used in UK policing. While Kent Police and London's Metropolitan Police have conducted a variety of trials, the best-known UK use is Durham Constabulary's Harm Assessment Risk Tool (HART), a predictive algorithm which evaluates the risk of reoffending for certain offenders. Those deemed to be at a lower risk of reoffending are able to take part in the constabulary's Checkpoint programme, which allows them to be dealt with without going to court. It's worth noting at this point that HART and all other UK schemes are only supposed to be used as aids to human decision making, rather than as decision makers in their own right.
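HART's full internals are not public, though it has been described in published work as a random forest classifier that sorts individuals into risk categories. Purely as an illustrative sketch, and emphatically not Durham's actual system (the features, data and categories below are all invented), a decision aid of this general shape might look like:

```python
# Illustrative sketch only: a toy risk-categorisation aid in the broad
# style of tools like HART. All features, data and labels are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: rows of case features, with labels drawn
# from observed reoffending outcomes (0=low, 1=moderate, 2=high risk).
X_train = rng.random((500, 4))      # e.g. age, prior offences (invented)
y_train = rng.integers(0, 3, 500)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def advise(case_features):
    """Return a risk category as *advice* for a human decision maker.

    The model never acts on its own: a "low" recommendation merely flags
    eligibility for something like the Checkpoint programme, which an
    officer must still confirm.
    """
    category = model.predict([case_features])[0]
    return {0: "low", 1: "moderate", 2: "high"}[category]

print(advise([0.4, 0.1, 0.9, 0.2]))
```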

Whether these programmes have been a success or a failure is debatable, and there have not yet been enough trials to make a fully formed evaluation. However, on the balance of evidence in the case of HART, its accuracy has been very similar to that of a human. It's not much of a leap to assume that, with more time and data, this will improve.

The issue for detractors of these systems is not necessarily the raw percentages of success or failure, but whether decisions are being made in an ethical way. While you can question a human officer and trace the process that led to an error, this becomes much more difficult with AI. There is also the problem of algorithms being developed on data that is biased in the first place: when this is the case, whatever is produced is likely to carry the same biases. And that is before you get to the problem of the people creating the algorithms embedding biases they themselves might hold.
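A toy example makes the mechanism concrete. Suppose historical records over-represent offending in one postcode because it was patrolled more heavily, not because its residents offend more; a model trained on those records will learn the skew as if it were signal. This sketch is entirely synthetic and is ours, not drawn from any real policing dataset:

```python
# Toy demonstration of bias inheritance, using synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

postcode = rng.integers(0, 2, n)   # two areas, 0 and 1
true_risk = rng.random(n)          # actual propensity: identical in both areas

# Historic labels: area 1 was patrolled more heavily, so more offences
# were *recorded* there, even though true_risk has the same distribution.
recorded = (true_risk + 0.3 * postcode + rng.normal(0, 0.1, n)) > 0.6

X = np.column_stack([true_risk, postcode])
model = LogisticRegression().fit(X, recorded)

# The model has learned to treat the postcode itself as predictive of
# risk: its coefficient comes out strongly positive.
print(model.coef_)
```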

It was these kinds of issues that the roundtable looked to tackle. With attendees from academia, industry and politics, it didn't take long until there was profound disagreement on almost everything of note, with views ranging from a wish to scrap every use of AI in policing to arguments that we needed to roll out such technology at the breadth and speed seen in various parts of the United States. Then there was the middle ground: a need to create robust frameworks and clear guidelines, backed by a regulator with clear powers.

While this is hardly the most scintillating of positions, it's undoubtedly one of the most salient. Currently, police constabularies have trialled AI on a piecemeal basis, predominantly because they have a reformist chief and a willingness to spend money up front in the hope of improving standards and driving efficiencies. While there are laws in this area, such as the Regulation of Investigatory Powers Act (RIPA), they were not set up to deal specifically with AI technologies, and there is no central authority to arbitrate on cases and potentially trial technologies in a more coherent manner.

Another area of broad agreement was the desire, at this point, for human involvement to remain central to the decision-making process. Although the technology to have algorithms make decisions unilaterally is readily available, taking a cautious approach seems a sensible option until more effective regulation is in place and the merits and limitations of AI technology are better understood in a policing context. Tied into this is the need to improve education in AI ethics and bias for those who will be writing and maintaining these sorts of policing tools, to reduce the chance of algorithms being biased from the outset.
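Keeping a human in the loop is as much a software design decision as a policy one. A minimal sketch of the pattern (the names and structure here are our own, not any constabulary's system) keeps the algorithm advisory and requires a recorded sign-off from an officer, which also creates the audit trail needed to question decisions later:

```python
# Minimal human-in-the-loop pattern: the algorithm proposes, a person
# decides. All names here are illustrative, not from a deployed system.
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    risk_category: str   # e.g. "low", "moderate", "high"
    rationale: str       # the inputs that drove the score, for scrutiny

@dataclass
class Decision:
    recommendation: Recommendation
    officer_id: str
    accepted: bool       # the officer may overrule the algorithm
    notes: str

def decide(rec: Recommendation, officer_id: str,
           accepted: bool, notes: str) -> Decision:
    """Record the human decision alongside the algorithmic advice,
    so every outcome can later be traced back and questioned."""
    return Decision(rec, officer_id, accepted, notes)

rec = Recommendation("case-001", "low", "no prior offences (invented)")
print(decide(rec, "officer-42", accepted=True, notes="Checkpoint offered"))
```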

Ultimately, the utility of roundtable events like this will be measured by their tangible effect on policy. The fact that there is a cohort of MPs who are deeply engaged with AI and ethics and willing to navigate the thorny issues it inevitably raises is encouraging, and it shouldn't be forgotten that the potential to save money and improve standards is at the core of the debate around policing and AI. However, as we have seen in recent years with personal data, there is a need to try to introduce ethical standards proactively, rather than desperately retrofitting them once the technology is already creeping into everyday life. Hopefully, through conversations like this, an appreciation of where we have gone wrong before and a willingness to confront difficult ethical questions, we can ensure AI is the positive force for policing that it can be.