The UK government has released details of how artificial intelligence (AI) will be regulated, rejecting the EU's model in favour of a less centralised approach.

The BCS Director of Policy, Dr Bill Mitchell OBE, has produced a briefing and sought expert analysis from the chair of the Software Testing Specialist Group, Adam Leon Smith, CITP, FBCS.

The UK’s approach will be to leave AI regulation to existing regulators that focus on areas such as broadcasting, financial services, healthcare and human rights.

In its policy paper ‘Establishing a pro-innovation approach to regulating AI’, published by the Department for Digital, Culture, Media & Sport, the government set out its proposals, saying: “AI is a rapidly evolving technology with scope of application and depth of capability expanding at pace. Therefore, we do not think the government should establish rigid, inflexible requirements right now. Instead, our framework will ensure that regulators are responsive in protecting the public, by focusing on the specific context in which AI is being used, and taking a proportionate, risk-based response.”

Existing regulators

Adam Leon Smith, CITP, FBCS, said: “The new UK principles imply a similar technical direction to the EU AI Act, but there are significant differences in the UK implementation approach.

“In terms of the regulatory ecosystem, it is notable that the UK is planning to extend the remit of existing regulators such as the Medicines and Healthcare products Regulatory Agency (MHRA) rather than create new AI notified bodies. This may turn out to be similar in practice, as existing EU medical notified bodies may simply decide to extend their remit by registering as AI notified bodies.

He added: “Whilst the UK approach makes sense, especially for financial services and medical devices, which have a mature regulatory ecosystem, it is unclear whether the Information Commissioner’s Office and Ofcom will be able to handle the increased workload. That workload is significant given the frequency with which AI systems change, as well as the expected impact of the Online Safety Bill on Ofcom.

“The EU is adopting a risk-based approach: it is specifically prohibiting certain types of AI and requiring high-risk use cases to be subject to independent conformity assessment. The UK is also following a context-specific, risk-based approach, but is not trying to define that approach in primary legislation; instead, it is leaving it to individual regulators. One outcome of this may be that less regulated sectors, such as recruitment, fall outside the UK’s proposals.

“The EU is defining AI based on the technology used. The UK, by contrast, has defined core characteristics of AI in relation to autonomy and adaptability. It is likely that both approaches will end up encompassing the same technologies.

Explainability

“Another notable difference is the focus on explainability in the UK proposal. The policy paper states that regulators may deem that high-risk decisions that cannot be explained should be prohibited entirely. The EU has not gone so far, merely indicating that information about the operation of the systems should be available to users. Interestingly, the UK proposal also requires that accountability for AI systems must rest with an identified legal person - a bit like a Data Protection Officer - although this can be a company rather than a natural person.”

The government is undertaking a public consultation on its proposals and plans to publish its White Paper in late 2022.