The professional body for IT has welcomed the government’s proposed innovation-friendly and flexible approach to regulating AI, but says more factors need consideration to ensure the proposals maximise the public benefit of AI.
The government’s proposals, set out in ‘Establishing a pro-innovation approach to regulating AI’, outline how it intends to keep pace with, and respond to, the new and distinct challenges and opportunities posed by AI.
BCS, The Chartered Institute for IT agrees that a light-touch, risk- and context-based approach is sensible given that AI is still a set of emerging and rapidly evolving technologies. The professional body for IT supports the proposals to extend the remit of existing regulators to deal with AI based on its use and likely impact, and to focus on addressing issues where there is clear evidence of real risk or missed opportunities.
BCS also supports the proposal to use cross-sectoral principles tailored to the distinct characteristics of AI which should prove effective as a basis for future regulation. However, the Institute says there are gaps that need to be addressed.
Adam Leon Smith, Chair - BCS Software Testing Specialist Group said: “The UK has proposed a more flexible approach to regulating AI than the EU, which is welcome. In the detail though, it is important that the UK uses international standards and encourages the EU to adopt the same standards. With flexibility in the ‘what’ and convergence on the ‘how’, the UK can not only maintain its export capability, but also aim to be a world leader in AI conformity assessment and assurance.”
BCS welcomes regulatory proposals but with caveats
The Institute broadly welcomes the regulatory proposals, but with caveats. While the light-touch approach is positive in that it enables innovation, there are areas that need more consideration to ensure the proposals maximise the public benefit of AI. The proposed cross-sectoral principles are appropriate and useful, but should be extended further.
For example, AI systems must have appropriate safeguards to ensure they remain technically sound and are used ethically under reasonably foreseeable exceptional circumstances, as well as under normal circumstances. Organisations must show they have properly explored and mitigated the reasonably foreseeable unintended consequences of AI systems.
AI systems should be standards-compliant to enable effective use of digital analysis and auditing tools and techniques. Auditable data on AI systems should be generated in a standardised way that can be readily digitally processed and assimilated by regulators. AI technologies often operate globally and need to comply with the legal requirements of other countries. Therefore, organisational governance must be capable of dealing with complex software supply chains that are distributed across different legal jurisdictions.
Steve Sands, Chair - BCS Information Security Specialist Group (ISSG) added: “At its core, BCS stands for ‘Making IT good for society’. The evolution of AI is a great example of an emerging technology which will touch everybody’s life in ways we have yet to understand. The BCS supports innovation in all forms and will always take an interest in ensuring there are appropriate checks and balances to prevent unethical or malicious use and protect the fundamental rights and freedoms of citizens.
To function effectively, AI schemes will often require access to a large quantity of data. As with existing data-driven solutions, this creates information security risks. Organisations embarking on AI programmes need to ensure they work within the proposed ‘light touch’ regulation when developing AI, whilst also complying with existing privacy rules where there is an intersection with use of personal data.”
Freedom and autonomy to innovate responsibly
BCS says the proposals need to discuss how regulators will support organisations to develop governance that enhances their freedom and autonomy to innovate responsibly. The proposals must also ensure there will be minimal divergence between the UK approach and that currently being developed by the EU, to enable UK companies to compete more easily in European markets.
Discussion must also take place on how regulators, such as the Information Commissioner’s Office and Ofcom, will be able to handle the increased workload caused by the extensions to their remit. This workload could be significant given both the frequency of changes that AI systems undergo and the possible impact of the Online Safety Bill on Ofcom’s capacity. Data quality must also be given more consideration.
BCS also says the proposals should explain how regulatory overlap will be managed when an AI system falls within the remit of multiple regulators, each with different interpretations of the cross-sectoral principles. For example, fairness may be interpreted differently in different contexts. The proposals must also consider how to carefully phase in new AI regulations, given the change-management challenges organisations will face in preparing for AI regulatory compliance.
Transparency and appropriate checks and balances must be ensured to address legitimate concerns over fundamental rights and freedoms that may arise if AI regulation is subject to legislative exceptions and exemptions. Consideration must also be given to how regulation can foster the development of good professional practice for the design, development, and use of AI systems.