BCS Chief Executive Sharron Gunn has highlighted the importance of establishing a professional framework for AI assurance
What is AI assurance?
AI assurance refers to the processes and professional practices used to assess whether an AI system is safe, trustworthy and working as intended throughout its lifecycle. This includes looking at how an AI system is designed, developed and deployed, as well as how it is governed and how risks are identified, managed and monitored over time.
In practice, organisations may seek independent AI assurance to give confidence that their systems meet expected technical, ethical and governance standards.
What’s next for AI assurance?
The Roadmap set out plans to create a Consortium of organisations working across the quality and standards infrastructure. The Consortium will be tasked with creating a Code of Ethics and a skills framework, with a view to ultimately establishing certifications and standards.
Why is AI assurance important?
AI assurance helps to reduce risk, prevent harm and build trust in AI systems. As AI is increasingly used in sensitive and high-impact contexts, organisations need confidence that systems are reliable, explainable and properly governed.
BCS’s role in AI assurance
BCS featured prominently in DSIT's "Roadmap to Trusted Third-Party AI Assurance", published in September 2025.
BCS’s work focuses on ensuring that those working in AI assurance have the technical understanding, professional judgement and ethical grounding required to assess AI systems effectively. This supports the development of a skilled and credible workforce capable of enabling responsible AI adoption across the economy.
FAQs
How does AI assurance support trustworthy AI systems?
AI assurance gives organisations and the wider public confidence that AI systems work as intended, are safe and can be trusted. Just as a financial auditor independently verifies a company's accounts, AI assurance professionals independently test and evaluate AI systems against agreed standards, identifying risks and verifying that systems behave reliably and ethically.
As AI becomes more embedded in business, public services and everyday life, assurance is what makes it possible to adopt AI confidently and enables innovation to scale. A thriving AI assurance market supports faster, safer AI adoption, drives economic growth and helps position the UK as a world leader in responsible AI.
Who is responsible for AI assurance within organisations?
There is currently no single regulatory requirement that determines who within an organisation is responsible for AI assurance, and practice varies. In many organisations, responsibility sits across a combination of teams, including risk, technology, legal, compliance and audit, depending on the organisation's size, sector and appetite for AI governance.
As the field matures, we expect to see more organisations designating specific roles for AI risk and assurance, whether through an internal AI risk or audit function, by commissioning independent third-party assurance, or through a combination of both. The AI Assurance Consortium is working to develop the professional standards and frameworks that will help organisations understand what good looks like and make informed decisions about how to resource it.
What role do professional bodies play in AI assurance?
Professional bodies are essential to building a trusted, high-quality AI assurance market. They set and maintain the standards that define what it means to be a competent, ethical practitioner. They develop the certifications and registrations that give employers and clients confidence in the people they are working with. And they provide the ongoing learning and development that keeps professionals up to date, which is particularly important in a fast-moving field like AI.
In a sector as new and rapidly evolving as AI assurance, professional bodies also play a convening role by bringing together experts from across industry, government, academia and civil society to agree on the skills, competencies and ethical frameworks the profession needs. Unlike many fields, AI assurance spans multiple professions, and this convening of professional bodies is critical to allowing a high-quality AI assurance market to evolve.
How is the UK approaching AI assurance and governance?
The UK already has a significant and growing AI assurance market, estimated in 2024 to be worth over £1 billion and employing around 12,500 people. The UK's approach to AI governance has to date been largely sector-based, with regulators in individual sectors, such as financial services, taking responsibility for how AI is used in their domains.
Building on these foundations, the government's Roadmap to Trusted Third-Party AI Assurance sets out an ambition to grow and professionalise the UK's AI assurance market. This includes supporting the development of third-party AI assurance, where independent professionals externally verify that AI systems work as intended, as well as internal assurance functions within organisations. The AI Assurance Consortium is a central part of this agenda, tasked with developing the professional infrastructure that will underpin a world-leading AI assurance sector.
What is the AI Assurance Consortium?
The AI Assurance Consortium is a multi-stakeholder body convened by the Department for Science, Innovation and Technology (DSIT). It brings together professional bodies, standards organisations, industry, civil society and independent experts to develop the foundational building blocks of a recognised AI assurance profession in the UK.
The Consortium has three core deliverables for its first year: a voluntary Code of Ethics for AI assurance professionals, a skills and competencies framework, and guidance on information access requirements for assurance providers. Alongside these, the Consortium will examine the data standards that underpin effective AI assurance and how the UK's approach aligns with international frameworks including those of the EU and US. It will also take a longer-term view of the profession itself by considering what it will mean to be an AI assurance professional in the years ahead, how the UK can grow its AI assurance market, and how British expertise in this area can be recognised and valued internationally. Beyond producing outputs, the Consortium will advocate for the AI assurance profession and its adoption, and support the sustainable and innovative growth of AI in the UK.