AI is at the heart of the government’s plan to kickstart an era of economic growth. Martin Cooper MBCS explores the government’s recent report on AI assurance and the AI assurance market, and asks what it means for the public and for business.
The UK government is investing in AI assurance to ensure artificial intelligence is developed and deployed safely, responsibly and at scale.
Writing the foreword to the government’s research and analysis report, Assuring a Responsible Future for AI, Secretary of State Peter Kyle MP explained: ‘My ambition is to drive adoption of AI, ensuring it is safely and responsibly developed and deployed across Britain, with the benefits shared widely.’
A key part of this vision is AI assurance — the tools to measure and communicate system trustworthiness.
AI assurance, Kyle explained, underpins adoption, boosts confidence and drives growth. He said: ‘AI is at the heart of the government’s plan to kickstart an era of economic growth, transform how we deliver public services, and boost living standards.’
As you read on, we’ll explore the government’s view on AI assurance and the AI assurance market by taking a close look at the report.
Summary of the Assuring a Responsible Future for AI report
The report sets out a practical approach to expanding the UK AI assurance market, with actions to support supply, raise demand and harmonise language and standards. The evidence suggests a market that already supports over 500 firms and contributes more than £1 billion in value.
The government's stated aim is to transform that foundation into a larger, coherent industry that provides independent checks, reassures the public and reduces barriers to safe AI adoption. Realising that aim requires investment in skills, quality infrastructure, and mechanisms to share model information safely. It also requires international engagement to align standards and to enable cross-border trade in assurance services.
What is AI assurance?
AI assurance provides tools, tests and governance to show whether an AI system behaves as intended. It covers audits, model evaluations, documentation and governance processes that produce evidence about safety, fairness and compliance.
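To make that concrete, the sketch below shows what one small assurance activity might look like in practice: an automated evaluation that scores a classifier’s accuracy and a simple fairness metric, producing evidence a reviewer could record. It is a minimal illustration in plain Python; the data, the choice of metrics and the thresholds are hypothetical, not drawn from the report.

```python
"""A minimal sketch of one assurance activity: a model evaluation that
produces evidence about accuracy and a simple fairness metric. The
data, metrics and thresholds here are illustrative only."""

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the ground truth.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    # Absolute difference in positive-prediction rates across groups.
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    values = sorted(rates.values())
    return values[-1] - values[0]

if __name__ == "__main__":
    # Hypothetical outcomes for a binary classifier under review.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    evidence = {
        "accuracy": accuracy(y_true, y_pred),
        "demographic_parity_gap": demographic_parity_gap(y_pred, groups),
    }
    print(evidence)

    # In a real review, thresholds would come from a technical standard
    # or a buyer's requirements; these values are placeholders.
    assert evidence["accuracy"] >= 0.7, "accuracy below agreed threshold"
    assert evidence["demographic_parity_gap"] <= 0.25, "fairness gap too large"
```

A real assurance exercise would go much further, covering documentation, governance and independent audit, but the principle is the same: agreed tests run against a system, producing evidence that can be checked by someone other than the developer.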
How does AI assurance relate to technical standards?
Technical standards establish common tests and metrics, making assurance repeatable and comparable. When standards exist, assurance providers can apply the same benchmarks across products and sectors. Standards help reduce uncertainty for buyers and make it easier for providers to demonstrate their capabilities.
The report sets out an active role for the UK in shaping and adopting standards through the AI Standards Hub. The approach aims to translate high-level principles into operational tests and to support participation in international standards bodies, enabling UK suppliers and assurers to trade more easily overseas.
What are the UK government's plans in relation to AI assurance and the UK AI assurance market?
The government focuses on four actions. First, it will develop an AI assurance platform that brings together guidance, tools and an ‘essentials toolkit’ aimed at smaller firms. The platform aims to reduce friction in procurement and make basic assurance activities accessible to start-ups and small suppliers.
Second, it will publish a roadmap for trusted third-party assurance to expand independent supply. The roadmap will identify capability gaps and propose steps to incentivise new providers.
Third, it will coordinate research and provide targeted funding to accelerate the development of assurance methods and tools. Fourth, it will develop a terminology tool to improve consistency across sectors and jurisdictions, so that different actors can communicate the same concepts in the same way.
Why is it doing these things?
The government links assurance to safety and to economic opportunity. Assurance is intended to reduce risk, increase user confidence, and accelerate adoption in both public services and the private sector. The report argues that a coherent assurance ecosystem will attract international firms and support the development of new services within the country. It draws a parallel with cybersecurity assurance where clear standards and market actors helped create a sizeable domestic industry. The government presents assurance as both a safety measure and a market development strategy.
How big is the AI assurance market in the UK?
The report estimates 524 firms supply AI assurance goods and services in the UK. Those firms produce approximately £1.01 billion in gross value added and employ around 12,572 people. Among them, 84 are identified as specialised AI assurance firms. The specialised firms account for approximately £360 million of the reported GVA. The study notes a rapid rise in specialised firms since a 2023 analysis that found 17 such companies. Those numbers show an existing base of capability on which the government seeks to build.
And how large could it be in the future?
The report provides a market projection that assumes action to scale both demand and supply. Under those assumptions, the UK AI assurance market could exceed £6.53 billion by 2035. The government situates this figure within a broader projection that sees the wider UK AI market growing substantially by the same date. The implication is that assurance could become a significant commercial sector if barriers are addressed and buyers incorporate assurance into their procurement processes.
Why is AI assurance important for businesses?
Assurance helps businesses demonstrate to regulators, customers and partners that their systems meet the required standards. It reduces legal and reputational risk, and it supports informed procurement decisions.
For suppliers, it creates a potential revenue stream. For buyers and for the public, it provides tangible evidence that systems have been tested and reviewed.
The report emphasises that public sector procurement can drive demand by making assurance a condition of purchase, which in turn encourages private buyers to follow suit.
In which sectors is AI assurance most mature?
Assurance is most advanced in sectors with established safety and compliance regimes.
Financial services, life sciences and pharmaceuticals are at the forefront. The connected and autonomous vehicles sector also shows advanced practice because it builds on established safety testing and product certification. These sectors combine technical capacity with a culture of external audit, which makes adoption faster.
Where regulation already demands rigorous testing, AI assurance fits into an existing assurance ecosystem.
How does the public feel about AI assurance?
The report summarises focus groups with 35 members of the public. Participants emphasised the importance of independent assessment and expressed distrust of checks conducted only by profit-seeking developers. The findings show that public confidence depends on perceived independence, transparency and the identity of the assurer; the public places as much weight on who performs the checks as on what the checks contain.
What are the current challenges and barriers to building an AI assurance market in the UK?
The report groups barriers into demand, supply and interoperability problems. Demand is weak because many buyers do not fully understand the risks associated with AI or the value of assurance. Supply suffers from limited access to models and the data needed for independent checks, as well as uneven quality infrastructure.
Interoperability problems arise because sectors and jurisdictions use different frameworks and vocabularies. The result is fragmentation that raises costs and limits trade for assurance providers. Commercial sensitivities and intellectual property concerns add further friction to meaningful independent assessment.
Why can't developers ensure the quality of their own AI products?
Self-assurance creates an apparent conflict of interest, and developers may lack the multidisciplinary skills needed for a robust review. Buyers and the public place greater trust in third-party evaluations.
The report therefore endorses trusted independent assurance while noting that third parties require appropriate access to models and to data to be effective. Independence and expertise are the central rationales for separating development and assurance.