On Monday 21 July, Dr Bill Mitchell OBE FBCS, Chair of the BCS Influence Board, attended a roundtable on trustworthy AI in public services, hosted by Feryal Clark MP, Minister for AI and Digital Government, at the Department for Science, Innovation and Technology (DSIT).
The roundtable brought together expert voices from across civil society and policy, including representatives of the Ada Lovelace Institute, TUC, Institute for the Future of Work, Good Things Foundation, Institute for Government and the King’s Fund. The discussion focused on public perceptions of AI, the barriers to wider adoption in public services, and how to build trust in the responsible use of AI across government.
Representing BCS, The Chartered Institute for IT, Bill spoke directly to the Minister about our priorities, particularly the importance of public trust and the role of professional registration and membership in supporting ethical and accountable use of AI.
He has summarised the key points he made below.
Reflections from the DSIT roundtable, by Bill Mitchell
It was a privilege to represent BCS, The Chartered Institute for IT, at the Department for Science, Innovation and Technology roundtable on 21 July, chaired by the Minister for AI and Digital Government, Feryal Clark MP. The session focused on trustworthy AI in public services.
Here is a high-level summary of the points I contributed during the discussion:
- There is a danger that the public perceives AI as being developed by ‘tech bros’ and ‘script kiddies’ who have been handed a great big box of matches and let loose in a toy shop. This narrative is undermining public trust in AI systems.
- Government has produced excellent guidance on the professional capabilities and ethical behaviours required to adopt AI responsibly in the public sector, for example the Model for Responsible Innovation and the AI Playbook. This guidance could form the basis of a pathway to professional registration with a recognised professional body, ensuring that public sector practitioners are accountable for how they implement it. That accountability, in turn, would have a significant positive impact on public perceptions of AI trustworthiness.
- AI assurance is positive and necessary, but for it to be truly effective it must be supported by the wider IT profession. This is essential to guard against the system being gamed or against “compliance washing” (a phrase used by the Ada Lovelace Institute in their Go Pro report).
- Finally, generative AI systems are inherently not trustworthy: they can generate plausible but inaccurate information and present it as fact. These systems will only be perceived as trustworthy if the people developing, managing, operating, and maintaining them are themselves demonstrably trustworthy. One clear way to offer the public this reassurance is to ensure those professionals are accountable through registration with a professional body.
These reflections align with positions BCS has put forward publicly on multiple occasions, and I was pleased to reiterate them in this important forum. Of course, time will tell what impact they have, but it was heartening to see a range of expert views being heard at the table.