While governments are eager to accelerate adoption of new technologies, including AI, they must be careful not to compromise trust along the way, writes Sharon Moore MBE CITP FBCS.

The need to build responsibility into the use of artificial intelligence (AI) is not news. Although it has taken much of the technology industry a long time to take ethics seriously (as colleagues at BCS, The Chartered Institute for IT, will vouch), we should act responsibly with technology at every stage of its lifecycle.

For governments this is especially true.

Trust in government is intrinsic to its role in society and is key to tackling the major challenges we collectively face: climate change, pandemics, financial crises and more. In practice, trust in government is linked to compliance with policy and regulations, to respect for property rights and privacy, and to a reduction in the cost of doing the business of government.

Governments must strive to be seen as the most trusted institutions, yet the OECD's 2021 trust survey showed that just 40% of respondents trust their national government, falling to 35% in Great Britain. Edelman reported a more favourable, but still low, figure of 52% of people globally trusting government in 2022.

So, while governments are eager to accelerate their adoption of new technologies, including AI, and incorporate the latest advances in generative AI and foundation models, they must be careful not to compromise trust along the way.

This age of AI is a real opportunity for governments to increase trust — empowering citizens, improving productivity and effectiveness, and reducing the ever-increasing operating costs of the public sector. In fact, it’s expected that 39% of public sector working hours will benefit from human-AI collaboration and automation, which is good news for citizens and government constituents.

Unlocking government-held data

Government agencies have many registers available to them: citizen records with details as diverse as permits and family relations; inventories of information about private and public assets such as properties, streets, traffic signs and trees. They hold data about businesses, migration, public safety, education and more. Much of this data is siloed and disconnected, and while that separation is by regulatory design in some circumstances, there is an opportunity to gain more value from government-held data. AI, and generative AI in particular with its greater capability for natural language processing, brings the potential to summarise large datasets, to extract insight, and to inform decision-making processes in citizen services, taxation and justice, among others.
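As a minimal sketch of the kind of summarisation involved, the snippet below condenses a long case record into a short briefing note using an off-the-shelf open model via the Hugging Face transformers library. The model choice and the case record are illustrative assumptions, not recommendations.

```python
# A sketch of condensing a long case record into a short briefing note.
# Assumes the Hugging Face transformers library is installed; the record
# text and the choice of model are illustrative only.
from transformers import pipeline

summarise = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

case_record = (
    "Applicant holds two active permits (street trading, issued 2021; "
    "temporary events, issued 2023). Household of four, registered at the "
    "same address since 2019. Three prior enquiries about housing support, "
    "none of which progressed to an application."
)

# max_length and min_length bound the length of the generated summary.
summary = summarise(case_record, max_length=60, min_length=15)
print(summary[0]["summary_text"])
```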

By reducing information overload, AI brings us closer to an operational goal that has been in place for some time: focusing on high-value work, particularly helping those who need it most, while significantly reducing repetitive, costly, low-value tasks. Just as importantly, many services could benefit from a dramatic improvement in response times, improving citizen satisfaction.

Everyday impact of governmental AI

In the early stages of the COVID-19 pandemic we saw government organisations around the world rapidly adopt chatbots to ease the burden of disseminating constantly changing information to concerned citizens. We can expect governments to continue embracing generative question answering to expedite customer enquiries ('customer' is a deliberate word choice, as not all users of public services are citizens) such as 'am I eligible?', 'how do I apply?' and 'what documentation is needed?', providing immediate, context-aware and personalised responses. These virtual assistants could guide a person through the entire submission of an application.
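As a toy illustration of the routing step behind such an assistant, the sketch below matches an enquiry to the closest entry in an approved FAQ by simple word overlap. A production system would instead ground a generative model in approved content; the questions and answers here are invented.

```python
# Toy enquiry routing: return the answer for the approved FAQ entry that
# shares the most words with the citizen's question. Everything here is
# invented for illustration.
FAQ = {
    "am i eligible for housing support": "Eligibility depends on income and residency; see the housing support guidance.",
    "how do i apply for a permit": "Applications are made online through the permits portal.",
    "what documentation is needed for child benefit": "You will need the child's birth certificate and proof of address.",
}

def answer(enquiry: str) -> str:
    words = set(enquiry.lower().replace("?", "").split())
    best_match = max(FAQ, key=lambda q: len(words & set(q.split())))
    return FAQ[best_match]

print(answer("What documentation do I need for child benefit?"))
```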

Taking this a step further, AI offers governments the ability to be much more proactive in how they serve the public: automatically identifying people's needs, determining the services for which they are eligible, and reaching out with information about those services. How much simpler would this be for people who genuinely need support and don't know what help they can get?
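A hypothetical sketch of that matching step might look like the following: given a citizen record and a catalogue of services with simple eligibility rules, list the services the person qualifies for. The field names, services and thresholds are invented for illustration and are not drawn from any real scheme.

```python
# A hypothetical matching step: list the services a citizen qualifies for,
# given simple eligibility rules. Fields, services and thresholds are
# invented for illustration, not drawn from any real scheme.
from dataclasses import dataclass

@dataclass
class Citizen:
    age: int
    annual_income: int
    dependants: int

SERVICES = {
    "winter fuel support": lambda c: c.age >= 66,
    "child benefit": lambda c: c.dependants > 0,
    "income supplement": lambda c: c.annual_income < 20_000,
}

def eligible_services(citizen: Citizen) -> list[str]:
    # Keep every service whose rule the citizen satisfies.
    return [name for name, rule in SERVICES.items() if rule(citizen)]

print(eligible_services(Citizen(age=70, annual_income=15_000, dependants=0)))
# ['winter fuel support', 'income supplement']
```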

Digital transformation with humans at heart

Governments are regularly held back from true transformation by legacy systems with tightly coupled workflow rules that require substantial effort and significant cost to modernise. The code generation capabilities of generative AI will accelerate the conversion of legacy applications to more modern languages, with workflows and rules extracted into forms that are easier to manage. Instead of seeking hard-to-find skills, agencies can reduce their skills gap and tap into evolving talent.
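As a minimal sketch of what that conversion step could look like, assuming access to the OpenAI chat completions API (any comparable model service would do), the snippet below asks a model to translate a fragment of legacy COBOL into Java. The model name and the COBOL fragment are illustrative, and any output would still need human review and testing before use.

```python
# A sketch of legacy conversion, assuming access to the OpenAI chat
# completions API. The model name and COBOL fragment are illustrative;
# outputs need human review and tests before any use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

legacy_code = """
IF APPLICANT-AGE GREATER THAN 65
   MOVE 'Y' TO PENSION-ELIGIBLE
ELSE
   MOVE 'N' TO PENSION-ELIGIBLE.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever is approved
    messages=[
        {"role": "system",
         "content": "Translate COBOL to idiomatic Java, preserving the "
                    "business rule exactly."},
        {"role": "user", "content": legacy_code},
    ],
)
print(response.choices[0].message.content)
```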

Using generative AI to create synthetic data can help governments protect sensitive or confidential information during testing stages. It has the potential to reduce inaccuracies and bias and to improve representation in governments' machine learning activities. Generative AI can also augment DevSecOps and quality engineering with scripts and test cases.
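A minimal sketch of the synthetic test data idea, assuming the open source faker library: generate plausible but entirely fictional citizen records, so test environments never touch real, sensitive data. The fields and value ranges are illustrative, not calibrated to any real population.

```python
# Synthetic citizen records for testing, assuming the open source faker
# library. No real data is involved; the fields and value ranges are
# illustrative, not calibrated to any real population.
import random
from faker import Faker

fake = Faker("en_GB")

def synthetic_record() -> dict:
    return {
        "name": fake.name(),
        "address": fake.address().replace("\n", ", "),
        "date_of_birth": fake.date_of_birth(minimum_age=18, maximum_age=95),
        "dependants": random.choice([0, 0, 1, 2, 3]),
    }

test_dataset = [synthetic_record() for _ in range(5)]
for record in test_dataset:
    print(record)
```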

Responsible AI also means that humans should remain at the heart of the services delivered by government. In fact, both the GDPR and the UK's Data Protection Act 2018 include safeguards relating to significant decisions based solely on automated processing.

Employees are already seeing positive outcomes from AI in their roles. Most surveyed workers in financial services and in manufacturing said it had improved their enjoyment of their jobs, and over half said they had seen mental health benefits too. Governments can take heart from these leading industries.

How much risk can governments afford?

Despite these positives, let's not forget our opening thoughts: the concept of trust between citizens and their government is becoming more complex, and we are more forgiving of people than of computers.

IBM has advocated for some time that governments must build five fundamental properties for trustworthiness into their AI lifecycles: explainability, fairness, transparency, robustness and privacy. The latest progress in AI adds its own complication to the story: for all their benefits, off-the-shelf, pre-built generative models also bring new risks. Many are trained on publicly available internet data that may be inaccurate or incomplete; they can unintentionally propagate bias, disinformation and other harms; and they can give rise to royalty and copyright disputes as well as other regulatory exposures.

Yet the reasons for governments to embrace AI are too compelling to justify delay. We will see them build in-house models and work with trusted partners. In support, they must begin every AI initiative by building in governance, so that it is a conscious decision rather than an afterthought. Organisations such as the Centre for Data Ethics and Innovation can help bring industry skills to bear for the benefit of government and citizens.

Perhaps the question should be: 'what is the risk to governments of not embracing AI?'

Sharon Moore is IBM’s CTO, Global Government (Civilian). The views in this piece represent her personal opinion, not IBM’s.