Climate change is accelerating — and so, it seems, is the spread of false narratives. AI offers promising tools to counter climate disinformation, but the future of climate action might depend on how responsibly we use them. Tim Clark explains.

Our climate is changing. Human activity since the industrial revolution is the primary cause. The concentration of CO₂ in the atmosphere has risen from about 280ppm in the pre-industrial era to over 420ppm today, driving an unprecedented rise in global temperatures. In the UK, all ten of the warmest years on record have occurred since 2002.

There is an ever more pressing need to implement policies that prevent further climate change and mitigate the damage. This is why misleading narratives can be so toxic: they delay or even prevent action. Misinformation often results from well-intentioned individuals spreading false narratives by mistake, or in the belief that they are sharing the truth. But there is a greater threat: disinformation, which is purposefully created to mislead. It can amplify arguments that resonate with climate sceptics and doomers alike. This shift in narrative allows those responsible for driving climate change to duck their responsibility, while delaying much-needed action.

How can we combat this disinformation?

Climate conversations are rarely straightforward. There is a wide variety of disinformation, and different stories will resonate with different people. One story I heard was that the climate wasn’t warming because on a particular day it was a chilly -4°C in winter. I heard another person say we should stop boiling kettles, as water vapour traps more heat than carbon dioxide. While the first seems to come from common confusion between climate and weather, the second is more interesting. Because it's true.

Water vapour absolutely traps more heat than carbon dioxide, because it is more abundant. However, it is regulated naturally and cycles through the atmosphere rapidly. Carbon dioxide persists for centuries, resulting in long-term temperature rise, and that rise in turn increases the amount of water vapour in the atmosphere.

That’s what I wish I’d said. Instead, this line of scepticism caught me off guard, and I was unable to challenge it in the moment. This is where AI can help.

Large language models (LLMs) have proven themselves useful in supporting a range of tasks, and they could play a valuable role in combatting disinformation. Three areas will be essential to preventing disinformation from causing wide societal harm: creating counter narratives, preventing further spread and driving wider education.

Myths to facts

LLMs can create fact-myth-fallacy-fact responses tailored to specific pieces of misinformation, and even to specific people. Rather than blanket debunking, which can be easily dismissed with endless ‘what if’ scenarios, we can engage an individual and address their personal concerns with a custom response.

The above response to the water vapour false narrative was written with the help of ChatGPT, which provided an instant, tailored counter narrative. It’s common now for chatbots to link to their sources, bolstering their argument and nudging the user towards reliable information. They can search the web to validate sources and verify the provenance of facts or quotes. If misinformation is attributed to a trusted source, the real source can be identified and another myth debunked. Their conversational ability allows them to engage someone in their own language, or lets a human interlocutor refine the response.
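
To make this concrete, here is a minimal sketch of how such a fact-myth-fallacy-fact response could be generated programmatically. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt wording are illustrative choices for the example rather than a recommended setup.

```python
# Illustrative sketch only: generating a fact-myth-fallacy-fact counter-narrative
# with a large language model. Assumes the OpenAI Python SDK is installed and an
# API key is available in the environment; the model name and prompt wording are
# example choices, not a recommended configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are a climate communicator. Write a short
fact-myth-fallacy-fact response to the claim below:
1. Lead with the relevant fact.
2. State the myth once, clearly flagged as a myth.
3. Explain the fallacy that makes the myth misleading.
4. Close by reinforcing the fact.
Cite only well-established science and keep the tone respectful.

Claim: {claim}"""

def counter_narrative(claim: str) -> str:
    """Return a tailored fact-myth-fallacy-fact rebuttal for a single claim."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(claim=claim)}],
    )
    return response.choices[0].message.content

print(counter_narrative(
    "We should stop boiling kettles, because water vapour traps more heat than CO2."))
```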

These tools could be employed by social media sites to provide a counter narrative in-situ without needing to suppress information. Outright censorship doesn’t allow people to reconsider their beliefs, and it can even strengthen them: accusations of censorship, conspiracy theories and the Streisand effect can all amplify falsehoods.

Containing the threat

LLMs can also play a role in preventing the wider propagation of mis- or disinformation. Even if a counter narrative doesn’t make someone reconsider sharing, suspected misinformation can be dealt with in other ways.

Some social media websites already flag posts that have been forwarded or shared many times, to stop unreliable information or chain messages being propagated. LLMs can build on this, fact-checking content or prioritising it for manual review.

By rapidly processing large volumes of text, AI can monitor the spread of particular themes in disinformation. This could equip governments, NGOs and any website with user-generated content to create targeted campaigns against certain sources or narratives.
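
As a rough illustration of how that monitoring might look in code, the sketch below tags each post against a fixed list of common climate disinformation themes and queues the most widely shared matches for human review. It again assumes the OpenAI Python SDK; the theme list, model name and share threshold are assumptions made for the example, not a production design.

```python
# Illustrative sketch only: tagging user posts against known climate
# disinformation themes so that spikes can be tracked and widely shared posts
# can be queued for manual fact-checking. The themes, model name and share
# threshold are assumptions made for this example.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

THEMES = ["it's not warming", "it's not us", "it's not bad",
          "solutions won't work", "the science is unreliable", "none"]

def tag_theme(post_text: str) -> str:
    """Ask the model which known disinformation theme, if any, a post matches."""
    prompt = (f"Classify the post into exactly one of these themes: {THEMES}. "
              f"Answer with the theme only.\n\nPost: {post_text}")
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip().lower()

def review_queue(posts: list[dict], share_threshold: int = 1000) -> list[dict]:
    """Count theme frequency and prioritise widely shared posts for human review."""
    trend = Counter()
    queue = []
    for post in posts:
        theme = tag_theme(post["text"])
        if theme != "none":
            trend[theme] += 1
            if post["shares"] >= share_threshold:
                queue.append({"post": post, "theme": theme})
    # The most widely shared suspect posts are checked by a human first
    return sorted(queue, key=lambda item: item["post"]["shares"], reverse=True)
```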

Shaping society

LLMs can explain complex scientific, economic or political concepts to ordinary people, making research that may seem opaque more accessible. Whether in a specific context or in support of wider education, helping people analyse potential false narratives more critically is essential to combatting the wider problem.

LLMs may ‘prebunk’ disinformation, priming people with facts before they encounter it, making them less susceptible. As disinformation rises, such a vaccine may prevent an epidemic.

The dark side

Unfortunately, there is another side to this discussion. AI can play a dangerous role in creating and helping to spread disinformation. Social media sites are rife with deepfakes and AI-generated content that convinces many of false narratives.

There is already evidence of this double-edged sword being used by nation states to push propaganda. AI agents can engage with content, reacting, commenting and sharing via fake accounts to boost it in recommendation algorithms.

There is also a risk that hallucinations lead a model to cite poor sources, or even to offer an incorrect counter narrative that is easily debunked. Evidence suggests talking to LLMs can help shift someone’s opinion, which is particularly dangerous if the model is confidently spouting false information.

AI models also carry the biases of the people who create and train them, which shapes their viewpoints. Reports from outlets including Scientific American indicated that Grok, a chatbot created by Elon Musk’s xAI, recited fringe climate talking points. Given the authority that people are starting to place in chatbots, this could not only propagate false narratives, but even help to persuade people of their validity.

AI can bolster our fight against AI-generated disinformation. The same techniques used to flood timelines with falsehoods can counter them. Informing people of the facts and tracking trends in disinformation could be critical to quickly preventing its spread. In the same way that AI agents can boost false content, they could also respond in-situ, directing people to reliable sources at the point of exposure.

The wider climate

There are obvious applications beyond climate change, especially given that politics is becoming increasingly partisan and divisive. Climate is not the only issue where powerful businesses, individuals and governments want to set the narrative. Disinformation will continue to be produced, now at the speed of AI.

We should all be concerned about the potential for misinformation and disinformation to lead to election interference, life-threatening anti-vaccine rhetoric and even dangerous conspiracy theories like QAnon or Pizzagate that lead to real-life violence.

Another tragic consequence of increased false information is a ‘liar’s dividend’, where people struggle to identify the truth in all the noise. Politicians and others can take advantage of this to attack inconvenient truths as fake news. This erodes trust in official sources, and even in science in general.

There are no easy answers to fighting false narratives. It will require sustained effort from each of us to push back, and we will probably end up having some uncomfortable conversations. But LLMs can prove a valuable ally: bolstering valid narratives, identifying and preventing the spread of misinformation, and tracking down and discrediting sources of disinformation.

All of us should be concerned about the rising tide of fake news and the potential harm it could cause. AI could make the problem worse. But, when used correctly, it could also be part of the solution.

Timothy Clark is a full-stack software engineer, Chair of the BCS Early Careers Executive and Chair of the BCS Preston and District branch. He is also a regular ITNOW columnist. Contact Tim via LinkedIn.