On 19 May 2025, BCS’ Information Security Specialist Group (ISSG) held a hybrid event at the BCS London offices, featuring a variety of speakers discussing the impact of disinformation on everything from financial markets to global democracy. The event especially welcomed Yevhen Vladimirov from the International Cybersecurity University Ukraine.

Covering topics from BCS’ policy recommendations to financial forensics, the evening built up to a fascinating talk from Ukraine’s Yevhen Vladimirov on what his country has learned about understanding and combatting disinformation.

Truth matters

BCS’ Senior Policy and Public Affairs Manager, Claire Penketh, opened the event with a talk on the potential of disinformation to undermine democratic processes. She cited a BCS members’ survey taken before the last UK general election, which found that 65% of members believed AI-generated disinformation could influence election outcomes, and that 92% supported the idea that political parties must publicise their use of AI in campaigns. The survey also found that only 8% of members felt confident that major tech companies would honour the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, their agreement to take reasonable measures against AI-generated disinformation. Claire also discussed the Turing Institute’s latest report, AI-Enabled Influence Operations: Threat Analysis of the 2024 UK and European Elections, which suggests deepfakes and disinformation have ‘deeply concerning’ potential to undermine democracy, as even satirical content contributes to general cynicism and political disengagement.

She commented that while BCS’ support of Ukraine remains unwavering, what has changed is how disinformation has strengthened the toolboxes of bad actors such as Russia, threatening not just Ukraine but democracy worldwide. She outlined in broad strokes BCS’ recommendations to the UK Parliament, including improving public education, strengthening the Online Safety Act to specifically address disinformation, and investing in AI-powered tools to track foreign disinformation campaigns.

Claire ended her talk by emphasising the importance of truth, saying ‘this is an arms race’, and encouraging everyone to be vigilant and focus on developing strong defences against disinformation.

The speed of deception

Steve Sands, Chair of the ISSG, reflected on the extreme difficulty of combatting mis- and disinformation due to the speed at which it travels, quoting Mark Twain: ‘A lie can travel halfway around the world while the truth is still putting its shoes on.’ He commented that with the advent of disinformation and deepfakes this only rings truer, saying: ‘Today, lies travel at the speed of milliseconds. Whether it’s misinformation or disinformation, the velocity is alarming.’

"Today, lies travel at the speed of milliseconds. Whether it’s misinformation or disinformation, the velocity is alarming."

Steve reiterated BCS’ support for Ukraine and policy recommendations as outlined by Claire, highlighting the importance of improving public education and digital literacy, strengthening technical solutions, enforcing platform accountability, and addressing foreign state interference. He urged attendees to read the full BCS policy documents, and highlighted previous BCS events and materials focused on disinformation such as Disinformation Kills: Now What Are We Going to Do About It?, and The New Frontier: Cybersecurity and Disinformation in Russia’s Invasion of Ukraine as useful resources.

Framing the disinformation crisis

Mike Lloyd opened his presentation by positioning Ukraine as a global leader in the fight against disinformation, saying that although disinformation is a global issue, for Ukraine it is ‘a matter of life, death and existence as a state’. He was careful to point out that disinformation as a weapon is nothing new, citing Augustus’ smear campaign against Marc Antony as just one example; the difference today is the speed at which disinformation travels, with algorithmic amplification allowing false stories to spread up to six times faster than accurate ones.

Mike also outlined some statistics highlighting the real-world impact of disinformation:

  • Global vaccine disinformation costs between $1.2 trillion and $8 trillion a year
  • In India’s 2019 general election, an estimated 25% of political news in circulation was fake
  • Far-right disinformation originating both within and outside the UK significantly contributed to the 2024 riots
  • Deception-based cybercrime is worth an estimated $8 trillion a year

Mike presented two key issues in combatting disinformation. Firstly, fact checking is not only slow but often ineffective: people’s willingness to believe disinformation has far more to do with emotion and social identity than with logic. Secondly, disinformation is lucrative for the platforms that share it; platforms including Facebook and X earn an estimated $2.6 billion a year from advertising around sensationalised content. While some progress is being made, such as new EU legislation aimed at preventing platforms from profiting from disinformation, we ‘currently have more questions than answers’.

AI: accelerating the disinformation crisis

Helen Oluyemi, Security Lead at Pollinate, spoke about the role of AI in accelerating the information crisis, naming disinformation as ‘one of the most pressing threats to democracy today’. She highlighted how challenging it is to determine the truth in a world flooded with conflicting information, and framed a key question: how do we defend truth in a rapidly evolving digital world?

‘The battle for trust is now fought in pixels, algorithms, and attention spans’, reads one of her slides. Helen outlined how AI is worsening the disinformation problem in two key ways:

  • AI enables hyper-personalisation of disinformation, amplifying confirmation biases and making it harder for individuals to discern objective truth. This accelerates polarisation and deepens social divides.
  • AI bots can flood platforms with disinformation in minutes, overwhelming human-based, reactive fact-checking and verification models; defenders therefore have to use AI-powered anti-disinformation measures and tools (a toy illustration follows this list).
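
Helen’s second point implies a shift in tooling: triage must happen at machine speed, with humans verifying only the highest-risk content. As a purely illustrative sketch in Python (not a tool anyone at the event described), the snippet below trains a tiny classifier on previously debunked posts and scores new ones so that reviewers see the riskiest first; the training data, features and model are all toy assumptions.

    # Toy triage sketch: rank incoming posts for human review.
    # Everything here is illustrative; a real system would use far
    # richer signals (account behaviour, propagation patterns, etc.).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical labelled examples: 1 = previously debunked, 0 = benign.
    posts = [
        "BREAKING: secret memo proves the blackout was planned, share now!!!",
        "Council confirms weekend road closures on the A40",
        "They don't want you to know the vote was already decided",
        "University publishes peer-reviewed study on crop yields",
    ]
    labels = [1, 0, 1, 0]

    vectoriser = TfidfVectorizer(ngram_range=(1, 2))  # words and bigrams
    model = LogisticRegression().fit(vectoriser.fit_transform(posts), labels)

    # A higher score sends the post to human fact checkers sooner.
    incoming = ["URGENT!!! leaked files show the outage was no accident"]
    score = model.predict_proba(vectoriser.transform(incoming))[0][1]
    print(f"review-priority score: {score:.2f}")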

Ultimately, she explained, generative AI is supercharging the creation of fake content, empowering attackers to produce ever more sophisticated threats and fakes. Those defending against disinformation will always be a step behind because they are bound by ethical constraints, making the playing field uneven. However, Ukraine’s front-line AI propaganda-resistance models are helping to shape global best practice, and Helen advocated a proactive rather than reactive approach to disinformation, offering two key questions for the road ahead: what tools are best for defenders, and how do we make them accessible, ethical and scalable?

Financial forensics in the war on disinformation

Chris Lewis, Head of Solutions at Synectics Solutions, opened his talk by comparing how long new communication technologies have taken to reach 100 million regular users globally. The telephone took 75 years to reach that milestone, the mobile phone 16 years and the internet itself seven; TikTok took only nine months, and ChatGPT just 45 days. This pace of adoption means people’s personal information is available, and exploitable, like never before.


Chris explained that the UK public sector alone loses between £33 billion and £59 billion to fraud annually, the equivalent of one HS2 project per year. He told us that over 40% of reported crimes in the UK are fraud, and that more than 70% of those involve a cyber component. Furthermore, synthetic online identities have risen by 300% in the last three years. Social engineering, or deception, is the biggest tool perpetrators have at their disposal.

The tools at the financial sector’s disposal, Chris explained, include data intelligence and sharing, AI-enabled point-of-application screening and transaction monitoring, and transaction traces and audit trails. One interesting avenue currently being explored is how to bring the concept of marked bills, a technique once used to bring down money-laundering operations, into the modern era: can digital transactions be marked in such a way, and if so, how?
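
Chris did not prescribe a mechanism for digital marked bills, and it remains an open question. One conceivable approach, sketched below purely as an assumption, is to attach a keyed cryptographic tag (an HMAC) to a seed transaction so that only investigators holding the key can later recognise records that trace back to it.

    import hashlib
    import hmac
    import json

    # Hypothetical key held only by investigators; a real scheme would
    # need key management, privacy safeguards and regulatory sign-off.
    SECRET_KEY = b"investigators-only-demo-key"

    def mark(txn: dict) -> dict:
        """Attach a covert, verifiable marker to a seed transaction."""
        payload = json.dumps(txn, sort_keys=True).encode()
        marked = dict(txn)
        marked["marker"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return marked

    def carries_valid_marker(txn: dict) -> bool:
        """Downstream check: does this record carry a genuine marker?"""
        marker = txn.get("marker")
        if marker is None:
            return False
        payload = json.dumps(
            {k: v for k, v in txn.items() if k != "marker"}, sort_keys=True
        ).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(marker, expected)

    seed = mark({"id": "txn-0001", "amount": 500.00, "payee": "acct-99"})
    print(carries_valid_marker(seed))  # True: this digital 'bill' is marked

A static tag like this would not survive funds being split or moved between accounts, which is exactly the kind of gap the sector’s exploratory work would have to close.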

Like Helen, Chris remarked that one of the biggest problems the financial sector faces in combatting disinformation is that good actors are constrained by ethics, regulation and responsibility to their customers. He stated that although there has been more positive progress against cyber threats in the past two years than in the preceding decade, it is still glacial compared with how fast the threat develops.

Ukraine’s advances in combatting cyber warfare

The highlight of the evening was the presentation given by Yevhen Vladimirov, offering a comprehensive overview of what cyber warfare is and how it can be combatted. Yevhen opened by demonstrating that Ukraine is the first country in history to have had to defend itself in all five NATO-recognised domains simultaneously: land, sea, air, space and cyberspace (cyberspace having been recognised in 2016).

Russian cyber attacks on Ukraine began in 2014, long before the full-scale invasion of 2022; in 2015 and 2016, for example, attacks targeting the energy sector left millions of Ukrainians without power, and the infamous Petya/NotPetya attack of 2017 caused administrative chaos and political instability. Attacks have only increased since the invasion: in 2022, 2,192 Russian cyber attacks were launched against Ukraine, and in 2025 there have already been over 4,000.

Yevhen presented three key lessons Ukraine has learned about how cyber warfare develops:

  • It starts in peacetime with influence and disinformation
  • It escalates through cyberattacks on critical infrastructure
  • It becomes kinetic — but never loses its cyber dimension

Setting out Ukraine’s next steps in defending itself against cyber warfare, Yevhen explained a new initiative: the Disinformation Indicators Sharing Platform (DISP). The inspiration is the existing Malware Information Sharing Platform (MISP), which is already in active use by the EU, NATO, various governments and banks, and was adopted by Ukraine in 2017. MISP is the global standard for sharing cyberattack indicators; open, community-driven and formally structured, it creates a repository of knowledge that makes malware detection and mitigation faster and easier. DISP would follow the same principles of unified terminology, interoperability and quality control, and would follow the same process (a minimal sketch of a DISP record follows the steps below):

  1. Detection of disinformation by trained citizens via OSINT
  2. Submission through a dedicated form, with users providing evidence such as links or geotags
  3. Verification by moderators and AI tools 
  4. Sharing on a protected database accessible to journalists, educators, researchers, public awareness campaigns and government agencies
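
To make the MISP parallel concrete, here is a minimal sketch of what a single DISP submission record might look like. DISP is a proposal rather than a published standard, so every field name below is an assumption derived from the four-step process above, not an actual schema.

    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone
    from typing import Optional
    import json

    @dataclass
    class DisinfoIndicator:
        """One DISP submission; all field names are illustrative."""
        source_url: str                        # link submitted as evidence
        narrative: str                         # controlled-vocabulary tag
        geotag: Optional[str] = None           # optional location evidence
        submitted_by: str = "trained-citizen"  # OSINT contributor role
        verified: bool = False                 # set by moderators / AI tools
        observed_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    record = DisinfoIndicator(
        source_url="https://example.com/suspect-post",
        narrative="infrastructure-panic",
        geotag="50.45,30.52",
    )
    # Structured, machine-readable form ready for a shared, protected database.
    print(json.dumps(asdict(record), indent=2))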

The goal, Yevhen explained, is to ‘create a digital environment and digital ecosystem where disinformation is systematically recorded, classified and leveraged for training, response and policy development.’

He characterised DISP as a strategic response to hybrid threats which focuses on anticipation over reaction, increases civic empowerment and centralises knowledge.

Yevhen ended on a simple, powerful sentiment: war can be prevented if we win the information frontline.