BCS were in the room for the latest All-Party Parliamentary Group (APPG) meeting on AI this week; it was a genuinely fascinating discussion on AI in the context of military and defence. Evidence was presented by representatives from the Armed Forces, industry and academia and dealt with the geopolitical situation, ethical considerations and the UK’s current capabilities.
Geopolitics and Sovereignty
A packed room above the House of Lords heard a special contribution from Dariusz Standerski, PhD, a representative of the Polish Ministry of Digital Affairs, who claimed that he wasn't scared of drones, but he was scared of algorithms.
Mr Standerski stressed the strategic vulnerabilities of European technology: a dangerous reliance on US big tech and an inability to compete with Russia and China on algorithmic narratives, stating that "we don't have the tools to counter foreign influence."
One way in which Poland has taken action is by investing in sovereign national infrastructure, such as that which underpins their Digital ID card system. This, Mr Standerski claims, can only be done safely because of the existence of domestic tech companies and targeted state interventions. As for big tech in Poland, while their influence may be significant, their accountability and collaboration are not: the nearest 'X/Twitter' staffer lives in… Berlin.
There was consensus across the panel and in the room on the need to invest in sovereign AI capabilities; the answer to interoperability, claimed Colonel Hugh Eaton, "isn't to buy other people's stuff".
The Ethical Question
A highlight in the evidence was the perhaps counterintuitive argument that Lethal Autonomous Weapons (LAWs) could actually be more ethical than their human equivalents. This was a contribution from Professor David Whetham, Professor of Ethics at King's College London, whose research found that LAWs demonstrated a stronger adherence to British Army values than British soldiers, without compromising military effectiveness.
Liberty Hunter, a researcher at the Ax:son Johnson Institute, claimed that AI can add operational capacity to an overstretched Royal Navy, helping to protect key national and international assets such as undersea cables.
Rob Bassett Cross, CEO of Adarga, a defence-focused British artificial intelligence software company, endorsed a more data-driven approach within Defence, saying that Britain has the most complex and comprehensive AI ecosystem in the world, yet without its own infrastructure. Building that infrastructure, he said, would require greater collaboration with industry.
More Skills, More Training
A key ethical and philosophical question was raised about accountability for ill-defined 'autonomous' weapons systems. Who is responsible when an AI system goes wrong: the frontline soldier? The commanding officer? The developer? The engineer? The question put paid to the idea of AI as a quick way around a problem, and means that those operating these systems need more training, not less.
The session highlighted that AI in defence is not a shortcut. Complex systems demand clear lines of accountability from design and development all the way through to operations, as well as more rigorous training. For BCS, this underlines the importance of professionalism, standards, and skills development in any high-impact application of AI.