Artificial intelligence (AI) isn’t necessarily ‘new’, but new legislation from the EU and guidance from the Ministry of Defence (MoD) have renewed excitement around the technology.

BCS recently welcomed speakers from the defence sector for a lecture on developing AI to support robotic autonomous systems in the battlespace, covering:

  • Applying AI for real-world challenges in defence: why the sector can be quick to jump to AI as the answer, rather than first asking what problem it is trying to solve.
  • The future of AI: why we must ensure there is complete trust in the data that is powering AI technologies, and how to achieve that.
  • AI on the battlefield: real-world use cases for both weapons-based applications and 'behind the scenes' support to aid decision making in the field.
  • AI ethics and challenges for defence: because the current technology isn't capable of handling something as complex as the 'kill chain'.


To introduce the event, we welcomed Major General Collyer, Army Director of Information.

AI in the defence sector

AI is the hot topic on every organisation's agenda – including in EU policy circles, where the EU AI Act, the first law on AI by a major regulator anywhere, is coming into effect. The new law assigns applications of AI to three risk categories:

  • Category 1: applications and systems that create an unacceptable risk are banned – for example, government-run social scoring of the type used in China.
  • Category 2: high-risk applications are subject to specific legal requirements – for example a CV-scanning tool that ranks job applicants.
  • Category 3: applications not explicitly banned or listed as high-risk are largely left unregulated.

But exactly what is AI?

Defining AI is perhaps the biggest problem in the field, because there are hundreds of different definitions. That ambiguity creates confusion about how AI works and gives rise to the idea that AI is some mystical force that can solve every problem.

At one extreme end of the scale, you have your Terminator scenario where the machines rise up to fight back against humans. At the opposite (and more realistic) end are the narrow, specific use cases.

Because there is no single, commonly agreed definition, it’s important to understand and communicate what you mean during an AI initiative to bring clarity to the project. The MoD recently published its starting framework – the Defence Artificial Intelligence Strategy – which covers four core objectives to make it "the world’s most effective, efficient, trusted and influential defence organisation for [its] size":

  • Objective 1: transform defence into an ‘AI ready’ organisation.
  • Objective 2: adopt and exploit AI at pace and scale for defence advantage.
  • Objective 3: strengthen the UK’s defence and security AI ecosystem.
  • Objective 4: shape global AI developments to promote security, stability, and democratic values.

Throughout the event, BCS welcomed a variety of speakers from the MoD, industry, and academia to consider different views on AI in the battlespace and present their ideas for what we should be thinking about in the future. The first session focused on the challenges we must overcome.

Applying AI for real world challenges in defence

Presented by Geraint Evans, Senior Principal Consultant at DSTL

AI is already in use by the military today. But rather than welcome an army of T-units to the frontline, the technology is better suited to removing the cognitive burden of data processing from the human being and transferring it to the computer system, to speed up or optimise their decision making.

Data is critical to AI. It's also the reason that humans won't be replaced by titanium robots any time soon. AI technologies can ingest and understand vast quantities of data very quickly, but they can't always see data in context.

For example, if you were using an autonomous drone to take out a terror suspect, the data could indicate that the suspect has been identified and verified, and recommend that action be taken immediately. But what the human sees is the suspect's innocent child in the firing line, and they may determine it's not ethical to act.

Therefore, rather than start with a specific AI technology (because AI isn’t always the best answer), it’s better to look at your data and determine the outcome you seek to achieve. This could be automation, smart analytics, computer vision, or machine learning.

Don't ask: "What can AI do?"
Do ask: "What is the problem?" AND "Is AI the right technology?"

“You don’t need to worry about AI. Let the use case determine the technology.”

If AI is the right technology, don’t feel like you need to start coding from scratch every time. Today’s AI technologies may be niche, but if you take an off-the-shelf product you can add code to enhance it. As well as achieving the outcome you’re looking for quickly and efficiently, this helps you control the narrative and ensures you’re not overloading the project with demands.
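
To make this concrete, below is a minimal sketch of the 'off-the-shelf product plus a thin layer of your own code' idea. It assumes a pre-trained text classifier from the Hugging Face transformers library; the model, confidence threshold, and triage logic are illustrative assumptions rather than anything prescribed by the speaker.

```python
# Illustrative sketch: wrap an off-the-shelf, pre-trained classifier with a
# small amount of bespoke code instead of building a model from scratch.
from transformers import pipeline

# Off-the-shelf component: a pre-trained sentiment classifier, used as-is.
classifier = pipeline("sentiment-analysis")

def triage(message: str, confidence_threshold: float = 0.9) -> str:
    """Bespoke layer: accept the model's answer only when it is confident,
    otherwise route the message to a human analyst for review."""
    result = classifier(message)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if result["score"] >= confidence_threshold:
        return f"auto-label: {result['label']}"
    return "escalate to human review"

print(triage("Supply convoy delayed by flooding on route two."))
```

The value here sits in the thin triage layer around the model rather than in training anything from scratch – which is the point about reaching the outcome quickly without overloading the project.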

Assuming that AI is the right technology, our second session went on to look at the driving forces behind AI in the defence sector, and the factors to be cautious of.

The future of artificial intelligence?

Presented by Prof. Nick Colosimo, BAE Systems

Futurism is a challenging area because nothing is certain and there are several possible futures. And while we can imagine 'the rise of the machines', the more probable scenario is that we improve systems engineering – and therefore architectures.

When you think about our world today, there’s a great deal of information online regarding PESTEL factors (political, economic, social, technological, environmental, and legal), we are generating more data from sensors, and the research papers around AI are increasing in quantity and quality.

But data alone isn't enough. You need to know where that data originated to ensure the source can be trusted. Additionally, we need to instil confidence in people so they feel able to challenge industry and academia to verify their claims, leaving no doubt in the data.

Trust is driven by three factors: explainability, reliability, and predictability. So when using AI we need to guard against three corresponding pitfalls:

  • Opacity: we need to be able to explain and unpick the data so that when things go wrong, we understand what happened and why (one way of doing this is sketched after this list).
  • Brittleness: we need to filter through the noise to ensure the outcome is reliable, with the AI interpreting the data the same way a human would.
  • Non-determinism: we need predictability in AI technologies, so any learned behaviours are driven by the right context.
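
As one illustration of tackling the opacity point above, here is a minimal sketch using permutation feature importance from scikit-learn to show which inputs a model actually leans on. The dataset and model are stand-ins chosen purely for the example, and pinning random_state is a small nod to the non-determinism point.

```python
# Illustrative sketch: permutation feature importance as one way to 'unpick'
# an otherwise opaque model. The dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fixing random_state makes the run repeatable, easing the non-determinism worry.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```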

“Because current use cases are so narrow, think about combining different forms of AI to increase the accuracy of the technology.”

From theory to practice, the next session summarised a variety of current use cases for AI in the defence sector, highlighting why the technology is best suited to the individual use case.

AI on the battlefield

Presented by Kisten McCormick, AI Lead/Systems Engineer, General Dynamics Mission Systems

So often when we talk about AI in defence, the immediate thought is weapons-based use cases. But there are lots of applications for AI that wouldn't raise an eyebrow, such as translation, information extraction, speech to text (and text to speech), classification and clustering, and planning, scheduling and optimisation.

“I’m interested in broadening the mind to look at applications in AI so it’s not just a buzzword - if you have data, AI can be applied to any system.”

AI is particularly important within the defence sector because it helps with accuracy and speed. Think about the sheer volume of data that's created every day, and how we need to distil insights to create an actionable outcome. The cognitive load required is too great.

Operators are already burdened under the weight of intelligence gathered – adding more data from sensors and other outlets only makes it harder to understand. AI provides a security of operations advantage: by automating the data analysis, it shortens the time to insight while ensuring no data is lost or forgotten in the process.

This can prove enormously valuable on the battlefield to aid decision making. Humans are the limiting factor in the decision-making process. From their operational environment to personal ethics, fatigue, stress, and level of empathy – it all affects the way we think and act. AI helps by removing those influences from the process.

For example, you can use AI to assist a human operator in the selection of weapons from multiple platforms to targets within highly volatile scenarios. The AI can ingest the data quickly to produce a recommendation, before asking the human operative whether they would like to accept. The AI has sped up the process, while leaving the final decision to the operative.
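
A minimal sketch of that human-in-the-loop pattern is shown below. The Recommendation structure, scores, and prompt are invented for illustration; the point is simply that the system ranks the options quickly while the operator keeps the final accept/reject decision.

```python
# Illustrative sketch of AI-assisted decision support with a human in the loop:
# the system ranks candidate options, a person makes the final call.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Recommendation:
    platform: str
    target: str
    score: float  # the model's estimate of suitability; higher is better

def rank(candidates: List[Recommendation]) -> List[Recommendation]:
    """The system's contribution: sift the options and order them by score."""
    return sorted(candidates, key=lambda r: r.score, reverse=True)

def decide(candidates: List[Recommendation]) -> Optional[Recommendation]:
    """The operator's contribution: accept or reject the top recommendation."""
    best = rank(candidates)[0]
    answer = input(f"Recommend {best.platform} -> {best.target} "
                   f"(confidence {best.score:.2f}). Accept? [y/n] ")
    return best if answer.strip().lower() == "y" else None
```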

Our final presentation took a deep dive into the practical challenges that surround AI in the defence sector, and how to avoid a situation where machine intelligence is replacing human empathy.

AI ethics and challenges for defence

Presented by Prof. Peter Lee, Professor of Applied Ethics, University of Portsmouth

There is no international agreement on what AI is in the military. And one of the most complex areas surrounding the technology is ethics, because ethical decision making is a human function.

In the military, we have the ‘kill chain’, which for a Reaper would be:

  • Prime Minister
  • Defence Minister
  • Chief of the Defence Staff
  • Chief of the Air Staff
  • AOC 1 Group
  • Reaper squadron commanders
  • Reaper operators

It creates a constellation of responsibility as decisions cascade down through the hierarchy. But ultimately, if the Reaper operator doesn’t want to pull the trigger, they won’t. You can’t simply apply the kill chain to AI, because the decision has to flex to the context – something the current technology cannot do.

For example, you couldn't programme the AI with a blanket 'no kill' statement, because there are situations where killing an individual could be seen as ethical if it prevents a worse crime, like mass-murder.

The MoD's Defence Artificial Intelligence Strategy highlights the importance of human centricity, responsibility, understanding, bias and harm mitigation, and reliability.

It’s a welcome development for the industry because we’ve never had this level of guidance before. Best practice has been to choose a specific ethical approach to demonstrate you’ve considered ethics – but even then, each approach results in a different tolerance:

  • Virtue: for the humans within a system that has autonomous elements.
  • Consequentialist: legal and ethical judgements made by comparing the anticipated outcome with the actual AI outcome.
  • Deontological: laws and codes that frame the deployment and use of weapons within AI-enabled autonomous elements.

“What is really needed is relevant and effective education and training. Contracts may protect you legally, but ethically, you’re still culpable.”

Rounding off the event was an insightful Q&A session where the audience was invited to interrogate our presenters. Below is a summary of the discussions.

Q&A session

How does AI apply in highly regulated areas?
If the use case exists to use data, you should use it – even if the platform/system is highly regulated.

There is much hype about the applications for AI, and rightly a lot of excitement for what the future holds. However, just because we want AI to be a success doesn't mean we should be afraid to say "No". At this point in time, we need to take each AI use case proposal and approach it with the right mindset – what is the outcome you're looking to achieve?

Are we in an arms race with China/Russia?
Technology doesn’t always give you the advantage. War is very complex and different countries approach it with a different mindset. For example, in the West the primary unit of focus is the individual, whereas in the East the primary unit is the State – which is why they will sacrifice vast numbers of people to protect the mother country.

In many ways it comes down to ethics. In the UK we have what’s referred to as ‘The Daily Mail Test’ – essentially, if something goes wrong, will it be splashed across tomorrow’s headlines? It’s the media and the general public that set the ethical bar, and then the Government’s aversion to collateral damage that determines how AI is accepted in defence. And ultimately, AI will always be judged to a higher standard than humans are – it’s part of the challenge our sector faces.

How far away is the science from the fiction?
Today’s AI isn't advanced enough to allow us to delegate too much responsibility to the technology. The applications are too narrow, which makes the technologies prone to adversarial attacks where bad actors attempt to 'trick' the AI, for example painting a tank to look like a tree so it's not detected. Additionally, poisoning attacks, where data is injected with vulnerabilities, can affect the accuracy of the outcome.
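
As a toy illustration of the adversarial attacks described above, the sketch below applies a fast-gradient-sign-style perturbation to a simple linear classifier. The weights, input, and step size are entirely synthetic; the only point being made is that a small, deliberately chosen nudge to the input can flip a model's decision.

```python
# Illustrative sketch of an adversarial (FGSM-style) perturbation against a
# toy linear classifier. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # weights of a 'trained' linear classifier
b = 0.1
x = rng.normal(size=5)   # a legitimate input

def score(v: np.ndarray) -> float:
    return float(w @ v + b)

def predict(v: np.ndarray) -> int:
    return int(score(v) > 0)

# Move each feature a small step against the gradient of the class score,
# choosing the smallest step that just crosses the decision boundary.
direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
epsilon = abs(score(x)) / np.abs(w).sum() + 1e-3
x_adv = x + epsilon * direction

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))
print("perturbation size:     ", round(float(epsilon), 3))
```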

Follow a design-led process, and you build resilience into the development so you reach a point where you know the output can be trusted, because you have implemented specific controls along the way.

What education do we need around AI?
One of the most dangerous ideas we need to get under control is the perception that AI technology is like magic and can solve any problem. We even hear politicians make statements like ‘blockchain can solve Brexit’s border issues’. And it comes back to the problem highlighted at the beginning of the lecture:

What is AI?
Thankfully the industry does have influence on what politicians do with AI through groups like the BCS AI community interest group and AI expert groups in Brussels. If we can help the lawmakers to get the fundamentals right, we’ll get everyone speaking the same language and more progress can be made.

Who has the capacity and skill to deliver AI in defence? And do we have the right people?
BCS has run its AI community interest group for 40+ years, but we can’t ignore the war for talent. New graduates emerging from university are in such high demand that they can name their price, which is typically far more than the defence sector can afford.

Therefore, we need to be creative in how we approach future talent. If you can find people with more generic IT skills – like testing, infrastructure, systems architecture, and design-led principles – you can train them to understand how the military works and train them to understand AI so they can speak the language with confidence. Rather than look to hire in specific skills, which are then difficult to retain, we need to invest in creating the right environment to support and upskill people.
