Grant Powell MBCS speaks to Robert Smithies, Discipline Lead for AI and Machine Learning, and Anna Lewis, Head of Data, AI and Technology Solutions at AtkinsRéalis to learn how AI and ontology-driven solutions are reshaping safety case management in defence and beyond.

Summary:

  • Safety cases, critical to risk assurance, have traditionally been labour- and time-intensive 
  • Digital Safety Case enables engineers to make decisions faster by surfacing evidence quickly and consolidating claims, arguments and evidence in one place
  • The model can expand into other critical infrastructure sectors such as transport and energy, improving efficiency and accuracy

Safety cases have long been the backbone of risk assurance in safety-critical industries, but traditional methods are slow, manual and heavily reliant on human interpretation. AtkinsRéalis has taken a bold step forward with its Digital Safety Case initiative, leveraging AI agents and ontology-based frameworks to streamline evidence gathering, improve transparency and accelerate decision making. Here Robert Smithies, Discipline Lead for AI and Machine Learning, and Anna Lewis, Head of Data, AI and Technology Solutions, explain the challenges, innovations and future potential of their award-winning project.

How does the Digital Safety Case improve decision making compared to traditional methods?

Robert: Previously, safety engineers had to exhaustively search through extensive documentation to find evidence supporting safety claims, a time-consuming process prone to ambiguity. The digital solution accelerates this by surfacing relevant evidence quickly and accurately to back up the arguments behind each safety claim. Importantly, it doesn’t generate content or make decisions itself; instead, it consolidates claims, arguments and evidence in one place, enabling engineers to make informed decisions faster. This structured approach also allows for continuous improvement through context engineering workflows that orchestrate dynamic prompting of agents, with performance monitored through classification metrics.
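The claims–arguments–evidence structure Robert describes can be sketched in a few lines. This is a hypothetical illustration, not the AtkinsRéalis implementation; the class and field names are invented for clarity:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str   # e.g. a design document reference
    excerpt: str  # the passage surfaced as support

@dataclass
class Argument:
    text: str
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class SafetyClaim:
    claim_id: str
    statement: str
    arguments: list[Argument] = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim counts as supported only if every argument cites evidence;
        # where evidence is missing, the gap is surfaced rather than hidden.
        return bool(self.arguments) and all(a.evidence for a in self.arguments)

claim = SafetyClaim("C-01", "The cooling system tolerates a single pump failure.")
claim.arguments.append(Argument(
    "Redundant pumps are installed.",
    [Evidence("DES-104", "Two independent pump trains are provided...")],
))
print(claim.is_supported())  # True: every argument cites evidence
```

The point of the structure is that the engineer, not the system, judges whether the consolidated evidence is sufficient.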

What was the biggest challenge in moving from paper-based safety cases to a fully digital system?

Robert: Safety cases are traditionally document-heavy and designed for human readability, which makes automation difficult. Diagrams and prose, as opposed to lists and rules for example, convey meaning to humans but complicate AI processing. The team had to work with these human-readable reports — the ground truth — and impart structure to these in lieu of structured data models already being available. Leveraging AI agents, specifically large language models, was key to overcoming this challenge.

How has the introduction of an AI and ontology-driven approach enhanced safety and compliance?

Anna: Ontologies provide a blueprint of ‘what good looks like’ for safe design and operation.

Robert: This structure enables AI to locate evidence supporting safety claims in design documentation and operating procedures. Ontologies also improve explainability, a critical requirement for responsible AI, by creating an audit trail that shows how conclusions were reached. The combination of AI and ontology ensures model explainability, supports gap analysis, and fosters continuous improvement.
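The audit-trail idea above can be illustrated with a toy ontology. The concepts and relations here are hypothetical, assumed only for the sketch; a real ontology would be far richer:

```python
# A toy ontology: explicit relations between design concepts, the safety
# properties they satisfy, and the evidence types expected to support them.
ontology = {
    "pump_redundancy": {
        "satisfies": "single_failure_tolerance",
        "expected_evidence": ["design_spec", "test_report"],
    },
}

def audit_trail(concept: str, found_evidence: list) -> dict:
    """Return the chain showing how a conclusion was reached
    (claim <- concept <- evidence), plus any expected evidence
    types still missing — a simple form of gap analysis."""
    node = ontology[concept]
    missing = [e for e in node["expected_evidence"] if e not in found_evidence]
    return {
        "claim": node["satisfies"],
        "via": concept,
        "evidence": list(found_evidence),
        "gaps": missing,
    }

trail = audit_trail("pump_redundancy", ["design_spec"])
print(trail["gaps"])  # ['test_report'] — the gap is surfaced, not papered over
```

Because every conclusion is expressed as a traversal of explicit relations, each result comes with the audit trail that responsible-AI explainability requires.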

What measurable benefits have you seen so far, and how do they compare to expectations?

Anna: Safety engineers have reported time savings of around 20%, which is in line with our expectations. The solution avoids false time savings by ensuring accuracy. If evidence isn’t available, then the system states so rather than fabricating information, which builds trust and prevents wasted effort. While some benefits, like improved safety assurance, are harder to quantify, they are significant for long-term risk reduction.

Defence projects often involve complex stakeholder networks. How did you ensure engagement and adoption across such a diverse ecosystem?

Robert: The team focused on collaboration and reassurance. The solution is for decision support, not a replacement of safety engineers. Development began on internal systems using declassified cases to prove the concept before moving to client environments. Integration with tools like Power Apps, used in ongoing work collating safety case data, was prioritised to avoid disrupting established workflows. This approach meant that we could maintain trust and demonstrate value without threatening established roles.

Where do you see the Digital Safety Case model being applied next, and what does this mean for future digital transformation in critical sectors?

Robert: The model can expand into other critical infrastructure sectors, such as transport and energy, and into allied disciplines like model-based systems engineering and reliability engineering to digitally transform entire engineering asset management programmes.

Anna: These areas share similar documentation-heavy processes, making them ideal for ontology-driven AI solutions. The approach could also support project management and requirements mapping, reducing inefficiencies and improving compliance across industries.

How was the AI trained, and what role did existing safety cases play in that process?

Robert: The AI wasn’t trained in the traditional sense of creating a bespoke model from scratch because large language models are pre-trained by vendors; instead, it was configured and fine-tuned using real-world examples to validate performance.


Anna: Initially, the team used mocked-up nuclear safety cases because they were readily available and suitable for proving that the concept would transfer to safety cases in a secure environment.

These cases provided a rich source of structured and unstructured data, allowing the team to test how well the AI could surface evidence and classify the strength of the arguments supporting safety claims. Once the approach was validated, the focus shifted to defence-specific safety cases on client systems.

What role does the EU AI Act, and relevant standards, play in ensuring responsible AI use in safety-critical environments?

Robert: ISO 42001 provides a framework for implementing AI systems responsibly, aligning with obligations under the EU AI Act. While the solution doesn’t control safety-critical functions, it still follows best practices for AI governance. Techniques like structured inputs, dynamic prompting and model explainability ensure outputs are traceable and testable. Validation against existing completed safety cases, and a phased rollout strategy whereby a new version of the application is released to a small, controlled group of power users before full production launch, added further layers of assurance.

Finally, congratulations on achieving a Highly Commended status at the UK IT Industry Awards. What are your thoughts on this?

Robert: The recognition serves as validation for the project and its ability to leverage AI to transform ways of working — a critical need for UK industries facing skills shortages and high labour costs. The fact that the project has been highly commended by a panel of industry experts also helps to build confidence among our stakeholders and supports the case for scaling the solution.

Anna: In addition to celebrating digital transformation success and showcasing the power of advanced tech to improve processes, it is also a testament to the incredible hard work of the project team and their ongoing commitment to innovation in their field.