Charles Jackson, Solutions Architect at Accenture, talks to Grant Powell MBCS about how AI is reshaping cybersecurity, the emerging risks enterprise leaders must prepare for, and why dependency-aware recovery will define the next decade of resilient architecture.

Summary:

  • Understanding data flows and recovery priorities is key to making automated recovery viable after real-world incidents
  • AI is rapidly expanding the attack surface available to bad actors, while the rise in ‘shadow AI’ is increasing organisational vulnerability
  • As AI has bred more sophisticated attackers, the focus is now on effective containment, resilience and recovery rather than ‘perfect prevention’
  • Security architecture will shift towards policy-driven models centring identity and authorisation, while cyber recovery will be validated continuously rather than periodically

Modern cybersecurity demands more than traditional defensive thinking — it requires a deep understanding of how systems interact, where dependencies create risk and how automation can strengthen resilience. As AI accelerates both opportunities and threats, organisations must move toward adaptive, policy-driven architectures, stronger identity controls and governance models with clear ownership. The core lesson is simple: to stay secure in a fast-moving environment, organisations need resilient, dependency-aware designs that support rapid, reliable recovery and reduce reliance on human judgment during moments of crisis.

Please introduce yourself and provide some background on your career.

My name is Charles Jackson, and I’m a Solutions Architect at Accenture working at the intersection of digital transformation and cybersecurity. I support government departments and enterprise organisations as they modernise and secure their technology estates. I actually started as an electrical and electronic test engineer, but quickly realised it wasn’t the direction I wanted long term. That led me into IT, starting in an entry-level role and gradually moving through positions in software analysis, software implementation, DevOps, pre-sales consulting and technical consulting.

Spending nearly eight years in one organisation gave me exposure across different teams and disciplines. That experience helped me understand the broader technology ecosystem and identify the areas I truly enjoyed. During that time, I completed a master's degree in information security and digital forensics, enabling me to specialise further. All of those steps — exploration, learning, and refining my interests — ultimately brought me into secure architecture, which is where I am today.

You’ve worked across multiple industries designing secure cloud and enterprise architectures. Can you share a recent project where cybersecurity and AI played a major role, and what made the solution innovative or challenging?

A good example is a recent cybersecurity recovery project I worked on, where the goal was to deliver a policy-driven recovery capability across multiple business domains. Unlike traditional backup models, this solution shifted towards automated, orchestrated recovery, enabling systems to be restored in the correct order based on real business dependencies.

One of the biggest challenges was that most of the dependency knowledge simply wasn’t formally documented. To address this, I led deep technical sessions to understand the true data flows, system integrations, hosting dependencies and recovery priorities. This effort was essential to make automated recovery viable in a real-world incident scenario, particularly in the context of ransomware, where time and sequence matter immensely.

The innovation wasn’t just about the tooling; it was the move from human-driven guesswork to dependency-aware recovery, backed by structured governance and resilience-cost optimisation.
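The ordering problem described here, restoring systems only after everything they depend on, is essentially a topological sort of the dependency graph. The sketch below is purely illustrative (the service names and dependency map are invented, not from the project) and uses Python's standard-library graphlib:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each system lists the systems it
# depends on, so dependencies must be restored before dependants.
dependencies = {
    "identity": [],
    "database": ["identity"],
    "message_bus": ["identity"],
    "app_tier": ["database", "message_bus"],
    "web_frontend": ["app_tier"],
}

def recovery_order(deps):
    """Return a restore sequence in which every system appears
    after all of the systems it depends on."""
    return list(TopologicalSorter(deps).static_order())

print(recovery_order(dependencies))
```

In a real estate the graph would be derived from the documented data flows and integrations, and an orchestrator would execute each restore step in this order rather than relying on operator memory.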

As AI becomes embedded in more enterprise systems, what challenges are you seeing emerge in cloud environments?

A major issue right now is the speed at which AI is expanding the attack surface. AI tools increasingly connect to emails, documents, ticketing systems and data platforms. If identity and access controls are weak, organisations can unintentionally expose sensitive information through the AI connectors they adopt.


Another emerging risk is around prompt and context security. What the model is exposed to, or manipulated into seeing, can directly affect how it behaves. If attackers can influence context, they can influence outputs.

I’m also seeing a rise in shadow AI, where teams adopt tools without the proper security review. This creates significant problems for data governance, compliance and auditability. In some cases, the organisation isn’t even aware of which data is flowing through which AI systems.

How has the rise of AI-driven cyberattacks changed the way you approach architectural design and risk modelling?

The biggest shift for me is that I now assume attackers are both faster and more automated than before. They have access to many of the same AI capabilities we do. That means spending less time designing for perfect prevention and more on rapid containment and resilient recovery. I also place much greater emphasis on strong approval workflows and out-of-band verification, especially as deepfake-enabled social engineering becomes more sophisticated.

In practice, I’ve leaned heavily into dependency-aware recovery. Under pressure, especially during an attack, manual processes tend to fall apart. Clear documentation and automated orchestration become essential, so recovery doesn’t rely on human memory or assumptions.

From your perspective, what are the biggest gaps organisations still face in AI governance? How should they rethink security controls?

The biggest gap I see is unclear ownership. AI often sits between security, data and product teams. Without a single accountable owner, controls degrade very quickly.

Data governance is another area that hasn’t yet kept pace with AI, particularly across the connector and retrieval layers. Everyone is excited about pulling data from different systems, but if governance isn’t embedded, that flexibility introduces long term risk.

Many organisations do have policies in place to some degree, but they struggle to translate those policies into enforceable technical controls. AI systems should be treated like critical business services, with clear ownership, auditability and well-defined failure modes. Without that mindset, they remain fragile.
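One way to turn a written policy into an enforceable control is a simple automated check run against connector configurations. This is a minimal, hypothetical sketch; the connector names, scope strings and allow-list are invented for illustration:

```python
# Hypothetical least-privilege policy: AI connectors may only
# request scopes on this allow-list.
ALLOWED_SCOPES = {"read:tickets", "read:docs"}

# Invented example inventory of deployed AI connectors.
connectors = [
    {"name": "ticket-summariser", "scopes": {"read:tickets"}},
    {"name": "mail-assistant", "scopes": {"read:mail", "send:mail"}},
]

def policy_violations(connectors, allowed=ALLOWED_SCOPES):
    """Return the names of connectors requesting any scope
    outside the allow-list."""
    return [c["name"] for c in connectors if c["scopes"] - allowed]

print(policy_violations(connectors))  # ['mail-assistant']
```

Wired into a deployment pipeline, a check like this blocks or flags a connector before it goes live, which is the difference between a policy document and an enforceable control.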

Looking ahead five to 10 years, how do you see the convergence of cybersecurity and AI reshaping enterprise architecture, and what new roles or capabilities will security teams need?

I see cybersecurity and AI becoming deeply intertwined. AI will increasingly act as both a defensive capability and a threat vector. Architectures will shift towards adaptive, policy-driven security models where identity and authorisation form the core fabric. Cyber recovery will also become a continuously validated capability, rather than something tested only during planned exercises.

This shift will create new roles, including AI security architects, model risk specialists and cyber resilience engineers, who will blend deep technical knowledge with governance, risk, and process expertise.

What guidance would you give to CIOs and CTOs who want to adopt AI-driven security tools but are struggling to balance innovation with risk, compliance and legacy technical debt?

My advice is to focus on outcomes, not tools. Start by using AI to augment people, especially in areas like alert triage and prioritisation. Only later should organisations move towards automation. Put guardrails in place early: least privilege principles, strong approval processes and robust audit logging.
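Those guardrails can be made concrete in a few lines. As a hypothetical sketch, assuming an AI-suggested action must pass a human approval gate and every decision is written to an audit trail (the function and field names are invented):

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def execute_with_approval(action, approved_by=None):
    """Run an AI-suggested action only if a named human has
    approved it; record every decision in the audit trail."""
    entry = {
        "action": action,
        "approved_by": approved_by,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if approved_by is None:
        entry["outcome"] = "blocked"
        audit_log.info("blocked: %s", entry)
        return False
    entry["outcome"] = "executed"
    audit_log.info("executed: %s", entry)
    return True
```

The point of the pattern is that the default is refusal: an unapproved action is blocked and logged, so the audit record captures what the AI attempted as well as what it was allowed to do.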

Leaders should also align AI adoption with measurable resilience outcomes, such as improved detection speed or faster recovery. And when dealing with legacy estates, use control overlays first, rather than attempting big bang transformations all at once. Incremental progress is far more practical and far less risky.