The latest BCS CIO Network meeting in London provided a platform for C-suite executives, board members and senior digital & tech leaders to discuss and debate strategic AI adoption across their organisations, and the UK as a whole.
Following the event, key speakers appeared in a special edition of the Gem of All Mechanisms podcast discussing how AI use can be transformed into real organisational impact as well as sharing their thoughts on the day.
Listen to the podcast here: Rethinking AI strategy for the enterprise
Opening address
The event began with an introductory interview with new BCS Group Chief Executive Sharron Gunn FCA MBCS, conducted by Claire Penketh, BCS Senior Policy and Public Affairs Manager.
‘As someone with a background in chartered accountancy and leadership roles at organisations like Sainsbury’s and ICAEW, I’ve spent much of my career at the intersection of professional development, digital transformation, and commercial growth in the UK and internationally’, Sharron began. ‘Over the years, I’ve often been called in when IT projects were in trouble, working alongside tech teams to get things back on track.’
Sharron went on to recount experiences that opened her eyes to the transformative potential of technology, across law, finance, and insurance.
‘What struck me most’, she continued, ‘was the recurrence of some key questions. How do we prepare the next generation? How do we evolve our training and education models to reflect a world where careers shift every 18 months, and how do we bring tech into the boardroom, not just the IT department?’
Sharron stressed the importance of rethinking how we support professionals from the start of their journey, through apprenticeships, early memberships, and continuous learning.
‘Technology is central to every business, but it must be used responsibly. This means protecting young users, supporting teachers, and ensuring lifelong digital skills development.’ Sharron concluded, ‘At BCS, we have an opportunity to shape the future; to support careers, influence policy and champion ethical, inclusive innovation. It’s an incredibly exciting time to lead, and even more exciting to collaborate with all of you.’
Unlocking value beyond hype
The day’s panel took to the stage next. Hosted by BCS Director of Corporate Engagement, Julie Bailey, the panel comprised Leanne Allen, UK Head of AI at KPMG, Dr Bill Mitchell OBE FBCS, BCS Influence Board Member, and Sean Rafter, Head of Curriculum Development, Data and AI at BPP.
A theme that emerged right from the start of the discussion was the disconnect between businesses’ urgency to adopt AI and their understanding of its strategic value. According to a recent survey cited during the session, 64% of CEOs admit to investing in AI to avoid falling behind, despite lacking clarity on its actual impact.
Leaders warned against the temptation to view AI as simply another line item in a digital transformation roadmap. Instead, AI demands a fundamental shift in business models, operational thinking, and organisational culture.
‘This isn’t just a platform conversation, it’s a business transformation challenge.’
The discussion emphasised the distinction between analytical AI, which is focused on finding correct answers, and generative AI, which aims to produce convincing, human-like outputs. Misunderstanding this difference can result in flawed strategy and misplaced expectations.
A key piece of advice from this initial discussion was that we need to be careful how we think about AI, namely, to avoid anthropomorphising it; essentially attributing human traits, emotions, or intentions to a non-human entity.
‘These are statistical tools, not decision-makers. Framing AI as a thinking entity can lead to overreliance and poor governance.’
Managing ROI expectations
With pressure mounting to deliver short-term returns, many organisations are scaling proofs of concept prematurely. The panel highlighted that scalability, trust, and architecture are often the true barriers, not the technology itself.
One speaker illustrated the power of showing real-time applications to build internal buy-in:
‘When teams see how something like Copilot can be used on a task relevant to their role, trust starts to grow. But people need to be brought along on the journey.’
The clear takeaway was that AI is not plug-and-play. Real ROI comes when companies balance speed with the often slower yet necessary work of building capability, trust, and governance.
On the theme of how to continue innovating during uncertainty, the consensus was to keep moving forward, but strategically. Responsible AI adoption requires measured steps and leadership capable of aligning AI use with ethical standards and risk frameworks.
Leadership and culture
Speakers unanimously agreed: AI adoption will rise or fall on the strength of organisational culture. Several ‘cultural red flags’ were identified, including:
- Treating AI like a ‘quick-fix silver bullet’
- Deploying AI tools without preparing or involving the workforce
- A lack of clear understanding of what ‘responsible AI’ involves
A recurring insight was the importance of trust, not only between leaders and employees, but between organisations and the public. Trust becomes especially important when workforce skills are being restructured around AI.
‘CIOs should build a culture of experimentation, but with guardrails. Let employees safely ‘play’ with these tools while you invest in structured development.’
An AI-literate leadership team, it was suggested, must demonstrate the following:
- Clear understanding of different types of AI and their limitations
- Ability to communicate failure modes and mitigation strategies
- Familiarity with the principles of responsible AI and regulatory expectations
AI as an engine for growth
A discussion on growth focused on AI’s potential to unlock competitive advantage, with references to major economic projections: McKinsey forecasts a 22% GDP uplift by 2030, while Microsoft estimates AI could contribute £550 billion to the UK economy by 2035.
Yet, speakers cautioned that the ‘race to adopt’ can create more noise than value, especially when implementations are not built on sound data foundations. The biggest opportunity lies in agile, multi-model data environments that enable organisations to build AI workbenches, test capabilities, and scale responsibly.
‘Data must be seen not as a compliance burden, but as an innovation asset.’
Data bias was raised as another important consideration. Organisations must take deliberate action to ensure their training datasets are representative, clean, and ethically sourced, or risk embedding discrimination into their algorithms.
Talent pipelines and workforce readiness
The panel explored how apprenticeships and upskilling initiatives could help prepare the next generation of AI talent. The Apprenticeship Levy was recognised as a useful funding mechanism.
But it was also explained that transformation isn’t something that happens overnight, particularly when it involves bringing people along on the journey from a skills perspective:
‘You can train 7 million people in an hour, but that doesn’t make them skilled. Real transformation takes time, and depth.’
Beyond technical skills, the panel concluded that organisations must confront data maturity challenges. Many leaders still struggle to assess, or even admit, their own data literacy gaps.
The advice was practical: start now, even if your AI ambition is paused. Investing in data literacy, infrastructure, and ethical awareness lays the groundwork for later success.
Governance, resilience and security
The conversation turned to the operational realities of deploying AI at scale. While AI can drive efficiency, service resilience and cybersecurity must not be compromised.
Leaders discussed the trend toward hybrid cloud and on-prem solutions to mitigate cyber risk. Additionally, some warned of operational blind spots where AI is integrated too quickly into core systems without adequate safeguards.
A consistent theme was the need to rebrand governance as a strategic enabler, not a blocker. Strong governance enables organisations to communicate transparently with regulators, employees, and shareholders, particularly when failures occur.
‘We need to be able to fail gracefully. That’s part of responsible innovation.’
Closing reflections
As the event drew to a close, the panel were thanked for their time and asked to leave the audience with one piece of advice. Their responses captured the tone of the day:
- ‘Invest in data’ - Data maturity is a prerequisite for meaningful AI progress.
- ‘Treat AI with exceptional care’ - As with any powerful technology, misuse can undermine trust and long-term value.
- ‘Focus on people’ - Culture, trust, and inclusion are as important as platforms and models.