Phil Hall, technical consultant and owner of Elzware Ltd, considers the use of AI in healthcare with reference to some of his own work, and looks at what still needs to be done to remove current barriers to real innovation in this space.

AI in healthcare is not new. From rule-based expert systems like MYCIN in the 1970s to the rise of neural networks and today’s generative models, the idea of machines helping doctors has been a persistent vision. With each wave of enthusiasm, expectations surged — only to be tempered by the complexity of clinical practice, ethical nuance and systems integration.

We are now entering what is being touted as the most significant era of healthcare AI. From radiology to remote triage, some AI tools are no longer experimental — they are live, embedded and often invisible to the patient.

The promise of AI is not a matter of necessity; there is no great unleashing of data or revolution coming. That is just hype and hyperbole. That said, what we build, how we measure success and who benefits still depend on design choices, trust frameworks and a long-term view of social value. To navigate this, we must balance technical capability with human-centred thinking and bring all parties to the table, not just the hype-meisters obfuscating the naked and partially clothed emperors.

What AI in healthcare looks like now

Elzware’s own contributions highlight the value of focused, relational ‘hybrid’ AI, by which is meant a blend of generative and declarative AI methods. The LANA system, which evolved into mimi.coach, supports the individual training of professionals through reflective, human-guided dialogue grounded in motivational interviewing as a methodology. This is ‘people driven AI’: personal, ethical and human scale.
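
To make ‘hybrid’ concrete, the sketch below shows the general pattern in Python: a declarative, rule-based layer handles well-understood or safety-critical turns deterministically, and only falls back to a generative model otherwise. The rules, intents and reflective fallback are assumptions invented for illustration; they are not the published internals of LANA or mimi.coach.

```python
# Minimal sketch of a hybrid (declarative + generative) dialogue turn.
# Rules and the fallback below are illustrative assumptions only.
import re
from typing import Callable, Optional

# Declarative layer: explicit, auditable pattern -> response rules.
RULES: list[tuple[re.Pattern, str]] = [
    (re.compile(r"\b(chest pain|overdose|suicidal)\b", re.I),
     "This sounds urgent. Please contact 999 or NHS 111 now."),
    (re.compile(r"\b(appointment|booking)\b", re.I),
     "I can't book appointments, but I can help you prepare for one."),
]

def declarative_reply(user_text: str) -> Optional[str]:
    """Return a scripted response if a rule matches, else None."""
    for pattern, response in RULES:
        if pattern.search(user_text):
            return response
    return None

def hybrid_turn(user_text: str,
                generative_fallback: Callable[[str], str]) -> str:
    """Rules first (safety, predictability); generation only as fallback."""
    scripted = declarative_reply(user_text)
    return scripted if scripted is not None else generative_fallback(user_text)

# A stand-in 'generative' model that simply reflects the user's words back,
# in the spirit of motivational interviewing.
print(hybrid_turn("I keep putting off my walks",
                  lambda t: f"It sounds like '{t}' is on your mind. What makes it hard?"))
```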

When thinking about what the NHS needs, given the huge amount of work it does, the propensity to think ‘wow, what a lot of data, I’m sure AI is good to solve all possible problems’ should be updated. How about: ‘there’s a lot of data and there’s a lot of process, so let’s look underneath AI to foundational terms like voice recognition, business process automation, knowledge management or computer vision.’

Regulation, accuracy and governance of AI in healthcare

Healthcare AI must meet high standards of clinical safety, data governance and accountability. In the UK, regulation is evolving (https://tinyurl.com/5n6cfm6v). The MHRA is piloting sandbox environments for AI as a medical device (AIaMD). ISO 42001 is helping organisations adopt responsible AI management frameworks. NHS England and NHSX (now part of the NHS Transformation Directorate) emphasise explainability and human-in-the-loop (HITL) models. 
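
Human-in-the-loop is easy to invoke and harder to specify. One common pattern is a confidence gate: the model may draft an output, but anything below a threshold, or in a flagged category, is routed to a clinician before it reaches a patient. The sketch below is a minimal illustration under assumed names, categories and thresholds; it is not drawn from MHRA or NHS guidance.

```python
# Illustrative human-in-the-loop (HITL) gate. Thresholds, categories and
# routing labels are assumptions for the sketch, not regulatory guidance.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    text: str          # the AI-drafted message or suggestion
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    category: str      # e.g. "admin", "lifestyle", "clinical"

ALWAYS_REVIEW = {"clinical"}   # categories a human must always check
CONFIDENCE_THRESHOLD = 0.85    # below this, escalate regardless

def route(output: ModelOutput) -> str:
    """Decide whether an output may be sent or must go to a human reviewer."""
    if output.category in ALWAYS_REVIEW:
        return "human_review"
    if output.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "send"

print(route(ModelOutput("Your repeat prescription is ready.", 0.97, "admin")))
# -> send
print(route(ModelOutput("Your dose should be doubled.", 0.99, "clinical")))
# -> human_review
```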

Yet the users need to be understood at the core of all this, so, along with professionals from the University of the West of England in Bristol, care providers, the local Integrated Care Board (ICB), hospitals, technologists and communities of real people, Elzware started a piece of work.

Elzware’s Relationally Oriented Motivational Interfaces (ROMI) research project, funded by the National Institute for Health and Care Research (NIHR), investigated a ground-up approach with collaborative development around multi-cultural and socially contextualised support for people with type 2 diabetes. We were shortlisted to apply for a Product Development Award (PDA) after successfully completing a prototype in Hindi, Urdu, Arabic and English.

The proposal for the PDA was supported in even greater depth, yet it was rejected for reasons that I will not share in this article, though I am happy to discuss them directly if there is interest. Just reach out.

Research-led innovation: The ROMI project

ROMI was built on a simple question: what if AI could help people articulate what they’re struggling with, and why? Unlike predictive models that suggest the ‘next best action,’ ROMI focused on the motivational landscape — what people avoid, how they express readiness for change and how language reflects agency.

The research drew from multiple NHS sites and community partners. It used a mixed-methods design: qualitative interviews, usability trials, behavioural analysis and linguistic analysis. The agents were built on a novel yet adaptive architecture, grounded in scenario logic and conversational design and drawing on techniques from behavioural activation and motivational interviewing.
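
As an illustration of what ‘scenario logic’ can mean in practice, a conversation can be modelled as explicit, inspectable states with adaptive transitions, rather than as free-running generation. The states, triggers and wording below are invented for the example; they are not the ROMI architecture.

```python
# Toy scenario-logic state machine for a motivational-interviewing-style
# exchange. States, transitions and prompts are invented for illustration.
SCENARIO = {
    "open":      {"prompt": "What's been on your mind about your health this week?",
                  "next": lambda text: "explore" if text.strip() else "open"},
    "explore":   {"prompt": "What makes that difficult for you right now?",
                  "next": lambda text: "readiness"},
    "readiness": {"prompt": "On a scale of 1 to 10, how ready do you feel to change that?",
                  "next": lambda text: "close"},
    "close":     {"prompt": "Thank you. Would you like to pick this up next time?",
                  "next": lambda text: "close"},
}

def step(state: str, user_text: str) -> str:
    """Advance the scenario based on the user's reply."""
    return SCENARIO[state]["next"](user_text)

# Walk through one short exchange; empty input keeps the opening state.
state = "open"
for turn in ["", "I keep skipping my glucose checks", "Evenings are chaotic", "6"]:
    print(SCENARIO[state]["prompt"])
    state = step(state, turn)
```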

The 2025 NHS 10 Year Plan highlights a University of Sheffield NIHR study using conversational AI for personalised depression care (https://tinyurl.com/4amjum25). This activity deserves review, but it sits on the history of commercial entities like Woebot and Wysa; indeed, one could track these interactions all the way back to ELIZA in the 1960s. Let’s be frank: depending on a person’s state of mind, the I Ching can give you good advice, so long as you’re open to receiving an input, any input, that moves your own internal processes towards wellbeing.

Feedback, resistance, and trust

Every new technology in healthcare generates tension. For AI, clinician concerns often centre on safety, job impact and black-box decision making. Patient concerns relate to privacy, loss of human contact, and fairness.

In our work, resistance tends to drop when systems are clearly scoped in collaboration. Neither mimi.coach nor ROMI was built to ‘be’ a doctor; they were framed as companions or aides. AI at its best, correctly framed and understood, is a tool. That distinction matters: clarity of purpose builds trust, while a focus on the latest rhetoric from tech bros running tech companies does not.

Clinicians involved in ROMI appreciated the idea of AI offering support between appointments, especially for patients who struggle with verbal expression. In care settings, mimi.coach users noted that the system “waits for me to find my words,” which was contrasted positively with time-pressured training sessions.

Trust is earned not only through performance but through emotional credibility. Systems that listen well, respect user boundaries, and don’t make promises they can’t keep are more likely to be welcomed by both staff and patients.

What’s missing: infrastructure, evaluation and human design

While the UK has invested in digital transformation, healthcare infrastructure remains uneven. Some trusts use modern electronic health records with integration-ready APIs. Others still rely on paper and disconnected systems. This limits AI deployment, no matter how ‘clever’ the tool.

Another barrier is evaluation. For AI to be accepted in healthcare, it must be trialled in real-world conditions. This means ethical approvals, patient recruitment and longitudinal assessment. Many organisations make progress, but most AI innovators still struggle with the time and cost of validation.

Finally, there is a design gap. Many AI tools are still built by engineers without the input of frontline staff or patients. This leads to products that are technically impressive but contextually irrelevant. Elzware’s approach has been to embed co-design at every stage, building with, not for. This takes time and ‘social’ science, but it leads to systems people might actually use.

What do we want from AI in healthcare?

The future of healthcare AI will not be won by the application of calculus on big data alone. It will depend on usefulness, humility, and fit. We must build systems that understand the social nature of health, the emotional complexity of illness, and the pressure under which clinical and non-clinical staff operate.

Built correctly and transparently, conversational AI can triage, transcribe or monitor — but it can also listen, prompt and reflect. It can surface insight without demanding attention. It can help patients prepare for appointments, reflect between sessions, or manage uncertainty.

Elzware’s work, particularly through LANA, mimi.coach, and ROMI, shows that AI doesn’t have to be massive to be meaningful. It just has to be accountable, transparent, and people-driven. In the end, we should ask of any healthcare AI: does it increase agency? Does it earn trust? And does it understand the human at the other end of the screen? If the answer is yes, it’s worth building.