Chris Ambler, author of the new BCS book The Psychology of AI Decision Making, explores the importance of human psychology in creating AI systems.

On the surface, human thought and AI computation look the same. They both take inputs, process them, and deliver outputs. But if you dig deeper, the similarities start to disappear.

Human decision making is steeped in context, emotion, bias and lived experience. Meanwhile, AI runs on data and rules, following logic without ever grasping the moral weight of its choices. That difference becomes crucial when we examine how decisions are framed and try to work out who bears responsibility for them.

Psychology has become critical to creating AI systems because, at their core, these systems are meant to interact with people, make decisions that affect lives, and often shape behaviour in subtle ways.

Different approaches to moral decision making

One area to consider is deontological theory. From this perspective, decisions are not just about outcomes but about duties, principles and rules, such as telling the truth, keeping promises or protecting life. To us humans, these are obligations regardless of the consequences. Although these duties are not perfect, we can appeal to them when faced with a dilemma. The problem is that AI cannot do this. It only 'knows' the principles coded into it, and it has no innate sense of duty. If we expect an AI to act on principle, we are really expecting its designers to have anticipated every moral edge case. That's a dangerous assumption, because life is far messier than a ruleset.

Utilitarianism offers a very different lens. It is outcome driven: if an action produces the greatest good for the greatest number, it is seen as the right choice, even if some individuals lose out. In AI, this difference matters because we have to ask: do we want systems that calculate efficiency at all costs, or ones that follow principles even when it's inconvenient?
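To make the contrast concrete, here is a minimal sketch in Python. Everything in it, the option names, the benefit scores and the single coded duty, is invented for illustration; real systems are vastly messier:

```python
# Hypothetical illustration: the same choice made two ways.
# All option names and numbers are invented for the example.

options = [
    {"name": "route_a", "total_benefit": 90, "breaks_promise": True},
    {"name": "route_b", "total_benefit": 60, "breaks_promise": False},
]

def utilitarian_choice(options):
    """Pick whatever maximises aggregate benefit, even if someone loses out."""
    return max(options, key=lambda o: o["total_benefit"])

def deontological_choice(options):
    """Discard any option that violates a coded duty, then choose among the rest."""
    permitted = [o for o in options if not o["breaks_promise"]]
    return max(permitted, key=lambda o: o["total_benefit"]) if permitted else None

print(utilitarian_choice(options)["name"])    # route_a: the best outcome wins
print(deontological_choice(options)["name"])  # route_b: the duty is absolute
```

Note that the deontological version is only as principled as the duties its designer thought to encode, which is exactly the dangerous assumption described above.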

The role of psychology

Understanding how humans think, along with the impact of our biases, emotions and decision making shortcuts, is critical. It helps to ensure that AI is designed in a way that aligns with how people actually behave, not how we assume they will.

With this in mind, psychology will likely become a vital component of the business analyst's (BA's) role, and BAs would benefit enormously from training in psychological principles. The role of the BA has always been to bridge the gap between business and technology; in this new world, they must also connect human cognition and computational design. That means translating cognitive patterns, human needs and moral and ethical considerations into the rules and requirements AI systems need. It also means being part of the process of selecting data for training and testing: recognising the bias, context and possible nudges that create subtle influences and distort results in ways the numbers alone might not reveal, as sketched below.
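As a hedged sketch of what that data scrutiny might look like, the hypothetical Python below compares outcome rates across groups in a candidate training set. The field names, records and threshold are all invented; a large gap is a prompt for a human conversation, not an automatic verdict:

```python
# Hypothetical sketch: surface group-level gaps in candidate training data.
# Field names, records and the 0.2 threshold are invented for illustration.
from collections import defaultdict

def outcome_rates_by_group(records, group_key="group", outcome_key="approved"):
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += int(record[outcome_key])
    return {group: positives[group] / totals[group] for group in totals}

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]

rates = outcome_rates_by_group(records)
if max(rates.values()) - min(rates.values()) > 0.2:
    print(f"Possible historic bias worth investigating: {rates}")
```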

Human pitfalls in an AI context

Take healthcare as an example. When an algorithm is designed to triage patients, it could end up disadvantaging a particular group, not by intent, but because machine learning has absorbed historic bias hidden in the data. In law, the same risk plays out differently: a sentencing system that applies rules with machine-like consistency might look fair, yet miss the human context and nuance a judge would normally consider. Recommendation systems nudge us constantly, shaping what we watch, read or even believe. At scale, small design decisions can cascade into large, unintended consequences: the butterfly effect, where the smallest of changes creates massive ripples.
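As a toy illustration of that mechanism, the hypothetical sketch below, with deliberately planted bias in invented records, shows a naive 'model' that simply learns historical triage rates and so faithfully reproduces whatever bias those records contain, with no intent anywhere in the code:

```python
# Hypothetical toy: a 'model' that learns from biased history repeats it.
# All records are invented; the bias is planted deliberately.

history = [
    {"group": "X", "urgent_symptoms": True, "prioritised": True},
    {"group": "X", "urgent_symptoms": True, "prioritised": True},
    {"group": "Y", "urgent_symptoms": True, "prioritised": True},
    {"group": "Y", "urgent_symptoms": True, "prioritised": False},  # historic bias
]

def learned_priority_rate(history, group):
    """'Training': estimate how often each group was prioritised in the past."""
    matches = [r for r in history if r["group"] == group]
    return sum(r["prioritised"] for r in matches) / len(matches)

# 'Inference': identical symptoms, different scores; not by intent,
# but because the historical data said so.
print(learned_priority_rate(history, "X"))  # 1.0
print(learned_priority_rate(history, "Y"))  # 0.5
```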

A further danger lies in anthropomorphism. Have you ever wondered why the AI in publicity videos showing how it can 'help' us almost always takes human form? We instinctively project human traits onto machines: we call chatbots 'friendly', and we say an AI 'decided' or 'understood'. But AI does not feel, empathise or care, and treating it as if it does is misleading and risky. If we start to believe that an AI is capable of compassion or duty, we are more likely to hand over decisions that demand human judgement. The more human-like an AI appears, the easier it is to forget that its 'thinking' is nothing more than computation, and this creates a false level of trust.


This raises some complex questions. Should we create AI that simulates empathy in mental health applications if it risks fooling vulnerable people into thinking they are understood? Should autonomous cars be programmed for utilitarian outcomes, or made to follow deontological rules about never intentionally causing harm? Virtue ethics, which takes into account personal traits like responsibility, wisdom and justice, complicates things further. Should AI nudges be designed for efficiency, or for fairness, even if it costs more? These aren't just technical debates. In real life, they are moral ones.

And that is the heart of the challenge in front of us. Emulating human decision making risks importing all our flaws and biases. Simulating decision making risks building systems that appear rational but contain no context, empathy or principle. Both paths demand a level of governance, creativity and humility that we are only beginning to develop.

Closer collaboration between psychology and computer science, and, for example, between the British Computer Society and the British Psychological Society, is vital. Only by combining technical expertise with an understanding of human morality can we begin to design AI systems that respect the complexity of decision making. Real-life decision making is never just computation; it is also flavoured by duty, context and compassion. And AI must never be allowed to make us forget that.

The Psychology of AI Decision Making is available through the BCS Bookshop.