Andrew Lea FBCS, with the help of the BCS SGAI, asks why it’s hard to be concrete about what AI is and why this lack of consensus matters.

Artificial Intelligence (AI) is hard to define because the term is intrinsically ambiguous. AI differs fundamentally from ‘ordinary’ software in ways which are not obvious, even to the informed layman; it exists at different depths, and it’s changing furiously, making it all too easy to talk and even think about AI at cross purposes, based on unspoken assumptions.

Clarity of thought on AI is essential if we are to make the changes AI brings beneficial to all.

Here, we look at why AI is intrinsically ambiguous, and what its fundamental characteristics are. We consider three depths of AI, and how it and perceptions of it are changing. This article is a part of the debate on the nature of AI and its effect on society, and we end by considering its implications.

Ambiguity

‘Artificial Intelligence’ is an oxymoron. Although artificial, an aeroplane flies; it does not ‘artificially’ fly — the flying is real. Just so with intelligence: if it is exhibited, it’s real, and if it’s a machine exhibiting that intelligence, it is no less real. Calling it artificial invokes a cognitive dissonance.

So if not ‘artificial’ intelligence, what is AI? It is machine intelligence: a machine performing tasks which, if performed by a human, would require intelligence. However, we will use ‘AI’, as it is the label people know.

Fundamental characteristics

Up to four characteristics differentiate AI from ‘ordinary’ software:

  1. AI is often non-deterministic: many important techniques need randomness to work, so output varies even with the same input (a small Python illustration follows this list). Formal methods can prove the predictability of structured deterministic programs, but not of AI
  2. AI may be intended to work in new circumstances: both a boon, because it can work in unforeseen circumstances, and a curse because full testing is impossible
  3. AI can learn from experience: this is incredibly useful, but this ability to autonomously change its responses based on what it learns makes full testing difficult if not impossible
  4. Powerful AI may deduce its own knowledge representations and reasoning techniques, teaching itself how to think
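To make the first characteristic concrete, here is a minimal Python sketch of a stochastic decision procedure that can return different outputs for identical input. It is purely illustrative: the move names and weights are invented, not taken from any real system.

    # Illustrative only: a toy stochastic 'policy' showing how a technique that
    # relies on randomness gives varying output for the same input.
    import random

    def choose_move(move_weights):
        """Pick a move by sampling in proportion to its (made-up) weight."""
        moves = list(move_weights)
        return random.choices(moves, weights=[move_weights[m] for m in moves], k=1)[0]

    weights = {"advance": 0.5, "defend": 0.3, "trade": 0.2}
    print([choose_move(weights) for _ in range(5)])  # same input, varying output

Run it twice with the same weights and it will generally print different sequences, which is exactly why proofs of predictability for deterministic programs do not carry over.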

Depths

AI can exist anywhere on this spectrum:

General, strong or deep

This is the strongest form and original meaning of AI. It aims to parallel human thought with intelligence and self-awareness, sentience, ethics, emotions, humour, and all other aspects of human consciousness. These are the characteristics that many imagine AI to have: self-aware, thinking robots.

The term “Artificial Intelligence” is inappropriate here not only for the reasons outlined above but because intelligence is only one of the necessary attributes (and would be real, not artificial). We could call it “machine sentience”.

Applied AI

Applied AI focuses on intelligence alone, ignoring other attributes of thought. It is a collection of techniques — often inspired by natural intelligence or systems — designed for specific tasks. Such systems are usually what is meant by ‘AI’.

A good working definition of applied AI is: the acquisition, manipulation, and exploitation of knowledge by systems whose behaviours may change on the basis of experience, or which are not constrained to be predictable or deterministic.

That (applied) AI knowledge can be:

  • Embodied in program code, such as positional knowledge lovingly hand-crafted by programmer-artisans into a ‘traditional’ chess program
  • Explicitly captured from experts in a knowledge base (effective, expensive and decreasingly popular)
  • Acquired through machine learning: ‘trial and error’ reinforcement learning, ‘see what patterns you find’ unsupervised learning, or ‘learn from these examples’ supervised learning, of which ‘neural nets’ — inspired by brain neurone layers — are popular (a minimal supervised learning sketch follows this list)
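As a flavour of ‘learn from these examples’, here is a deliberately tiny supervised learning sketch in plain Python: a single perceptron learns the logical AND function from four labelled examples. It is illustrative only; real systems use far larger models and datasets.

    # A minimal supervised learning example: a perceptron learns AND from
    # labelled examples by nudging its weights whenever it makes a mistake.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, rate = [0.0, 0.0], 0.0, 0.1

    def predict(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(20):                      # repeated passes over the examples
        for x, target in examples:
            error = target - predict(x)      # learn only from mistakes
            w[0] += rate * error * x[0]
            w[1] += rate * error * x[1]
            b += rate * error

    print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]

The behaviour is not written into the program; it is acquired from the examples, which is precisely what makes exhaustive testing of learned behaviour so hard.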

Hybrid approaches, e.g. expert-validated machine learning, work well. Some large ‘pre-trained’ models use this approach to a surprising degree.

Algorithm intelligence

Complex ecosystems of software systems can exhibit emergent behaviour, or even intelligence. Just as ant colonies exhibit more intelligence than individual ants, AI-like behaviour can emerge from complex ‘ordinary’ software systems.

Changes and perception

Until recently, once an AI technique was established, it was no longer perceived as AI; knowing how the rabbit is pulled out of the hat destroys the magic. This was the de facto, moving-goalposts definition of AI: that which a computer can’t do.

AI used to be ‘wide but shallow’: horizontally applicable, but not powerful, such as a 1990s multi-lingual summariser which, though effective, had little idea of what it was writing. Alternatively, AI could be ‘deep but narrow’: powerful only on tightly related problems.

The art of the computer scientist is explored in Professor Wirth’s influential book, Algorithms + Data Structures = Programs. But some AI systems are now either creating algorithms and data structures, or acting as if they have:

  • Generative Large Language Models (GLLMs) address different problems simply by changing the natural language prompt, and can write programs
  • Genetic Programming evolves new programs, which sometimes no-one understands
  • Symbolic Regression uncovers relationships in scientific data, which it expresses as formulae or terse programs (a toy sketch in this spirit follows this list)
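To give a flavour of symbolic regression, here is a deliberately crude Python sketch: it generates random arithmetic expressions over x and keeps whichever best fits some invented ‘observations’ of y = x² + x. Real systems evolve and refine candidate expressions far more intelligently; every name and number here is a toy assumption.

    # Toy symbolic-regression-style search: randomly generate small arithmetic
    # expressions and keep the one that best fits the (invented) data.
    import random

    data = [(x, x * x + x) for x in range(-5, 6)]   # pretend measurements of y = x^2 + x

    def random_expr(depth=2):
        """Build a random expression string from x, small constants, + and *."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", "1", "2"])
        op = random.choice(["+", "*"])
        return f"({random_expr(depth - 1)} {op} {random_expr(depth - 1)})"

    def error(expr):
        """Sum of squared differences between the expression and the data."""
        return sum((eval(expr, {"x": x}) - y) ** 2 for x, y in data)

    best = min((random_expr() for _ in range(2000)), key=error)
    print(best, "error:", error(best))

With enough attempts the search often stumbles on something equivalent to ((x * x) + x): an expression no-one wrote and, in larger systems, one that no-one may fully understand.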

GLLMs have changed perceptions: AI can at last do things again, and AI systems which invent programs (‘self-programming’ computers?) are both wide and deep. Some even give an appearance of edging up from machine intelligence towards sentience; should accidental or deliberate machine sentience arrive, we won’t necessarily understand or even recognise it.

With greater public understanding of AI capabilities, the label ‘AI’ is less frequently used simply to glamourise mundane software — though it remains a popular buzz-word, replacing the meaningless ‘big data’.

Implications

AI discussions often conflate its three depths. Overloaded terms help marketing, but hinder understanding: ‘deep learning’ means a neural net with more than three layers, but is often misunderstood as ‘profound learning’.

When systems make decisions which affect welfare, explainability becomes important. Explainability is the AI equivalent of human accountability. Arguably there is a need to make GLLMs explainable; unfortunately, by their very black-box (neural net) nature, they are not. Powerful AI (which learns its own knowledge representations and reasoning techniques) may be intrinsically opaque, its decisions unexplainable.

Misunderstanding AI characteristics can lead people to try regulating AI techniques — but it is only the system effect that might be regulated, not the means used to achieve it. A wrongly declined mortgage has equal impact whether due to a requirements mistake, biased dataset, database error, bug, incorrect algorithm, or misapplied AI technique. Regulating AI as if it were ‘just’ clever software would impinge on the fundamental characteristics from which its capability flows, and inhibit its benefits. A reasonable requirement would be that any system, not just AI, which impinges on welfare must be able to explain its decisions.

Conclusion

As a colleague observed, defining AI is like defining time: we all think we know what it means, but it is actually hard to pin down. Just as our understanding of time changes — appropriately enough, with time — so AI itself may cause us to change our definition of AI.

About the author

Andrew Lea (FBCS), with the connivance of the BCS AI interest group, and drawing on his four decades of applying AI in commerce, industry, aerospace and fraud detection, explores why AI is so hard to define. He has been fascinated by AI ever since reading Natural Sciences at Cambridge and studying Computing at London University.