Ian Hughes MBCS, Chair of the BCS Animation and Games Specialist Group, examines how artificial intelligence has been working as an unseen force in the games industry for a number of years.

Artificial intelligence (AI) is a broad term covering a multitude of techniques and expectations. It is said that if you are looking for funding, call it AI, and if you are hiring, call it machine learning (ML).

In games, the illusion of intelligence is often talked up well beyond its actual technical level as players are engulfed in narrative and emotional experience, but interesting things have nonetheless evolved where games and the AI field meet.

Ghost in the machine

Space Invaders (1978) was definitely not what you might call AI, as its waves of pixels chunked left and right across the screen, steadily crushing the player into oblivion. Yet you felt as if they were coming for you: a relentless, never-ending machine wave that could never really be stopped, there to cost you another 10p in the slot.

In 1980 a yellow, dot-eating Pac-Man was trapped in a maze and chased by ghosts. These ghosts followed simple rules, but they could start to seem as if they were working in unison to hunt you down and catch you out.

Certainly, compared with the Space Invaders, these colourful, cartoon-eyed characters seemed a little smarter to any regular player, but to the experts they were the expression of a pattern-based algorithm that could be exploited for big scores.

Pathfinding, Pac-Man ghost-style, continued as the primary way most in-game characters operated, such as the swarms of monsters in the late-80s classic, Gauntlet. The years marched on, just like the Space Invaders.
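As an aside, the kind of logic involved can be sketched in a few lines. The snippet below is a hypothetical illustration, not the actual arcade code: each ghost greedily takes the legal move that brings it closest to a target tile, and because each ghost is given a different target, the pack appears to coordinate.

```python
# A minimal sketch of Pac-Man-style ghost 'pathfinding': at each step a
# ghost greedily picks the legal move that minimises the straight-line
# distance to its target tile. The grid, positions and targets here are
# invented; the real arcade code differs in many details.

MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def legal_moves(maze, pos):
    """Return destination tiles that are not walls ('#')."""
    x, y = pos
    return [(x + dx, y + dy)
            for dx, dy in MOVES.values()
            if maze[y + dy][x + dx] != "#"]

def chase_step(maze, ghost_pos, target_pos):
    """Pick the legal move whose destination is nearest the target tile."""
    def dist_sq(p):
        return (p[0] - target_pos[0]) ** 2 + (p[1] - target_pos[1]) ** 2
    return min(legal_moves(maze, ghost_pos), key=dist_sq)

# Each ghost aims at a different target, so the pack appears to coordinate:
# e.g. one chases the player directly, another aims a few tiles ahead.
maze = ["#####",
        "#...#",
        "#.#.#",
        "#...#",
        "#####"]
print(chase_step(maze, ghost_pos=(1, 1), target_pos=(3, 3)))  # -> (1, 2)
```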

Games got better, and arcades dwindled in favour of home computers and consoles. As a programmer playing games I found, and still find, that if the narrative or action is done well enough it suspends disbelief, though a sneaking thought of ‘ah, that’s how that works’ still pops into my head.

It’s alive!

1996 saw a breakthrough in the use of AI techniques to create artificial life (A-Life) with the game, Creatures. The experience was one of hatching ‘Norns’ and teaching them, through your actions and with trial and error, to live in their simulated world. It used a form of neural network to drive each creature’s learning, and each creature was built from algorithmic DNA in one of many possible combinations.

The initial gene pool was decided by running simulations over many hours and keeping the algorithms that survived: natural selection, in effect. To many players the clever implementation was not obvious, but it was a major step forward for AI in gaming.
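The general shape of that selection process can be illustrated with a toy genetic loop. Everything below is an assumption for illustration, the genome encoding and the fitness stand-in included; Creatures’ own simulation was far richer.

```python
# A toy sketch of choosing an initial gene pool by simulated natural
# selection: random genomes compete, the fittest survive, and mutated
# copies of the survivors refill the pool.
import random

GENOME_LEN = 16    # each gene as a weight a creature's brain might use
POP_SIZE = 50
GENERATIONS = 100

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def survival_time(genome):
    """Stand-in for running the creature in the simulated world; here we
    simply reward genomes whose weights sum close to zero (hypothetical)."""
    return -abs(sum(genome))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Rank by how long each creature 'survived' and keep the top half...
    population.sort(key=survival_time, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # ...then refill the pool with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness:", survival_time(population[0]))
```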

In 1998 it was the first-person shooter, Half-Life, that made me do an AI double take. Enemy soldiers coming at me in a building were heading through an obvious front door, but then the squad adjusted its tactics and chose a different strategy. As usual I died and respawned. I tried again. Once more the squad adjusted to my change of tactics.

I deliberately avoided completing that part of the story in order to explore the capabilities of the team of characters in that situation. I am under no illusion that these were sentient AI characters, but they did mark a ramping up of non-player character (NPC) abilities in-game.
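A heavily simplified sketch of that tactic-switching behaviour might look like the state machine below. The states and rules are hypothetical; Half-Life’s real squad AI is considerably more sophisticated.

```python
# A minimal sketch of squad tactic switching: a simple state machine that
# reacts to how the last attempt went, moving on to tactics the player has
# not yet countered. States and transitions here are invented.

class Squad:
    TACTICS = ("front_door", "flank_left", "suppress_and_grenade")

    def __init__(self):
        self.tactic = "front_door"
        self.failed = set()

    def report_failure(self):
        """Called when the current approach got squad members killed."""
        self.failed.add(self.tactic)
        # Switch to a tactic the player has not yet countered, if any remain.
        untried = [t for t in self.TACTICS if t not in self.failed]
        self.tactic = untried[0] if untried else "front_door"

squad = Squad()
print(squad.tactic)      # front_door
squad.report_failure()
print(squad.tactic)      # flank_left
squad.report_failure()
print(squad.tactic)      # suppress_and_grenade
```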

By 2001, Black & White, Peter Molyneux and Lionhead’s continuation of the god-sim genre, had a set of tribal characters responding to a large creature. This monster was itself partially under the player’s control, but had to be trained and nurtured, and its tantrums dealt with. A-Life concepts were mixed with advanced AI elements such as belief-desire-intention (BDI) models, neural networks and complex decision trees to deliver an award-winning experience.
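The decision-tree end of that mix can be imagined as something like the toy below, with invented drives and thresholds; the game’s real BDI and neural-network machinery is far richer.

```python
# A toy decision tree for a god-sim creature: weigh a few desires derived
# from current beliefs and act on the strongest. Drives, thresholds and
# actions are all hypothetical.

def choose_action(beliefs):
    """beliefs: dict with 'hunger', 'anger' and 'villager_nearby' keys."""
    if beliefs["hunger"] > 0.7:
        # A hungry creature eats; what it chooses to eat depends on how
        # the player has trained it.
        return "eat_villager" if beliefs["villager_nearby"] else "forage"
    if beliefs["anger"] > 0.5:
        return "throw_tantrum"
    return "wander"

print(choose_action({"hunger": 0.9, "anger": 0.2, "villager_nearby": True}))
# -> eat_villager (until the player's feedback shifts its training)
```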

Puny humans

Home-based systems using elements of AI were matched by higher-end corporate research into the subject. In 1997 IBM’s chess-playing supercomputer, Deep Blue, beat Grandmaster Garry Kasparov, primarily through brute-force look-ahead. Several sources have suggested that a bug led to a seemingly random move that was mistaken for intelligence and threw Kasparov off his game.
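The brute-force, look-ahead idea at Deep Blue’s heart is classic minimax search over a game tree, which can be sketched generically; Deep Blue added a vast hand-tuned evaluation function and custom hardware on top. The evaluate() and moves() callables below are stubs you would supply for a real game, and the demo game is invented.

```python
# A minimal sketch of minimax: search `depth` plies ahead, assuming one
# player maximises the evaluation and the other minimises it.

def minimax(state, depth, maximising, moves, apply_move, evaluate):
    """Return the best score achievable from `state` with perfect play."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximising,
                      moves, apply_move, evaluate)
              for m in options)
    return max(scores) if maximising else min(scores)

# Toy demo: players alternately add 1 or 2 to a counter, and the
# evaluation favours even totals for the maximiser.
print(minimax(0, depth=3, maximising=True,
              moves=lambda s: [1, 2] if s < 10 else [],
              apply_move=lambda s, m: s + m,
              evaluate=lambda s: 1 if s % 2 == 0 else -1))  # -> 1
```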

This spurred the company on to create its Watson machine to play the word-based TV game show, Jeopardy!. The show requires the ability to construct a question from an answer, a complex natural-language problem to solve. In 2011 the machine beat two human champions and sparked IBM’s ongoing drive into cognitive computing.

Alphabet (Google’s parent company) has been exploring AI through the company, DeepMind, and its software, AlphaGo. The board game, Go, has vastly more permutations than chess, and existing computing power does not allow brute-force solutions across the problem space, so AI techniques are required. Continued development saw AlphaGo beat a professional Go player on a full-size board in 2015, and the following year it beat a 9-dan professional four games to one.

In 2017, after a competition in which it beat the world number one, Ke Jie, AlphaGo was awarded top 9-dan status. It should be noted that Demis Hassabis, who co-founded DeepMind, was the lead AI programmer on the previously mentioned Black & White and executive designer of the BAFTA-nominated games Republic: The Revolution and Evil Genius. In 2005 he returned to academia to gain a PhD in cognitive neuroscience before forming DeepMind.
The company seeks to ‘solve intelligence’ and then use intelligence ‘to solve everything else’.

The walls have ears

Not all gaming AI techniques are solely directed at the inner monologue of the poor NPC as it toils for our pleasure in the game environment. Games designers want to provide an engaging experience, one that the player sticks with or constantly returns to.

I mentioned the 10p-draining Space Invaders; now, with AAA games at £70 a pop and the potential for in-game transactions to generate profits, that commercial pull is married with the games designers’ ideal of player enjoyment and constant play. AI techniques are now used to ensure that the challenge of a game adjusts to the skill and ability of the player.

Using techniques sometimes called dynamic game balancing (DGB) or dynamic difficulty adjustment (DDA), the game finds ways to understand the player and makes constant tweaks up or down to keep things challenging. Games are generators of flow, the human mental state where everything is ‘just right’; too much frustration or boredom kills flow. Simple games can just count the fails, and many still simply get more difficult over time, as that is easier to develop, but AI has a good future here.
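A minimal DDA loop can be sketched as below; the target fail rate and step size are invented for illustration, and real systems track far richer signals than simple pass/fail.

```python
# A minimal sketch of dynamic difficulty adjustment: nudge a difficulty
# knob after each attempt so the player keeps failing occasionally but
# not constantly.

class DifficultyTuner:
    def __init__(self, difficulty=0.5, target_fail_rate=0.3, step=0.05):
        self.difficulty = difficulty          # 0.0 trivial .. 1.0 brutal
        self.target_fail_rate = target_fail_rate
        self.step = step
        self.fails = 0
        self.attempts = 0

    def record_attempt(self, failed):
        self.attempts += 1
        self.fails += 1 if failed else 0
        fail_rate = self.fails / self.attempts
        # Too many fails -> frustration: ease off. Too few -> boredom: ramp up.
        if fail_rate > self.target_fail_rate:
            self.difficulty = max(0.0, self.difficulty - self.step)
        else:
            self.difficulty = min(1.0, self.difficulty + self.step)

tuner = DifficultyTuner()
for failed in (True, True, False, False, False):
    tuner.record_attempt(failed)
print(round(tuner.difficulty, 2))  # eased off after early failures -> 0.25
```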

Drive safely

One further use of games and AI twists the concept: game environments are being used to help train new AIs. Researchers have been using Grand Theft Auto V to help autonomous vehicles learn to recognise and cope with a constantly changing world.

DeepMind and Elon Musk’s OpenAI have publicly released game-based AI code examples to help other developers explore the subject. DeepMind’s training world is based on the nearly 18-year-old Quake III Arena. OpenAI’s Gym code includes 59 legacy Atari games, including Pong, Asteroids and, yes, versions of Space Invaders and Pac-Man.
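Getting started with Gym takes only a few lines. The sketch below follows Gym’s original reset/step interface (later versions of the library changed it) and assumes the Atari extras are installed, for example via pip install "gym[atari]"; the agent here acts randomly, where a learner would choose actions from its observations.

```python
# A minimal sketch of the classic OpenAI Gym loop, using one of the bundled
# Atari games as the training world.
import gym

env = gym.make("SpaceInvaders-v0")
observation = env.reset()              # pixels of the first frame
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # random agent; a learner would pick
                                        # actions based on the observation
    observation, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print("episode score:", total_reward)
```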

The circle of life

We have come full circle. The earliest games used basic algorithms and a few tricks to play on our emotions. Games are evolving to keep us engaged, playing the player with AI.

Also, AIs are learning to do whatever they do by playing games, from the most basic ones that we started our journey on, up to current state-of-the-art AAA games. Some of these AIs, just like humans, will grow up to start building elements of games themselves.

A dystopian sci-fi vision would see the machines enslave us, locked into play. A utopian view sees us getting amazing and interesting game and entertainment experiences, elevating the art form. The reality will, of course, be somewhere in between, and most definitely exciting for players and observers.