A meander through the new online archive of ITNOW and the Computer Bulletin reveals a long-lived fascination with chess, and a rich seam of BCS insight into how the game and computing technology have shaped one another.

The historical role of chess in the evolution of computer science has long been recognised, as has the game's relationship with ITNOW magazine (formerly The Computer Bulletin).

In August 2024, Andrew Lea from the BCS Specialist Group on AI (SGAI) explored this connection in his article Computer Chess: A Historical Perspective. He described the impact of chess on artificial intelligence development and on computing at large. Early chess engines introduced foundational concepts such as the MiniMax algorithm and alpha-beta pruning, which became essential for search-space exploration and procedural knowledge representation.
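
To make those two ideas concrete, the sketch below shows minimax search with alpha-beta pruning over a hypothetical game interface; the Position class and its legal_moves, apply, is_terminal and evaluate methods are illustrative assumptions rather than any particular engine's API.

```python
# A minimal sketch of minimax with alpha-beta pruning.
# 'Position' and its methods are assumed, illustrative names, not a real engine's API.

def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf"), maximising=True):
    """Return the best score achievable from this position to the given depth."""
    if depth == 0 or position.is_terminal():
        return position.evaluate()              # static evaluation at the leaf

    if maximising:
        best = float("-inf")
        for move in position.legal_moves():
            best = max(best, alphabeta(position.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:                   # the opponent will never allow this line: prune
                break
        return best
    else:
        best = float("inf")
        for move in position.legal_moves():
            best = min(best, alphabeta(position.apply(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:                   # we would never choose this line: prune
                break
        return best
```

Pruning discards branches that cannot affect the final choice, which is why alpha-beta lets engines look several plies deeper than plain minimax for the same effort.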

It’s easy to trace how chess engines influenced modern AI algorithms. As developers tackled the complexity of chess — generating moves and evaluating millions of positions — they refined techniques that would later become central to broader AI applications. These engines demanded highly efficient code, often prioritising speed over readability, mirroring the challenges faced in compiler and operating system development.

The evolution from handcrafted evaluation functions to self-learning systems, like AlphaZero, marked a shift toward adaptive intelligence. Projects such as Deep Blue and Chinook didn’t just play games — they redefined human-computer interaction and highlighted the intricacies of reliable software engineering. Techniques like caching with transposition tables, iterative deepening and robust self-testing were honed through chess AI. Though no longer at the cutting edge, traditional chess AI laid the groundwork for many modern AI practices.
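
As a rough illustration of two of those techniques, the sketch below caches results in a simple transposition table and wraps the search in an iterative-deepening loop. It assumes the same hypothetical Position interface as the earlier sketch, plus a hashable key() method and a caller-supplied time_left() function, and it scores positions from the side to move's perspective (the negamax convention); real engines also store bounds and best moves.

```python
# A minimal sketch of iterative deepening with a transposition table.
# Position, key(), evaluate() and time_left() are assumptions for illustration only.

def search(root, max_depth, time_left):
    table = {}                                   # transposition table: position key -> (depth, score)

    def negamax(pos, depth):
        entry = table.get(pos.key())
        if entry is not None and entry[0] >= depth:
            return entry[1]                      # reuse a score from an earlier, deep-enough visit
        if depth == 0 or pos.is_terminal():
            score = pos.evaluate()               # evaluation is from the side to move's perspective
        else:
            score = max(-negamax(pos.apply(m), depth - 1) for m in pos.legal_moves())
        table[pos.key()] = (depth, score)
        return score

    best_score = None
    for depth in range(1, max_depth + 1):        # iterative deepening: search 1 ply, then 2, then 3...
        best_score = negamax(root, depth)
        if time_left() <= 0:                     # if time runs out, the last completed depth still stands
            break
    return best_score
```
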

The history of the future

John A. Mariani also explored this rich history in an ITNOW article speculating on the future of computing in the year 3000. Grounded in 1989’s view of the future, the article begins by referencing Marvin Minsky’s definition of AI as ‘the science of making machines do things that would require intelligence if they were done by men.’ 

Mariani also revisited the Turing Test along with a chess-themed variation where Grandmaster Helmut Pfleger played 26 opponents simultaneously. After winning 22 games, drawing two, and losing two, he was asked if he had noticed anything unusual. He had not. It was later revealed that three of his opponents had hidden radio receivers and were playing moves dictated by computer chess programs. One of these, Belle — developed by Ken Thompson — defeated Pfleger. Unknowingly, he had been beaten by a machine posing as a human.

Going further back, in December 1978, a BCS contributor writing under the pseudonym Aleph Null offered a humorous yet insightful take on computer chess in a piece inspired by Bill Hartston’s How to Cheat at Chess. The article critiqued the limitations of early chess programs, suggesting that many developers lacked deep understanding of both chess and the machines they used.

Yet, it also marvelled at the existence of early devices like Chess Challenger. The piece recounted mishaps in competitive computer chess, such as the British champion program MASTER’s struggles in the world championship due to transmission delays. Aleph Null also highlighted the absurdity of judging AI solely by performance. 

The article also explored monster chess, a variant in which white, armed only with a king and four pawns, compensates for the material disadvantage by making two consecutive moves each turn, while black plays with a full set of pieces under the standard rules.

The article concluded with praise for the text-based game ADVENTURE, which, unlike many formal AI programs, engaged users in problem solving while subtly teaching computing concepts. Beneath the humour lay a genuine appreciation for how games can illuminate both the strengths and shortcomings of intelligent systems.

Chess, pattern recognition and reasoning

Even earlier, in December 1976, Donald Michie explored a novel approach in his article An Advice-Taking System for Computer Chess. Rather than programming every detail, Michie proposed giving computers general strategic ‘advice’ and letting them determine the best moves. Chess, with its complexity and well-documented strategies, was an ideal testing ground. His team developed an ‘advice language’ that encoded chess knowledge into rule tables.

These helped the computer recognise patterns in endgames and set goals — like keeping a rook safe or forcing an opponent into a corner — without prescribing exact moves. A separate search module then tested possible actions to achieve these goals.
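
Purely as a toy illustration, not Michie's actual advice language, the sketch below shows the shape of the idea: a rule table maps recognised patterns to goals stated in general terms, and a separate search module would then be asked to satisfy the chosen goal rather than being handed exact moves. The Position methods used in the predicates are hypothetical.

```python
# A toy rule table in the spirit of advice-taking: patterns map to goals, not moves.
# Position.has(), rook_attacked() and material_advantage() are hypothetical helpers.

ADVICE_TABLE = [
    # (pattern predicate, goal to hand to a separate search module)
    (lambda pos: pos.has("white", "rook") and pos.rook_attacked("white"),
     "move the rook to a safe square"),
    (lambda pos: pos.material_advantage("white") > 0,
     "drive the black king towards a corner"),
    (lambda pos: True,                       # fallback advice when nothing specific matches
     "avoid stalemate and keep pieces defended"),
]

def advise(position):
    """Return the first goal whose pattern matches the current position."""
    for pattern, goal in ADVICE_TABLE:
        if pattern(position):
            return goal
```
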

This method proved significantly faster than traditional programming, reducing development time from months to days. Michie hoped that this advice-based approach could extend beyond chess to fields like medical diagnosis or engineering, offering a more intuitive, human-like form of reasoning. It was seen as a step toward making computers genuine partners in problem solving, and as another demonstration of how chess has driven innovation in software engineering.