Jon G. Hall and Lucia Rapanotti from the Department of Computing, The Open University discuss the teaching of computing in schools.

The current UK debate on Computing in schools is driven by a specific issue: what should form the basis of a modern Computing education? It’s a perfect storm that has been brewing for some time, but the recent comments1 of Eric Schmidt, the chairman of Google, have moved it into the political arena.

Currently, the mandatory educational component of computing (for ages 11 to 16) is taught as information and communication technologies (ICT) - effectively digital literacy - which sees computing, in a teleological light, as a tool that drives society.

Of course, digital literacy is an important topic that’s taught well. But the other half of the sky - computing as an end in itself: the important problems it solves and the nature of its solutions; its science, its technology, its engineering and its real-world relevance - is still needed to inspire a new generation of thinkers and doers.

For the future state and sustainability of our discipline, a non-teleological view of computing completes the picture and so is of fundamental importance.

That non-teleological view is impossible to reach without thorough understanding; we posit that that understanding comes from what we will term the core generative problems: the problems that had to be solved for Computing to exist and that have thus driven Computing as a discipline.

And if we wanted to find computing’s core generative problems? For inspiration, one could look to the core generative problems of other subjects. Chemistry, for instance, is concerned with ‘atoms and their interactions with other atoms, and particularly with the properties of chemical bonds’2, the core problem being to allow the controlled investigation of their properties, an investigation that gave us the scientific method.

Similarly for Physics, which produces predictive theories of matter and energy. Mathematics is about analysis and abstract thinking; many of its core generative problems take the form of puzzles: the prime number theorem, for instance, or Fermat’s Last Theorem.

From the pedagogical perspective, without understanding the core generative problems, we can drift into what Howard Gardner, author of The Unschooled Mind, calls the ‘correct-answer compromise.’

This is, to paraphrase, an unwritten contract between student and teacher:

The student says: 'my school life removes me from the reality in which I live, and as I cannot understand the relevance of much of what you’re teaching me, I will need other criteria for success. Therefore, I’ll say I’m succeeding if, no matter which test you, as a teacher, give me, I can choose the right answer.'

The teacher says: 'I do not know the full context of computing, and therefore cannot teach you the problem-solving and critical-thinking skills that define computing and have changed the world.

However, you’re going to need to be assessed on your learning: I’ll teach you the correct answers as determined by those who set the exams, so you can pass whatever tests they throw at you and move on to where you’ll learn what computing is really about.'

In our paraphrasing of the correct-answer compromise we intend no critique of the student or the teacher: the teacher unversed in computing’s core generative problems3 can do little better than take a teleological approach; the student uninspired by dreams of the problems they can solve with computing knowledge will have no entrance to their detailed study. Neither is culpable.

Add to this the darker forces that make Gardner’s correct-answer compromise the most practical path: a (UK) educational system weighted towards the recording of exam performance with league tables, based thereon, determining how desirable a school is in a market of supply (schools) and demand (parents, not students).

Anyway, back to the core generative problems: computing is a young discipline. So young, in fact, that we are still exploring the problems, let alone determining their solutions. That means two things: we must be careful to understand that no single author (not least those of this article) has a complete, correct view of what they are; and there is a critical need for a debate over their nature.

With this in mind, there are three core generative problems we’d like to focus on:

External agency influence

This problem is to have a device that performs the same operations repeatedly and reliably, where the goal of those operations is determined by some external agent.

This is the problem that Charles Babbage overcame in his computations for the Admiralty with the difference engine; it was also the problem that Tommy Flowers mastered in building Colossus, the world’s first programmable electronic computer, to read German war messages at Bletchley Park.

Amongst Babbage’s contributions were physical architectures by which cards could control cogs; Flowers contributed electronic architectures by which electrons could be cajoled into following certain paths under the control of programs.

Flowers’ work at Bletchley was, of course, coincident with Turing’s, who had already provided the new thought tools needed to think about the core generative problem of external agency influence. This problem motivates, to a great extent, the mathematical approaches to computation, and so much of formal methods, concurrency theory, and related sub-disciplines4.

The development of logical and physical architectures, whether von Neumann or non-von Neumann, upon which successful computation can be built belongs here too, as do the computational paradigms: imperative, procedural, logical, functional, object-oriented, and the rest.

Coping with complexity

The second problem is how to grow a few lines of code to many lines - many millions of lines - so that useful computational work within a complex environment can be achieved.

The realisation that early techniques didn’t scale to complex situations was termed the ‘software crisis’; in response we got software engineering, including Royce’s contribution of process architectures by which many people could work together collaboratively on complex artefacts. Such processes have resulted in the most complex artefacts ever created.

From Royce’s work came the need for (software) requirements engineering. Thought tools were provided by others: high-level languages, software architectures, software patterns and other structuring mechanisms were each responses to environmental complexity, as were databases, ERP systems, and the like.

Computation in complex environments also motivates many of the characteristics that software must have to justify its place there. Characteristics such as reliability, robustness - and so the sub-discipline of testing - also performance and efficiency - whence sophisticated models of concurrency - and usability, configurability and a whole host of other -ilities - which mandate consideration of the system as primary, rather than just the computational entity.

Coping with volatility

The last in this short list is the problem of building and working with delivered complex computational products in an environment that is highly volatile. Volatility arises for many reasons.

Stakeholders change their minds without warning, they discover new or different needs, the context of the system changes, and so on. This motivates ever more complex and adaptive process and software architectures, and the move to agile processes, which permit quick responses to discovered change.

Volatility also motivates the need for flexible computational entities, which has in turn motivated neural networks, autonomous systems, ontologies and the like. Coping with volatility is an area in which the process of discovery isn’t yet complete.

To understand computing’s response to the core generative problems is to understand computing. It provides the historical context and the structure for its development, and how its elements interrelate: we couldn’t build complex computations before we could build computations; we couldn’t consider volatility in a complex environment until that complexity could be managed. The core generative problems determine the place of computing in the world, its scope and the real-world relevance of its sub-disciplines.

Finally, teaching based on the core generative problems impacts the sustainability of computing as a discipline by allowing the student of computing initially to gauge whether their future lies in computing or elsewhere, and then to dream of problem solving with the tools that computing provides, inspiring them to greater things.

The greatest computing minds have worked to solve these problems; perhaps still greater minds are yet to come. As educators, we should debate the nature of computing’s core generative problems. It’s a debate that’s much too important for politics to guide. Without it, computing could be lost to teleology for another generation.


  1. Schmidt said the country that invented the computer was ‘throwing away [their] great computer heritage’ by failing to teach programming in schools. ‘I was flabbergasted to learn that today computer science isn’t even taught as standard in UK schools,’ he said. ‘Your IT curriculum focuses on teaching how to use software, but gives no insight into how it’s made.’
  2. Wikipedia.
  3. Perhaps because they have come from another discipline; Computing graduates who enter teaching are few and far between.
  4. To the greatest extent, and with notable exceptions, Computer Science is Computing’s response to the External Agency Influence problem.