Charles Ross HonFBCS discusses the exciting possibilities of the convergence of mathematics, physics, computer science, philosophy and cognitive neuroscience, following the presentation in March 2015 to the Real Time Club and the Royal Institution by philosopher Daniel Dennett.

The group that started the British Computer Society in the early 1960s sensed that a computer was somehow more than just another machine that could do calculations.

The term ‘electronic brain’, used by Lord Mountbatten to attract attention when he was President; the promotion of research into ‘artificial intelligence’; the audacious concept of a computer participating in, let alone passing, the ‘Turing Test’ - all suggested that here was a game changer of massive potential.

Over in the scientific community, Crick and Watson had only just discovered the structure of deoxyribonucleic acid, DNA, and launched biogenetics. In the rest of the scientific community, studying the brain was out of fashion.

Much had been discovered about neurons, axons, dendrites and synapses, but how memory worked, and the processes of thinking, creativity and particularly consciousness, were considered the ‘hard’ problem, and at least one President of the Royal Society considered studying the subject to be career suicide. In 1979 Douglas Hofstadter published Gödel, Escher, Bach, which helped set off new interest in cognitive neuroscience.

The cognition problem

The development of the word processor enabled everyone to write papers on every conceivable subject and then the internet enabled everyone to publish their papers. The Cybernetics Machine Specialist Group started to debate cognition. Surely there was a similarity between the hardware of a computer processing various software programs and the neurons of the brain processing electrochemical patterns: what we call the mind.

Both are processing information. We had invented software, and were beginning to discover that something very close to software drives our brains. And, lo and behold, something remarkably akin to software turns DNA into amino acids, proteins and you and me. Suddenly these various disciplines are beginning to converge. In March 2015 the Real Time Club and the Royal Institution invited the philosopher Daniel Dennett to present his views on convergence from the point of view of cognitive neuroscience.

In fact, ‘convergence’ has come full circle. His chapter on programming in the book Intuition Pumps and Other Tools for Thinking is an excellent primer for the non-specialist**. He also makes several salient points on convergence.

‘Our present transistor-based systems,’ he says, ‘are only the current “hardware” that runs our “software”. Researchers all over the world are racing to replace them with...biological mechanisms using DNA and proteins.’

The information

Two Oxford physicists - Distinguished BCS Fellow David Deutsch, and Chiara Marletto - have remarked upon the trend: ‘When we consider some of the most striking phenomena permitted by the laws of physics - from human reasoning to computer technologies and the replication of genes - we find that information plays a central role.’

Biogeneticist J Craig Venter has written recently that we have now entered what he calls ‘the digital age of biology’. Here, he says, ‘the once-distinct domains of computer codes and those that program life are beginning to merge.’

The fascinating nature of this particular convergence is also noted by Edinburgh physiologist Jamie A Davies. Exceptionally valuable insights into human development have, he says, ‘been contributed by researchers in fields that might seem at first to have nothing to do with the topic, such as mathematics, physics, computer science and even philosophy.’ Microsoft and UCL have announced a joint program to explore ‘the convergence of carbon- and silicon-based life forms - the “interactome”.’

It is exciting that mathematicians and philosophers are working with biologists and computer scientists. But our excitement should go deeper than that. For the last fifty years the world has thought of computers only as useful information processors. However, just as Galileo’s telescopes and Hooke’s microscopes did, computers seem to give us a means of seeing things we have not been able to see before, creating a new opportunity.

This is not just about collaboration between fields; it seems there is genuine convergence. It is becoming clear that many of our most complex challenges are, in fact, just one challenge: understanding information processing, whether it is carried out by the human brain, the universe, DNA or a silicon-based computer.

Research is uncovering remarkable similarities between three computing systems in particular: the coding, structure and systems used by DNA to build living organisms; the processes that enable the brain to develop memory, learning and thinking; and the way we program computers to carry out ever more sophisticated tasks. It seems likely that greater exchange of ideas and closer cooperation will enable these disciplines to learn from each other and expand everyone’s knowledge.

Of the three systems, computing is the one we understand best, because we have built its code from the ground up. In his 1936 paper ‘On Computable Numbers’, computer pioneer Alan Turing outlined how it should be possible to perform almost any calculation with a machine using just three kinds of instruction: add, subtract and a conditional jump.

These instructions are compiled into a coding system we call ‘software’ (to differentiate it from the physical ‘hardware’). Thus was born the science of software, and it seems that studying how computer code works might be our best hope of gaining insights into the functioning of the other two systems.
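As a purely illustrative sketch (in Python, with invented instruction names and an invented program - not Turing’s own notation), a tiny register machine whose only operations are increment, decrement and a conditional jump can already perform ordinary arithmetic:

```python
# Toy register machine: the only operations are increment ('add 1'),
# decrement ('subtract 1') and a conditional jump. Illustrative only.

def run(program, registers):
    """Execute a list of (op, args...) tuples until the program counter runs off the end."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":                          # add one to a register
            registers[args[0]] += 1
            pc += 1
        elif op == "dec":                        # subtract one from a register
            registers[args[0]] -= 1
            pc += 1
        elif op == "jz":                         # jump to 'target' if the register is zero
            reg, target = args
            pc = target if registers[reg] == 0 else pc + 1
    return registers

# Compute a + b, leaving the answer in register 'a'.
add_program = [
    ("jz", "b", 4),      # 0: if b is zero, jump past the loop - we are done
    ("inc", "a"),        # 1: a += 1
    ("dec", "b"),        # 2: b -= 1
    ("jz", "zero", 0),   # 3: 'zero' is always 0, so this always jumps back to step 0
]

print(run(add_program, {"a": 3, "b": 4, "zero": 0}))   # {'a': 7, 'b': 0, 'zero': 0}
```

Everything richer - multiplication, comparison, whole programs - is built by layering such primitive steps on top of one another.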

Our growing familiarity with the concept of software has perhaps inured us to the fact that software is little more than information management - and this is a task also carried out by the machinery of life.

Eight years after Turing’s breakthrough, the Austrian physicist Erwin Schrödinger published a controversial book called What is Life? Here, he applied the rules of physics to the thorny issues confronting the biologists of the day and came up with a remarkable hypothesis. Life, he said, had to obey the laws of physics that govern the entire universe.

That must mean that chromosomes contain ‘some kind of code-script determining the entire pattern of the individual’s future development’. He even suggested that the code could be as basic as a binary code of the kind that Turing had used to create the era of mechanical computing.

The code of life

Following Crick and Watson’s discovery of the structure of DNA, we now know that Schrödinger was right. Complex strands of DNA are made up of billions of base pairs, formed from four nucleotides (guanine, cytosine, thymine and adenine), arranged in the double helix structure.

Triplets of these bases form ‘codons’. Patterns of codons specify a ‘gene’, with ‘start’ and ‘stop’ instructions to define its beginning and end and to determine which codons are used for which purpose. Thousands of genes make up the twenty-three pairs of chromosomes that specify the complete genome that creates a living organism.

This is the code of life. But, as with an unpowered electronic computer, this DNA code can do nothing by itself; it is inert. Only when electrochemical energy passes through it is the code accessed. This happens through the creation of RNA, which enables the genes to specify sequences of the twenty amino acids.

These are assembled into enzymes and other proteins that, in turn, build the cells that carry out every function in the whole body. Thus, through this hierarchy of structures, or subroutines, RNA converts one long string of nucleotides into the most complex systems on Earth.
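The reading process described above - find the ‘start’, translate codons into amino acids, halt at the ‘stop’ - can be sketched in a few lines of Python (a toy illustration with a deliberately tiny codon table and an invented RNA string, not a biological model):

```python
# Illustrative translation: read an RNA-like string three bases at a time,
# beginning at the 'AUG' start codon and stopping at a stop codon.

CODON_TABLE = {
    "AUG": "Met",   # start codon (also codes for methionine)
    "UUU": "Phe",
    "GGC": "Gly",
    "GAA": "Glu",
    "UAA": "STOP",  # one of the three stop codons
}

def translate(rna):
    """Return the amino-acid sequence encoded between the start and stop codons."""
    start = rna.find("AUG")
    if start == -1:
        return []                        # no start codon: nothing is built
    protein = []
    for i in range(start, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3], "???")
        if amino == "STOP":
            break                        # stop codon ends the protein
        protein.append(amino)
    return protein

print(translate("CCAUGUUUGGCGAAUAAGG"))  # ['Met', 'Phe', 'Gly', 'Glu']
```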

Unlike computer code, the genetic code appears to have the ability to change itself at random. In some cases, these changes are the result of copying errors when the code is ‘read’ and replicated in the nucleus. If such an ‘error’ leads to a significant physical change which enhances the organism’s chances of survival, the mutated gene will be passed on and become part of the species’ genetic inheritance.
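That mutate-and-select loop can be caricatured in a few lines of Python (a toy sketch with an invented ‘fitness’ score standing in for survival, not a model of real genetics):

```python
import random

# Toy mutate-and-select loop: a random copying error is kept only if it
# improves 'fitness' (a stand-in for the organism's chances of survival).

random.seed(1)
genome = [0, 1, 1, 0, 1, 0, 0, 1]

def fitness(g):
    return sum(g)                 # invented score: more 1s = 'fitter'

for generation in range(20):
    copy = genome[:]
    i = random.randrange(len(copy))
    copy[i] ^= 1                  # a random error flips one 'base'
    if fitness(copy) > fitness(genome):
        genome = copy             # the beneficial change joins the inheritance

print(genome, fitness(genome))    # fitness will almost certainly have risen from its starting value of 4
```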

To fully understand a code we need to know the function for which it was designed. Computer code is written to enable lumps of inorganic materials to perform large, often repetitive calculations and tasks quickly and accurately. DNA’s main function is to reproduce life forms that will be successful in their environments.

The brain’s coding appears to have the purpose of extending the efficacy of DNA’s replication function. Our brains provide three extra resources to this end.

First, it seems that inherited DNA coding has equipped some organisms’ brains with hardwired instructions that give them instinctual knowledge of a few survival tactics specific to their native environment. Second, the brain can strengthen the organism’s chances of success by coordinating its motor resources to respond to danger and to seek out and exploit opportunities for sustaining life. Third, it enables the organism to learn from experience.

Brain code

This multi-functional system we call the brain, then, hosts the most interesting and difficult-to-access coding system of all. The basic mechanisms are broadly understood.

The brain’s neurons are connected across a gap, or ‘synapse’. Messages, in the form of electrochemical pulses, arrive along ‘dendrite’ filaments, which run to the cell nucleus from every organ in the body and from other neurons; output pulses travel from the nucleus along ‘axon’ filaments to every muscle, gland, organ and other neuron, stimulating an action response.

There are some similarities with better-known forms of computing. Though the neuron nuclei initiate signals, the operation of the synapses is reminiscent of transistors (with analogue rather than digital signals).

However, as we understand it, the brain has one significant difference from DNA and digital computers: an ability to grow new ‘hardware’ in the form of new links between neurons and the sensory organs monitoring the environment. The psychologist and neuroscientist Donald Hebb’s principle is often summarised very succinctly as: ‘neurons that fire together, wire together’.
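Hebb’s rule reduces to a one-line update. The sketch below (illustrative only; the activity patterns and numbers are invented) strengthens the connection between two units every time they are active together:

```python
import numpy as np

# Minimal Hebbian update: the weight between two units grows each time they
# are active at the same moment - 'fire together, wire together'.

n_units = 4
weights = np.zeros((n_units, n_units))   # all connection strengths start at zero
learning_rate = 0.1

# A stream of activity patterns; units 0 and 1 are usually co-active.
patterns = [
    np.array([1, 1, 0, 0]),
    np.array([1, 1, 0, 1]),
    np.array([0, 0, 1, 0]),
    np.array([1, 1, 0, 0]),
]

for x in patterns:
    weights += learning_rate * np.outer(x, x)   # Hebb: dw_ij = rate * x_i * x_j
    np.fill_diagonal(weights, 0)                # ignore self-connections

print(weights[0, 1])   # roughly 0.3 - units 0 and 1 fired together three times
print(weights[0, 2])   # 0.0 - units 0 and 2 never fired together
```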

By the time it is born, a child has grown some two billion neurons. Half of these connect up all the organs throughout the body, with the other half concentrated in the brain. It has been known for some time that a mature adult brain grows on the order of a trillion additional neural connections.

Recent research counting neuron nuclei suggests that a mature brain has some eighty billion neurons. In effect, the brain continuously grows its own network hardware: one estimate suggests we grow between a thousand and a million new neural links or structures every second.

How does this happen? When a stimulus from a sensory organ is received (as an electrochemical pulse travelling up a dendrite), the brain transmits pulses across the network in an attempt to trigger a pathway formed by an earlier occurrence of the same experience.

If it finds an existing path (a ‘yes’), the neuron activates (and strengthens: learns) the same response used by the earlier experience. If it doesn’t (a ‘no’), it triggers the growth of a new link (a ‘conditional jump’), laying the new experience into the brain’s memory stores. Eric Kandel won his Nobel Prize for demonstrating this basic learning process in the very primitive brain of the snail Aplysia.
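That ‘strengthen or grow’ behaviour can be caricatured as a growing associative store: each stimulus either reinforces an existing pathway or lays down a new one. The following is a toy sketch of the idea (the names and responses are invented; it makes no claim to biological accuracy):

```python
# Toy version of the described process: a stimulus either strengthens an
# existing 'pathway' (recognition) or triggers the growth of a new one
# (learning). Illustrative only.

class ExperienceStore:
    def __init__(self):
        self.pathways = {}   # stimulus -> (response, strength)

    def perceive(self, stimulus, default_response):
        if stimulus in self.pathways:                 # 'yes': an existing path fires
            response, strength = self.pathways[stimulus]
            self.pathways[stimulus] = (response, strength + 1)   # use strengthens it
            return response
        # 'no': grow a new link, laying the experience into memory
        self.pathways[stimulus] = (default_response, 1)
        return default_response

brain = ExperienceStore()
brain.perceive("hot stove", "withdraw hand")   # first encounter: a new pathway grows
brain.perceive("hot stove", "withdraw hand")   # repeat: the existing pathway strengthens
print(brain.pathways["hot stove"])             # ('withdraw hand', 2)
```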

Interestingly, this choice - follow an existing path or jump to grow a new one - mirrors the conditional branching of the computing machine that Turing envisioned. The result, though, is a massive relational database of trillions of connections linking up related experiences. Very quickly, it has so much information stored in its neurons that some sort of hierarchy of focus has to develop; this allows the organism to respond to imminent danger as fast as possible, while at other times allowing the system to pause, reflect and develop more efficient responses.

There is much left to discover, but there is good reason for optimism in the convergence of the research into biological, computer-based and neurological information processing. The more we learn to develop our computers, the more this can inform our understanding of biogenetics and cognitive neuroscience.

Learning more about the way DNA is able to grow the hardware of our bodies, in turn, helps us to learn more about how the brain grows all the neural networks and structures that enable us to learn, to think and to be creative. The feedback loop goes on: the more we learn about both DNA and the brain, the better equipped we will be to conceive and design new generations of computers.

In summary, semiconductors, nucleotides and neurons all create incredibly complex structures out of very simple basic units capable of being switched ‘on’ or ‘off’. We know that the elegant simplicity of this architecture can create computers capable of assisting in open-heart surgery and putting Man on the Moon.

The same basic architecture allows DNA to specify the creation of new life. The human brain’s ability to create complexity from simple structures enables the creation of minds capable of designing those computers and understanding DNA.

The revolution

So we have, indeed, given birth to a revolution, just as those pioneers sensed fifty years ago. This article has concentrated on science but the reach of computing is wider still.

In a paper for the Bank of England’s One Bank Research Agenda entitled ‘The Economic Consequences of the Computer Revolution’, the Real Time Club argues that the computing revolution will match and exceed the wealth creation of the industrial revolution but that, in parallel, it is beginning to overturn much of the work and employment environment that the developed nations have enjoyed for the last three hundred years.

As we probe further into the links between these systems, we can be confident that their remaining secrets will be uncovered and harnessed for specific applications. There are many medical, technological, economic and humanitarian reasons to pursue this convergence; it may be the key to massive improvements in healthcare, in manufacturing through artificial intelligence, and in innovations that enable us to overcome challenges such as climate change.

We have uncovered a route to a better world, and must now encourage philosophers, physicists, computer scientists, psychologists and biologists to grab the opportunity with both hands.

**This chapter from Professor Dennett’s Intuition Pumps and Other Tools for Thinking (Allen Lane, 2014) is being excerpted for distribution.

This chapter shows, with wonderful clarity, how a box of integrated circuits can be programmed to beat a chess master and thereby uses our knowledge of computing to help demystify the apparent complexities of the brain. It is probably the best description of the fundamental architecture of software ever written. The reprint is available through Amazon, either for Kindle or as print on demand hardcopy.

‘Convergence’, by Daniel Dennett, published by the Brain Mind Forum, can also be downloaded from http://BrainMindForum.org, which also has a video of Professor Dennett’s lecture.


Charles Ross' obituary is printed in ITNOW, Volume 61, Issue 1, Spring 2019, Page 28.