The second in a series of four articles by Charles Ross and Max Jamilly on the implications of the convergence of computing, biogenetics and cognitive neuroscience. This instalment looks at communication between the brain and computers and how it can expand human abilities.

Computers have found their way into nearly every corner of our lives, but in the last few years we have begun to realise they can do much more than just process information. They are beginning to help us, directly and indirectly, to understand how other systems operate: how the brain learns, remembers and recalls information; how DNA is the blueprint for human beings. The work being done in this field was outlined in the Autumn 2015 edition of ITNOW.

In the first article of the series, we explored what we are beginning to learn from comparing and contrasting the brain and computers, and the benefits both communities can gain from studying the basic architecture of each other’s systems.

However, computing can offer us an even greater prize. Professor Daniel Dennett, arguably the world’s leading philosopher of cognitive neuroscience, has sparked an interdisciplinary debate on the novel potential of computers: ‘How can we communicate directly with our computers as partners to extend everyone’s personal ability to learn, understand, master complex topics and even learn to be more intelligent?’

There has been a growing trend in the media to disparage computers, depicting them and ‘artificial intelligence’ as an evil force that could take over the world - a point of no return that some people call the ‘singularity’. A far more positive approach is to explore how we can work better together. The possibilities are extremely exciting.

Unlimited opportunities

Alan Turing argued in his 1936 paper, ‘On Computable Numbers’, that if we could express the solution to a problem as a step-by-step procedure - an algorithm - then we could write a program enabling a general purpose machine to execute that algorithm and so solve the problem. As early as 1956, a group at Dartmouth College in the USA argued that ‘all we needed was to define intelligence and we could build intelligent computers’. In principle, this statement is true.
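
As a toy illustration of the ‘general purpose’ point (our own example, not Turing’s), the sketch below shows one unchanging executor running whichever algorithm it is handed; only the instructions change, never the machine.

```python
# One general-purpose executor, many algorithms: the machine stays the same,
# only the instructions it is given change (a toy illustration, not Turing's own).
def run(algorithm, data):
    return algorithm(data)

double_all = lambda xs: [2 * x for x in xs]
sort_desc = lambda xs: sorted(xs, reverse=True)

print(run(double_all, [1, 2, 3]))   # [2, 4, 6]
print(run(sort_desc, [1, 2, 3]))    # [3, 2, 1]
```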

However, the problem lies in ‘defining intelligence’, which has turned out to be far more complicated than we expected. In his magisterial book Superintelligence, Nick Bostrom quotes John McCarthy, one of the original Dartmouth team: ‘as soon as a program works, whether it is beating a chess master or air traffic control, it is no longer seen as intelligent’. Intelligence, it seems, is always something more, always something still out of reach.

We are on the cusp of having the entire body of human knowledge, art, culture, commerce, instructions and emotions instantly available on our smartphones. However, having access to information, in print or online, is not the same as learning and understanding that information - and it is for this understanding that we still need the human brain.

Computing is also fundamental to scientific research in two different ways. First, as an analytical tool: DNA sequencing, for instance, would not be possible without computers. Secondly, everyone can now publish their work, and we use computers as search engines to find these papers.

Dramatic as these achievements are, we are only using our most advanced computers as we have used all our inventions down the centuries - as tools. They do not improve our brains or memories. However, the potential is there to do just that: to integrate them into our neural systems. But how? Let us look at this from the perspective of evolution.

The two ‘great game changers’ for solving problems in our distant past were the use of tools and the emergence of language. Anthropologists are divided on whether the evolution of the opposable thumb, which let us grip, caused the use of the earliest tools - sticks and flints - or was the result of it. However, there is little doubt that language made a major impact on the evolution of the brain.

Providing a communication system was of immense value, but so too was the need to evolve networks of neural systems that could accommodate the massive volumes and complexities of words, speech and language. Then there was the ability to develop and minutely control the muscles of the lungs, throat, tongue and mouth to speak, and the hearing system to recognise sounds. Writing brought the skill of manipulating the fingers to a degree of accuracy seen in no other species. Nevertheless, in spite of everything we have learnt in modern times and all the technology we have developed, our fundamental intelligence has probably not advanced. Until now.

Now we have two more ‘great game changers’: the ability to edit DNA and the ability to design computer systems. We now have the potential to grow better brains.

Access to the world’s knowledge

Humans have been amassing and disseminating information since the beginning of recorded history, from ancient wall paintings and the great library of Alexandria, through encyclopaedias and dictionaries, to Wikipedia and the scientific literature. The challenge lies in turning all this information into knowledge.

Learning

In the previous article we explored the central function of every cognitive, sentient brain: the ability to store and recall information; to create memory. We are reasonably sure that we know how the brain grows the trillions of new links and structures found in a mature brain, and the process is quite automatic. Every time we use these links, we cross-reference them to other concurrent activities, continuously strengthening them, integrating them and incorporating them into our neural networks.

The more cross-references and links we grow, the stronger and more sophisticated our understanding of the implications of that information becomes, and the more access routes there are to recall it; thus we gradually expand the meaning of that information and begin to convert it into knowledge.
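
As a toy sketch of this idea (our illustration only, not a model of real neurons), the snippet below reinforces a link every time two items are recalled together; the more varied the use, the denser the web of access routes.

```python
from collections import defaultdict
from itertools import combinations

# Toy sketch only - not a model of real neurons: each time two items are
# recalled together, the link between them is reinforced, giving more
# access routes back to the memory.
links = defaultdict(float)   # (item_a, item_b) -> link strength

def recall_together(items, boost=0.1):
    """Strengthen every pairwise link among items recalled in the same context."""
    for a, b in combinations(sorted(items), 2):
        links[(a, b)] += boost

# Repeated, varied use builds a richer web of cross-references.
recall_together({"bicycle", "balance", "pedal"})
recall_together({"bicycle", "balance", "road"})
recall_together({"bicycle", "pedal", "road"})

print(sorted(links.items(), key=lambda kv: -kv[1])[:3])
```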

Because memory is the foundation of all learning, nearly every generation has tried to invent ways of helping us learn to remember. The earliest known method, the system of loci - remembering a string of facts by their position in a visual setting - dates from around 500 BC, when Simonides recalled the names of all the guests at a banquet by remembering where each had been sitting. Hundreds of proprietary systems have been promoted over the centuries.

Curiously, these systems often work very well for their inventors and their disciples, but for few others. Self-confidence and belief seem to play a part. A partial list is at http://bit.do/bmfappendix_1.

The volume of information is growing exponentially. Over three hundred people are employed to keep the Oxford English Dictionary up to date - and that is just new words! Yet no one has ever reported that their brain is ‘full’. On the contrary, it seems that the more we learn, the more we can learn. It seems plausible that this is largely because all learning is incremental.

Children learn words and the associated sensations, then link words together, endlessly building ever more complex patterns. Phrases build into concepts, where the whole is greater than the sum of the constituent parts. Throughout life we build outlines of new theories, like frameworks and scaffolding, into which it becomes ever easier to add more details, increasing the sophistication of existing memory structures. Learning something completely new is always much more difficult.

This has all happened before, and we can learn from the experience. Writing made it possible for us to download our thoughts, extending our sentient capabilities, providing a way of transferring knowledge from one person to many, and giving us a storage medium to pass information from each generation to the next. This worried Socrates, who feared it would downgrade human memory. Fortunately, the opposite was true, as text and books freed up the brain to push out the boundaries and frontiers.

We can now follow the same strategy but on a far more comprehensive scale. We can download relevant information from all sources, and build our own annotated personal files and databases of any subject we are particularly interested in. We can begin to use our neural brain as a form of index to our personalised silicon databases, but the intriguing target is to cut out the intermediary.
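
A minimal sketch of what such a ‘personal silicon database’ might look like (the titles, tags and annotations below are hypothetical, purely for illustration): annotated records indexed by keyword, so that a half-remembered cue from the brain is enough to pull back the full detail.

```python
from collections import defaultdict

# Hypothetical example data: annotated personal notes indexed by keyword.
notes = [
    {"title": "System of loci", "tags": ["memory", "history"],
     "annotation": "Remembering facts by their position in a visual setting."},
    {"title": "On Computable Numbers", "tags": ["computing", "algorithms"],
     "annotation": "A general purpose machine can execute any algorithm."},
]

index = defaultdict(list)            # keyword -> matching notes
for note in notes:
    for tag in note["tags"]:
        index[tag].append(note)

def recall(cue):
    """The brain supplies the cue; the database supplies the detail."""
    return [f"{n['title']} - {n['annotation']}" for n in index.get(cue, [])]

print(recall("memory"))
```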

In due course we may be able to create prosthetic memories so that we can directly interface them together. That is what the next articles in this series explore.

Structure of information

We can learn something else that is very useful from our computers. Every digital data file can be interpreted as a program, or facts, or algorithms, or pictures, or music, or whatever we choose. To an outside observer, all files appear identical.
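
A simple illustration of the point (our own example): the same handful of bytes can be read as text, as numbers, or as pixel intensities, and nothing in the data itself says which interpretation is the ‘right’ one.

```python
# The same six bytes, read three different ways: nothing in the data itself
# says whether it is text, numbers or part of an image.
data = bytes([72, 101, 108, 108, 111, 33])

as_text = data.decode("ascii")          # 'Hello!'
as_numbers = list(data)                 # [72, 101, 108, 108, 111, 33]
as_pixels = [b / 255 for b in data]     # greyscale intensities between 0 and 1

print(as_text, as_numbers, as_pixels)
```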

Cognitive neuroscientists have tended to divide memories into experiences, ‘episodic memories’, and activities, ‘procedural memories’. There is a strong argument that the neural structures of both are similar. More interestingly, it seems likely that at the neural level they are also indistinguishable from frameworks of information and from the representation of pictures and music. When a new stream of signals arrives, the brain-mind has no way of knowing what this information is or what it might become. An experience? An activity? New words? Significant or irrelevant? The brain can only process them in the same way.

The word ‘clusters’ is currently popular to describe the neural structures of information. To describe procedures, Donald Hebb proposed ‘engrams’ and Arthur Koestler suggested ‘holons’. To represent groups of experiences, Daniel Bor contributed ‘chunking’. To describe concepts, Richard Dawkins promoted ‘memes’. The full list is at http://bit.do/bmfappendix_2.

What they have in common is that these are all bioprograms - mental subroutines, neural modules - perhaps we should call them ‘neurules’. Both neurules and clusters develop in the same way, as usage builds them into hierarchies of ever greater complexity and content.

Let us look more carefully at the acquisition of knowledge

There is another example that demonstrates learning and throws light on the controversial debate about the relationship between intelligence and thinking. Many of us recall learning to balance a bicycle (or to swim, play a musical instrument, drive, learn the multiplication tables, or understand relativity...). Initially we are all over the place; then everything falls into place and we ride away (or swim, or wonder how we ever found quantum mechanics difficult).

It all now seems effortlessly easy, as though on ‘autopilot’. We have successfully grown the neural networks that automatically store the knowledge and the algorithms, and execute the task. The task we began by thinking impossible now seems almost easy. This is strikingly similar to John McCarthy’s lament that ‘as soon as a computer program works… it is no longer seen as intelligent’. Can we learn something useful from this remarkable similarity between the way the brain and computers learn and implement new skills and knowledge?

People gradually add to their library of skills and knowledge. Faced with a problem, the autonomic part of the system executes the relevant neural network in ‘background mode’ and we can consciously think of something else. We recognise this as a major aspect of intelligence, and can track its evolution back to long before the mammals. Animals that responded quickest with the best available solution survived.

There are now computer programs that beat humans in dozens of games and puzzles that seemingly require ‘intelligence’. In commerce and industry, successful applications of computer intelligence are legion. However, if a particular program is not loaded onto a computer it can do nothing, so the conclusion is that while computers can carry out ‘intelligent’ programs, they do not have any ‘general’ intelligence.

While the brain has the autonomic system, computers have their operating systems. Modern sophisticated operating systems already have many of the attributes of general intelligence, selecting the best solution available.

What the brain has, and computers do not, is the ability to create new ideas and concepts, and invent new solutions ab initio. That is what we call ‘thinking’, which is very different from intelligence. As Leonardo da Vinci remarked, ‘Everything we know was invented in a human brain.’ We are a long way from augmenting, let alone replacing, that ability with our programs.

Thus our strategy must be to design ways of making our known information available to our intelligent brains as directly as possible, and to free up our sentient brains, as far as we can, to concentrate on what we do best - and what no other system in the world can do: speculate, originate, invent, imagine, predict, conceive; or, in a word, think.

Education

The most dramatic developments are coming in education. A lot of work is going into designing systems that go further than listing information and actually teach the meaning, implications and significance of information: massive open online courses, or ‘MOOCs’. Algorithms are being developed to enable online students to ask questions and receive answers from the presenter as though they were in each other’s presence.

We will increasingly be able to fit teaching styles to individuals, rather than the one-size-fits-all classroom familiar to pupils for the last two millennia. Thus the transfer of knowledge from brain to brain will change out of all recognition. It is all too obvious that many adults are round pegs in square holes, and far too much potential talent is wasted as a result of the limitations of our historical methods of selection and education.

Monitoring the brain’s learning mode

Researchers at Tufts University in Massachusetts are developing a completely different approach to learning. By fixing a number of inconspicuous optical sensors to the forehead they can monitor brain activity using functional near-infrared spectroscopy. The aim is to build up profiles of neural activity as the brain-mind carries out a variety of different tasks.

They report progress in identifying when an individual’s brain is working hard on a project, like learning to play a piano piece, and when the brain is getting tired. They hope to develop algorithms that will enable them to ‘tune the brain’, varying the input to maximise working capability and minimise tiredness, optimising learning capacity for the various tasks in hand.
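
A crude sketch of the kind of feedback loop this implies (our illustration only; the Tufts algorithms have not been published in this form): estimate workload from the sensor signal, then nudge the task difficulty up when the learner is coasting and down when they are overloaded or tiring.

```python
# Illustrative only: a toy 'tune the task to the brain' loop. The workload
# values stand in for whatever a real sensing pipeline would produce; this is
# not the Tufts algorithm, just the general shape of the idea.
def adjust_difficulty(difficulty, workload, low=0.3, high=0.8, step=0.1):
    """Raise difficulty when the learner is under-loaded, ease off when overloaded."""
    if workload < low:
        return min(1.0, difficulty + step)   # coasting: make the task harder
    if workload > high:
        return max(0.0, difficulty - step)   # struggling or tiring: ease off
    return difficulty                        # in the productive zone: hold steady

difficulty = 0.5
for workload in [0.2, 0.25, 0.5, 0.9, 0.85, 0.6]:   # simulated readings
    difficulty = adjust_difficulty(difficulty, workload)
    print(f"workload={workload:.2f} -> difficulty={difficulty:.2f}")
```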

It would be a great breakthrough if this technology were able to identify different brain types and different skill potentials - identifying talent at the earliest opportunity across whole communities.

Ideas that are science fiction to one generation become commonplace to the next. Technology tends to be cumulative. Once a small breakthrough occurs, the massed ranks of sceptics soon jump on the bandwagon.

What we can say for certain is that, rather than competing with robots, it is far better for us to work together with computers. The long age of using our inventions as tools is about to be superseded, as we increasingly design our inventions as partners, working in harmony to multiply each other’s potential.

The next two articles in this series will explore whether the future lies with symbiosis, using interfaces like language, or by direct connections - prosthetics. How we link our powerful human neural information processors to our quickly developing computer processors will determine how quickly they can build each other up and expand our overall capabilities.

Max Jamilly is a PhD candidate on the BBSRC/EPSRC Synthetic Biology Centre for Doctoral Training at the University of Oxford. He holds a BA in Biological Natural Sciences and an MPhil in Biotechnology Enterprise from the University of Cambridge. His interests lie in the interface between engineering and biology, and in using biology to build new tools for computation, industry and therapeutics. Max has researched epigenetics in Cambridge, UK and gene assembly in Cambridge, US. You can follow his writing and research at max.jamilly.com and twitter.com/UKsynbio

Charles Ross has spent fifty-eight years in both the entrepreneurial and research aspects of computing and has been associated with four world firsts in software design. He was elected a Fellow of the BCS for his early work on software diagnostics and an Honorary Fellow for jointly launching Computing with Haymarket Press, and for its subsequent sale. He assisted with the negotiations with Oxford University Press (publishers of ITNOW), then managed the Quantum Computing in Europe Pathfinder Project - the only contract the BCS has obtained with the European Commission. He is a founder member of the Real Time Club and Chairman of the Brain Mind Forum. He is the co-author, with Shirley Redpath, of Biological Systems of the Brain.

Charles Ross' obituary is printed in ITNOW, Volume 61, Issue 1, Spring 2019, Page 28.