Implications of convergence

April 2017

The fourth and final article in a series on the implications of the convergence of computing, biogenetics and cognitive neuroscience, by Charles Ross and Max Jamilly, looks at the new field of biological computing and swapping microprocessors for microbiology.

When there is a problem to be solved, two computers are better than one. In the past decade, impressive advances in computer hardware and programming have allowed us to harness the power of several processors at once. Thanks to parallel computing, we can solve problems more quickly and efficiently than ever before.

Yet in the world of biology, the tools for parallel computing have existed for millennia. Once again, nature beat us to it.

Parallel computing is the ability to execute multiple processes simultaneously. Traditional computers were programmed for serial computation, where each task could start only after the previous one was completed.

Modern computers, however, use multiple processors - sometimes in the same machine, sometimes distributed across the world - to divide a problem into smaller, independent tasks and work on them at the same time.

Predicting the weather, crunching scientific data, processing images and video, mining bitcoins - all these processes depend on parallel computing.
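The difference between serial and parallel execution can be sketched in a few lines of Python. The prime-counting task below is purely illustrative - any CPU-bound work divided into independent pieces would do:

```python
# A minimal sketch of serial vs. parallel execution, using only the
# Python standard library. The task (counting primes) is illustrative.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Count primes below `limit` by trial division (deliberately CPU-bound)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [20_000, 30_000, 40_000, 50_000]

    # Serial: each task starts only after the previous one has finished.
    serial = [count_primes(x) for x in limits]

    # Parallel: the same independent tasks, spread across processor cores.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(count_primes, limits))

    assert serial == parallel  # same answers, computed simultaneously
```

Because the four tasks share no information, the pool can hand one to each core and collect the results - exactly the decomposition that parallel computing depends on.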

Biological systems and man-made computers are not the same, but they have converged at different levels of abstraction to perform parallel processing. From communities of microbes to the human brain, nature can teach us profound lessons about computer architecture and, in turn, modern computers are the starting-point for a whole new kind of biological supercomputer.

In the final article of this four-part series exploring the convergence of computers and biology, we explore the novel ways that silicon and cells can combine and glimpse a future where our brains and computers are intertwined.

Small but powerful: bacterial supercomputers

Biological computers will not replace electrical ones any time soon. Designing biological computers is complex. Redesigning them for every new application is harder still. Living systems are difficult to scale up and they aren’t currently integrated with our existing infrastructure.

But there are some areas where biological computers excel, where their unique properties enable them to solve problems with a speed and efficiency that may never be matched by electrical computers. In particular, biological computers may be perfect for solving combinatorial problems.

In the language of computer science, an NP-complete problem is one for which no efficient algorithm is known: the time needed by the best known exact methods grows explosively with the size of the input. Since doubling the size of the problem results in far more than doubled work, finding the solution to an NP-complete problem can take a very long time. Every problem in this family can be reduced to every other, and approximate or heuristic algorithms often give good-enough answers, but scientists are still in keen pursuit of an algorithm which solves these problems quickly and exactly.

A famous example of NP-completeness is the burnt pancake problem, which imagines a stack of pancakes, all of different sizes, each of which has one burnt side. You want to rearrange the pancakes so that they are in order of decreasing size from the bottom up, and all the burnt sides are facing down; you can only rearrange pancakes by sliding a spatula into the stack and flipping over the pancakes above it.

The problem is to calculate the smallest number of spatula-flips required to rearrange the stack. (You’ll be in good company next time you ponder upon a stack of pancakes: in 1979, Bill Gates published his first and only academic paper on a version of this problem.)
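For very small stacks the answer can be found by exhaustive search. The sketch below uses an illustrative representation - each pancake is a (size, burnt-side-up) pair, listed from top to bottom - and a breadth-first search over every possible spatula flip:

```python
# Brute-force sketch of the burnt pancake problem, for small stacks only.
# Representation (an assumption for illustration): a stack is a tuple of
# (size, burnt_side_up) pairs listed from top to bottom; the goal has
# sizes increasing from top to bottom with every burnt side facing down.
from collections import deque

def flip(stack, k):
    """Slide the spatula under pancake k: reverse the top k and turn them over."""
    flipped = tuple((size, not burnt_up) for size, burnt_up in reversed(stack[:k]))
    return flipped + stack[k:]

def min_flips(stack):
    """Breadth-first search for the fewest spatula flips that sort the stack."""
    start = tuple(stack)
    goal = tuple(sorted((size, False) for size, _ in start))
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        state, flips = queue.popleft()
        if state == goal:
            return flips
        for k in range(1, len(state) + 1):
            nxt = flip(state, k)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, flips + 1))
```

The search space explodes as the stack grows - each pancake can sit in any position, either way up - which is precisely why exhaustive approaches like this one hit a wall so quickly.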

Tackling NP-complete problems like the burnt-pancake problem requires huge numbers of simultaneous calculations. Even modern parallel computers struggle with the demands on time, space and power consumption. Yet the mutual reducibility of NP-complete problems means that, with some creativity, they can be reprogrammed into biological systems ideally suited to massively parallel computation.

We saw in the previous article in this series that Leonard Adleman used the programmable interactions of DNA molecules to solve combinatorial problems. In 2009, a team of young researchers took this to the next level: they solved a combinatorial problem within living cells.

Cells may be the smallest units of life, but they are also units of processing power. Microscopic, self-sustaining, environmentally robust and continuously reproducing, cells hold real promise for combinatorial problems. A team led by Todd Eckdahl at Missouri Western State University demonstrated this principle in the bacterium E. coli on another NP-complete problem, the Hamiltonian path problem - a relative of the famous travelling salesman problem - which asks how to draw a path through all the points in a network, visiting each point only once.

They abstracted a very simple case of the problem and encoded it in DNA, then used bacterial machinery to flip sections of DNA within their genomes and thereby try all possible combinations of routes around the network. The bacteria flashed yellow when they had successfully solved the problem. A problem of a kind intractable to serial computers and very difficult even for parallel computers had been solved by simple bacteria.
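The contrast with serial computing is stark. A serial machine must try candidate orderings one after another, as in the sketch below (the graph and names are illustrative); each engineered bacterium, by randomly flipping its DNA segments, instead tests candidate orderings in parallel across the whole culture:

```python
# Brute-force sketch of the Hamiltonian path problem on a small,
# undirected graph (the graph and function names are illustrative).
# A serial computer checks one candidate ordering at a time.
from itertools import permutations

def hamiltonian_path(nodes, edges):
    """Return an ordering that visits every node exactly once along
    existing edges, or None if no such path exists."""
    edge_set = {frozenset(e) for e in edges}
    for order in permutations(nodes):
        # A valid path needs every consecutive pair joined by an edge.
        if all(frozenset(pair) in edge_set for pair in zip(order, order[1:])):
            return order
    return None
```

With n points there are n! orderings to check - the factorial growth that makes the problem intractable for serial machines, and so attractive for a dish of billions of bacteria working at once.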

Secrets of the brain

Building biological parallel computers in bacteria is difficult but, at the opposite end of the scale, natural parallel computers may already exist. There is much controversy over whether the human brain truly functions as a parallel computer. On the one hand, our brains appear to control simultaneously the myriad parallel functions of conscious information processing and autonomic regulation.

Conversely, many of these processes occur in independent structures, without any exchange of information. Rather than a single algorithm dividing one problem into parallel tasks (like each engineered bacterium testing several Hamiltonian paths), the brain simply applies multiple algorithms to separate problems, all at the same time. If the latter is true, then to call the entire brain a parallel computer would be to mistake concurrency for parallelism.

It is likely, nonetheless, that certain information-processing centres within our exquisitely complex brains really do perform some kind of parallel processing. For example, there is evidence that visual information is collected in a variety of channels such as shape, colour and velocity which are processed and decoded in parallel before being integrated in the visual cortex.

Yet we know very little about the mechanisms of the brain and its underlying programming architecture. Neurons (nerve cells) are connected in series, and information passes from one neuron to the next much more slowly than between processors in a parallel electrical computer. Whereas distributed electrical processors have clear, predefined roles in solving the overall problem, the connections between neurons are more like a tangled bush than a well-pruned tree, and their architecture is far harder to decipher.

Regardless of how accurate the brain-computer analogy may be, computer science has learnt a great deal from cognitive neuroscience. Humans are a testament to the fabulous complexity of our own brains.

Some of the most successful recent advances in computer science and machine learning - computer vision, pattern recognition, autonomous decision-making - are due to computer algorithms inspired by the way our brains seem to process information. These so-called neural networks are designed to mimic the brain's structure, behaviour and ability to learn.

Neural networks are not alive. Unlike the bacteria which were programmed to do parallel processing, a neural network is no more than computer code. The physical architecture of the silicon circuits that drive neural networks has nothing in common with the wiring of the brain.
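The structural idea being mimicked - layers of simple units, each taking a weighted sum of its inputs and applying a nonlinearity - can be sketched in a few lines. The weights below are hand-chosen for illustration (in a real network they would be learnt from data) so that three 'neurons' together compute XOR, a function no single such unit can compute on its own:

```python
# A minimal sketch of the neural-network idea. Each unit computes a
# weighted sum of its inputs plus a bias, then applies a nonlinearity.
# The weights are hand-chosen for illustration; real networks learn them.

def step(x):
    """A crude all-or-nothing activation, loosely analogous to a neuron firing."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    """One unit: weighted sum of inputs, plus bias, through the nonlinearity."""
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_network(a, b):
    """Two hidden units feeding one output unit, together computing XOR."""
    h1 = neuron((a, b), (1, 1), -0.5)      # behaves like OR
    h2 = neuron((a, b), (-1, -1), 1.5)     # behaves like NAND
    return neuron((h1, h2), (1, 1), -1.5)  # behaves like AND
```

The point of the sketch is the layering: neither hidden unit alone can separate XOR's inputs, but composing simple units lets the network represent functions its parts cannot.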

But the design principles shared between neural networks and human brains have proved very useful for studying the brain itself. Artificial neural networks have the power to model parts of the brain. By comparing the predictions of our models with the reality of the brain, we can optimise the models and enhance our understanding of neuroscience.

Brainets: a connected future?

Just as we can use biology to bring life to computers, so we can use computers to digitise biology. Perhaps the final frontier is to combine the two. Recent research into brain-computer interfaces and neural signal processing hints at a future where some brain functions could be integrated within computer systems.

Connecting electrical devices directly to the brain is not easy. It has been four decades since researchers first succeeded in conditioning a rhesus monkey to move a robotic lever using neural activity. Today, our ability to decode and encode neural firing patterns has progressed enormously.

A variety of brain-computer interfaces has been developed for therapeutic applications, ranging from visual implants to motorised limbs. Currently these devices are invasive, requiring implants directly into the grey matter or at least inside the skull. While non-invasive alternatives exist, they do not have the same resolution or precision.

Nonetheless, it may be possible to combine a wireless implant with a skull-mounted unit. Using more exotic technologies, researchers have managed to modulate the memories of rodents by precisely stimulating their neurons with light.

With the help of computers, brains can communicate with other brains. A group from the University of Washington managed to read an electrical signal from one subject’s brain (the sender) and write this to another subject’s brain (the receiver) by magnetic stimulation.

The sender and receiver cooperated to play a computer game where the sender’s decision to ‘fire’ was remotely transmitted to the receiver, who pressed a button controlling the game. Still more incredible was the work of a joint Harvard-Korean team which translated a human volunteer’s thoughts to control the tail of a rat.

If computers and brains can be trained to speak the same language, then why not build a network - the ultimate parallel biological computer? A team at Duke University led by Miguel Nicolelis built what they claimed to be a brain-network, or brainet, in which the brains of four rats were united by two-way cortical connections. In the culmination of their experiments, the researchers encoded a problem equivalent to predicting thunderstorms in North Carolina and delivered it to the brainet.

Evidence suggested that the rats, stimulated and recorded while fully awake, could synchronise their neural activity and share the processing required to solve the problem - and the brainet performed at least as well as individual rats. This research remains controversial because it is unclear whether the rats were truly networked or whether social interaction may have played a role.

Whilst impressive, these technologies are a long way from truly networked brains: we still have only a rudimentary understanding of how sensory information is encoded, which in turn is far simpler than abstract thought. Nonetheless, rats that predict the weather, bacteria flipping pancakes and distributed computers are all different pieces of the same computational puzzle.

As our understanding of electrical and biological information processing becomes more and more detailed, the potential for convergence grows exponentially.

About the authors

Max Jamilly is a PhD candidate on the BBSRC/EPSRC Synthetic Biology Centre for Doctoral Training at the University of Oxford. He holds a BA in Biological Natural Sciences and an MPhil in Biotechnology Enterprise from the University of Cambridge. His interests lie in the interface between engineering and biology, and in using biology to build new tools for computation, industry and therapeutics. Max has researched epigenetics in Cambridge, UK and gene assembly in Cambridge, US. You can follow his writing and research at max.jamilly.com and twitter.com/UKsynbio

Charles Ross has spent fifty-eight years in both the entrepreneurial and research aspects of computing and has been associated with four world firsts in software design. He was elected a Fellow of the BCS for his early work on software diagnostics and an Honorary Fellow for jointly launching Computing with Haymarket Press, and its subsequent sale. He assisted with the negotiations with the Oxford University Press (publishers of ITNOW), then managed the Quantum Computing in Europe Pathfinder Project - the only contract the BCS has obtained with the European Commission. He is a founder member of the Real Time Club and Chairman of the Brain Mind Forum. He is the co-author, with Shirley Redpath, of Biological Systems of the Brain.


Image: iStock.com/Logoboom
