A biological computer that can play Pong

The human brain is a biological computer. Recent breakthroughs give us real, biological models of the brain that can learn by themselves.

Artificial intelligence has been all the rage for the last 18 months. In essence, it’s about recreating human intelligence (or even creating superhuman intelligence) with machines. One of the downsides of this approach is the high cost of computing power and the energy AI demands. The human brain, by contrast, is an incredibly energy-efficient biological computer: it is thought to be able to perform an exaflop – one quintillion (10^18) operations per second – using only 20 watts of power.
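
To put those figures in perspective, here is the back-of-the-envelope arithmetic, taking the exaflop and 20-watt estimates above at face value (both are rough figures, not measurements):

```latex
% Rough energy efficiency of the brain, using the figures quoted above:
% about 10^18 operations per second on a power budget of about 20 W (= 20 J/s)
\frac{10^{18}\ \text{operations/s}}{20\ \text{J/s}} \;=\; 5 \times 10^{16}\ \text{operations per joule}
```

For comparison, today’s exascale supercomputers draw on the order of 20 megawatts for a similar nominal throughput, roughly a million times more power.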

Luckily, there have also been breakthroughs in synthetic biological computing recently. The DishBrain, developed by the start-up Cortical Labs, has shown that brain cells living in a dish, connected to multi-electrode arrays, can learn to play Pong (the iconic 1970s video game). Brainoware, a system built around a brain organoid grown from stem cells, can process, learn, and remember information. It can even do some rudimentary voice recognition.

Both have solved the brain-computer interface puzzle the other way around: instead of wiring electrodes into an existing brain, they grow living brain cells directly on the computer interface, with the highest possible bandwidth. Neither animals nor humans are harmed in the process, although the ethical implications of working with living biological cells still apply. These experiments give us real, biological models of the brain – something incredibly useful for neurological research.

How do they work? The DishBrain’s electrode array can both stimulate the neurons and record their activity. Via this interface, the cells are connected to a game of Pong. Whenever the neurons miss the ball, they are punished with unpredictable signals – something they naturally strive to avoid. In the experiment, it took them five minutes to learn how to hit the ball. Andy Kitchen, co-founder of Cortical Labs, the company behind this project, will speak at NEXT24 in September.
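
To make the loop concrete, here is a toy sketch of that stimulate-decode-feedback cycle. The electrode-array API and the Pong engine are simulated with simple stand-ins; none of this is Cortical Labs’ actual code, and all names are hypothetical.

```python
import random

class FakeArray:
    """Stand-in for a multi-electrode array that can record activity and stimulate."""
    num_electrodes = 32

    def read_activity(self):
        # Pretend each electrode reports a firing rate.
        return [random.random() for _ in range(self.num_electrodes)]

    def stimulate(self, pattern, sites):
        pass  # real hardware would deliver electrical pulses here

def decode_paddle_move(spikes):
    # Crude decoder: compare activity in two halves of the array ("up" vs "down" regions).
    half = len(spikes) // 2
    return "up" if sum(spikes[:half]) > sum(spikes[half:]) else "down"

def feedback(array, hit):
    if hit:
        # Predictable feedback: the same structured stimulus every time the ball is returned.
        array.stimulate(pattern="regular", sites=list(range(8)))
    else:
        # Unpredictable feedback: random stimulation at random sites -- the "surprise"
        # the culture implicitly learns to avoid by playing better.
        array.stimulate(pattern="random",
                        sites=random.sample(range(FakeArray.num_electrodes), k=8))

array = FakeArray()
for frame in range(10):
    move = decode_paddle_move(array.read_activity())   # neurons "choose" a paddle move
    hit = random.random() < 0.5                        # stand-in for the Pong engine's hit/miss result
    feedback(array, hit)
    print(frame, move, "hit" if hit else "miss")
```

The essential trick is in feedback(): hitting the ball yields predictable input, missing it yields noise, and a system that minimises surprise will drift towards playing better.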

The free energy principle

Now, what comes into play here is our understanding of the brain as a prediction machine:

all we perceive is the brain’s best guess as to the source of the inputs it is receiving. There’s no light or sound in the brain, just electrical signals. To make sense of these signals, the brain has to do some informed guesswork about what caused them. This is what we experience. The brain doesn’t read out the world; it creates it. Our sensory inputs are used to calibrate our brain’s predictions of the world around us.

This calibration is governed by the free energy principle: the brain makes predictions based on internal models and updates them through sensory input, so as to minimise surprise and uncertainty. Karl Friston, who introduced the free energy principle to explain embodied perception-action loops, takes it to be a principle underlying all biological systems – and argues that it applies to artificial intelligence as well.
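
For readers who want the maths behind “avoiding surprises”, this is the standard form of the quantity Friston’s principle says is minimised (a general sketch, not something specific to DishBrain). Here o stands for sensory observations, s for their hidden causes, q(s) for the brain’s internal model, and p for the true generative process:

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\text{gap between model and reality}}
  \;+\; \underbrace{\bigl(-\ln p(o)\bigr)}_{\text{surprise}}
  \;\;\ge\;\; -\ln p(o)
```

Because the KL term can never be negative, keeping the free energy F low keeps both parts small: the internal model stays close to the true state of affairs, and surprising observations become rare. An organism can lower F either by updating its model (perception) or by acting on the world to make its predictions come true (action). The neurons in the dish appear to do exactly the latter.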

So far, biological computing has been limited to systems based on DNA or proteins. Employing actual neurons is a significant step forward.

With the DishBrain, we now have a working lab model of a small brain, or, at least, parts of a brain. This is incredibly useful not only for research, but also for applications like drug testing. In the words of Karl Friston:

“The translational potential of this work is truly exciting: it means we don’t have to worry about creating ‘digital twins’ to test therapeutic interventions. We now have, in principle, the ultimate biomimetic ‘sandbox’ in which to test the effects of drugs and genetic variants – a sandbox constituted by exactly the same computing (neuronal) elements found in your brain and mine.”

What’s more: it can also lead to real, biological computers. As Cortical Labs puts it:

“There are many advantages to organic-digital intelligence. Lower power costs, more intuition, insight and creativity in our intelligences.”

Organic neural networks are far more powerful than machine learning algorithms, not least in terms of energy efficiency and learning speed. Instead of building artificial general intelligence with vast amounts of hardware, energy, and algorithms, the DishBrain approach utilises organic intelligence, shaped by millions of years of evolution.

How will computing change when we shift to the self-learning biological computer?

A lifelong learning biological computer

And how will our understanding of the brain change, now that we have a high-bandwidth interface, albeit not to a fully-fledged brain, but to living brain cells? The DishBrain has demonstrated how the free energy principle works. Quoting Karl Friston:

“Remarkably, the cultures learned how to make their world more predictable by acting upon it. This is remarkable because you cannot teach this kind of self-organisation; simply because — unlike a pet — these mini brains have no sense of reward and punishment.”

A biological computer is nothing surprising per se, given that information processing (aka computing) is central to all forms of life. Compared to silicon chips, organic systems are at an advantage because of their energy efficiency, built-in self-learning capabilities, and ability to change and adapt. This creates new avenues for computing:

“This new capacity to teach cell cultures to perform a task in which they exhibit sentience – by controlling the paddle to return the ball via sensing – opens up new discovery possibilities which will have far-reaching consequences for technology, health, and society,” says Dr Adeel Razi, Director of Monash University’s Computational & Systems Neuroscience Laboratory.

“We know our brains have the evolutionary advantage of being tuned over hundreds of millions of years for survival. Now, it seems we have in our grasp where we can harness this incredibly powerful and cheap biological intelligence.”

This will also give us new ways of understanding how the brain works. Instead of using silicon-based information technology as a poor analogy for the brain, we can get a glimpse of the real thing through a living model brain.

Photo by Gerd Altmann on Pixabay