What is intelligence? And can we recreate it?

Can we really have artificial intelligence when we don’t understand what intelligence is in the first place? We need to reconceptualise intelligence before we can make the most of AI.

“What is artificial intelligence?” as a question presupposes that we know what “intelligence” is. That’s not necessarily a safe assumption. What we casually describe as “intelligence” is a complex mix of systems that we don’t yet thoroughly understand, even in humans. That’s why we start qualifying the word with terms like “emotional intelligence” or “reasoning intelligence”.

Humans are complex mixes of hormonal, electrical and other systems, all wrapped up in a bipedal biological body. When you hug a friend or loved one, you are very literally hugging the system. Trying to identify and separate the “intelligence” part of that system from the rest is functionally impossible. We know it resides in the brain, but we also know that other systems affect it. Put some alcohol in your digestive system if you don’t believe me…

And so, comparing our complex human brains with the relative simplicity of “artificial intelligence” systems is not just inaccurate. It also borders on harmful anthropomorphism.

Human intelligence is not the only form

One characteristic of our messy human synapses is that we tend to be good at seeing things in human-like terms. I got a new car earlier in the year, my first in over a decade. This proved unexpectedly traumatic for my daughters. They had anthropomorphised the old car to such a degree that, for them, it felt like saying goodbye to a family member.

People who watch animal webcams for fun are often guilty of this. They forget they’re watching wild animals, creatures driven far more by instinct than by human-like rationality or emotion. And they get upset when, for example, birds watch a fledgling starve to death without intervening. They project human-like parental feelings onto avian brains. Yet the birds’ feeding impulse is largely a reaction to begging sounds and the sight of an open beak.

NEXT22 speaker James Bridle has been delving deep into these ideas. In their new book Ways of Being, they explore biological forms of intelligence that are very different from our own.

From bonobos shaping complex tools, jackdaws training us to forage for them, bees debating the direction of their swarms, or trees that talk to and nourish one another – or something far greater and more ineffable than these mere parlour tricks – the non-human world seems suddenly alive with intelligence and agency. It’s a trick of the light of course: these other minds have always been here, all around us, but Western science and popular imagination, after centuries of inattention and denial, are only just starting to take them seriously. And taking them seriously requires us to re-evaluate not only our idea of intelligence, but our idea of the entire world. What would it mean to build artificial intelligences and other machines that were more like octopuses, more like fungi, or more like forests?

Artificial intelligence versus biological intelligence

And this is where we need to reevaluate how we look at and think about AI. It’s not like our brains. Not at all. It’s built differently, and works differently. And even when its output is something that looks human-generated, a very different system is at work.

As Jacob Browning and Yann LeCun explained in a Noēma piece about the limits of language-based models of AI:

The underlying problem isn’t the AI. The problem is the limited nature of language. Once we abandon old assumptions about the connection between thought and language, it is clear that these systems are doomed to a shallow understanding that will never approximate the full-bodied thinking we see in humans. In short, despite being among the most impressive AI systems on the planet, these AI systems will never be much like us.

At some level, they argue, what we are creating right now are things that can have the appearance of intelligence. What they don’t have is actual intelligence in a form we would recognise:

Just because a machine can talk about anything, that doesn’t mean it understands what it is talking about.
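
To make that distinction concrete, here is a deliberately crude sketch in plain Python – my own illustration, not anything from Browning and LeCun’s piece. It is a toy bigram generator that strings words together purely from co-occurrence statistics in a tiny made-up corpus, so it can produce fluent-looking sentences about octopuses, forests and machines without any notion of what those things are. The corpus and the babble function are invented for illustration.

# A toy bigram "language model": it strings words together based purely on
# which word followed which in its training text, with no model of meaning.
# Illustrative only - real language models are vastly more sophisticated,
# but the point stands: fluent-looking output, zero understanding.
import random
from collections import defaultdict

corpus = (
    "the octopus solves puzzles and the forest shares nutrients and "
    "the machine talks about anything and the machine understands nothing"
).split()

# Count which words follow which in the corpus.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def babble(start: str, length: int = 10) -> str:
    """Generate text by repeatedly picking a statistically plausible next word."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(babble("the"))
# e.g. "the machine talks about anything and the forest shares nutrients and"

Real language models are incomparably more sophisticated than this toy, but the point the authors make is the same: scaling up the statistics produces ever more convincing language, not an understanding of what the language refers to.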

The complexity of cognitive systems

Language is, in fact, notoriously imprecise, and we humans use other systems, including our ability to interpret facial expressions and body language, to understand context, and even to react to pheromones, to enhance that communication. A system that only emulates one aspect of that process — the language itself — is never going to be anything like a human brain.

When we use the word “intelligence”, we have a sort of vague, intuitive grasp of what we’re talking about. We do not have a clear, defined one. And so, we end up using it to describe different forms of effective systems:

A trivial chip can do basic arithmetic better than us and salamanders, and you can call that “simple intelligence” if you like, but you have to actually establish that “general” intelligence is a coherent analogue of that, and that as you turn some sort of “generality” knob, there’s a magic threshold past which the thing is general and capable of callously killing you in a very special way.

The author, Venkatesh Rao, also explored some philosophical ideas about replacing salamander neurones with silicon equivalents in an animal’s brain. It’s a fascinating, if challenging, read.

The path to synthetic biological intelligence?

We’ve explored the idea before that the systems we’re building now are “assistive intelligences”. Like the AI-based art generators that have grabbed attention in recent months, they assist us by creating things based on human prompts. Synthetic silicon-based intelligence is fundamentally different to our own.

But maybe that conversation will change over time. Can we create biological synthetic intelligence? As biotechnology gets more sophisticated, new ideas open up. And another of our speakers, Loretta Tioiela, is at the forefront of that, through her work with synthetic biology. The more she explores the topic, the more she shifts her thinking:

That mindset shift alone naturally brought me to the next step: the need for more computing in life science at the ecosystem level.

That was the genesis of the Synthetic Biology Open Computing Foundation — and a clear signal that the exploration of artificial intelligence is still just in its infancy.