Can AIs have consciousness?

To explore the idea of AI consciousness, we first have to understand the human version of it. And, as Anil Seth and Thomas Mueller explore, that’s more complicated than you think.

In the third episode of the sixth season of the NEXT–Show, Anil Seth (cognitive neuroscientist at the University of Sussex) and Thomas Mueller (Europe Design Lead, Accenture Song) discuss the brain as a prediction machine, our consciousness, and how that compares to the new wave of AIs.




Could the new wave of generative AIs be conscious? To answer that difficult question, we need to understand what we mean by consciousness — and that’s something Anil Seth has been working on for a long time now.

“We’re not just objects,” he says. “We have experience. We open our eyes in the morning and experience the world around us, the taste of coffee. But how does mere stuff, our brains, generate conscious experiences? Why don’t we all just live in the subjective dark, like complex meat robots?”

Trying to answer that has led Seth to assemble a cross-disciplinary team to understand how meat produces consciousness. “I love the cross-functional expertise you bring together to explore the mystery of me,” says Thomas Mueller, who sees a parallel with the way some design teams work. After all, good designers need to understand different facets of human perception, too.

Can digital change consciousness?

Fundamentally, though, Mueller would like to know if the last three decades of digital innovation have changed our consciousness at all. In many ways, consciousness hasn’t really changed much, suggests Seth. But human beings are flexible, adaptable creatures, and we live in environments very different from the ones our parents or our grandparents spent time in.

Seth thinks of the brain as a prediction machine, making the most accurate predictions it can about the world around us based on past experience and the current input from our senses. And that applies as much to our social environment and distances, which have changed dramatically over the past 100 years, as it does to colour, which hasn’t. We haven’t yet had the opportunity to adjust the way our brains work to manage this.
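To make the prediction-machine idea concrete, here is a minimal sketch of the general predict-compare-update loop, with the caveat that it is purely illustrative and not a model from Seth’s research; the perceive function, the sample values and the 0.3 weighting are all invented for this example:

```python
# Toy sketch of the predictive loop described above: the brain holds a
# prediction, compares it with sensory input, and nudges the prediction
# by a fraction of the prediction error. All numbers are illustrative.

def perceive(prediction: float, observations: list[float], weight: float = 0.3) -> float:
    """Nudge an internal prediction toward what the senses keep reporting."""
    for sensed in observations:
        error = sensed - prediction      # prediction error: senses vs. expectation
        prediction += weight * error     # correct the internal model by part of the error
    return prediction

# The brain expects a quiet room (0.0) but repeatedly senses noise (~1.0):
print(perceive(prediction=0.0, observations=[1.0, 0.9, 1.1, 1.0]))
# ~0.77 -- the prediction drifts toward the sensory evidence.
```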

One clear example of a major change we haven’t yet fully adapted to is the rise of the mobile phone. “An increasing part of my mind is in my phone,” Seth adds. “To the extent that when it’s not there, I feel like half my mind has been amputated. Even 20 years ago, I didn’t feel that way.”

Design, he suggests, is all about what we’re sensitive to in our daily experience, about how our attention works. And that has changed dramatically.

A pre-social network brain

“Designers need to be aware of consciousness, how perception works, to provide a useful service,” says Mueller, agreeing that they need to partner with people like Seth. But has the internet itself, not just the phone, changed the way we experience ourselves and our thoughts?

Seth says that one of the most interesting questions is how our brains have dealt with the social networks we’ve developed online. We’ve evolved in social groups that were relatively small and relatively stable — and local. That’s changed beyond recognition.

“Our social networks have grown enormously, but the bandwidth of each communication has got much smaller,” says Seth. “We communicate, but we don’t see the other person’s facial expressions, or feel their touch.”

“A lot of the problems we see with social media have to do with this mismatch between how our minds have been shaped to deal with social interactions and this vastly different social situation we find ourselves in now. There’s a big prediction error there that our prediction machines — our brains — are struggling to deal with.”

Why VR isn’t working

“So many of the signals, the cues we project outwards, are missing in the current form of social network,” agrees Mueller. “So, consciousness: has it widened or narrowed as a result?”

“It’s changed, and it keeps changing,” replies Seth, refusing to go either way. He gives the example of virtual reality, a technology that always seems just around the corner from being truly pervasive but never quite arrives. If it were, his team could use it to give people richer experiences as part of their research; in its current state, though, it lacks the richness of the physical world.

“It misses the background texture that we don’t even notice: the sounds in the room, the tactile feeling of sitting in a chair, things off the edge of our perception, the smell. They’re all still missing, and they make a real difference to the subjective reality of the world,” he says. It’s not just a design challenge, it’s a technological challenge to overcome.

“Should we, though?” asks Seth. “Or should we concentrate on living in the real world, and dealing with the problems in front of us, not shooting off into a virtual world?”

Mueller points out that the pandemic, when interaction was mediated through a rectangular screen for two years, reminded us of the limits of that kind of exchange. “It limits my consciousness in a situation like this, a discussion,” he says. “The real world is here to stay.”

Is generative AI a form of consciousness?

Can generative AI catch up to the human prediction machine to the point where we can consider it conscious?

Ironically, Seth’s original PhD was in AI, but he thinks the modern use of the term misleads us. “We’re very anthropomorphic creatures,” he says. “We project the properties of human brains into objects that probably don’t have them. We tend to project qualities into LLMs — beliefs, understanding, consciousness — that they don’t have. They’re just algorithms whirring away. They don’t have conscious experiences.”

We’re also anthropocentric, Seth explains, which means that we focus on the qualities we think are special to being human, including language.

“But what really matters to consciousness is not language or intelligence, it’s raw feeling. It evolved in biology to help us stay alive. Pain, fear, disgust, joy: these depend heavily on our biology, the fact that we’re flesh and blood.”

AIs have none of this: they’re just predicting what should come next in any process. “So, yes, they’re a type of prediction machine, but the mistake we make is that we treat them as alternative minds. We give them the attribute of agency, which they don’t possess.”

AI as co-pilot?

One of Seth’s mentors, Daniel Dennett, has cautioned that with AI “we’re not building colleagues, we’re building tools”. They should complement our embodied, embedded minds.

“We’re building co-pilots?” suggests Mueller.

“Not exactly,” says Seth. “When you think about co-pilots, you think about alternative pilots, someone who can take over. We’re talking about something more like radar, that helps the pilot see further. AI extends the human mind and body, not replaces it.”

Hallucinations

So, what of AI hallucinations, asks Mueller. Are they analogous to human hallucinatory experiences?

“Our experiences of the here and now aren’t just readouts of what’s in the world around us; they’re internal predictions about the way the world is, calibrated by sensory data,” Seth says. “The brain’s predictions based on that data are geared not to accuracy but to survival.”

“Our perception is hallucinatory, but it’s very, very tightly controlled by what’s out there in the world,” says Seth. “What we generally think of as hallucinations, seeing something that’s not there, hearing voices, can be thought of as uncontrolled hallucinations.”

The Dream Machine Project

Human hallucinations can be induced. In the late 1950s, the artist Brion Gysin hung a bright light above a rotating turntable topped with a cylinder cut with slits of varying sizes. This produced a strobing light which, at the right frequency, would induce powerful visual hallucinations.

Seth’s team has recreated this as both a touring physical installation and an online version, the Perception Census. The physical version brings together neuroscientists, philosophers, architects, designers and musicians to reinvent the dream machine and make it a collective experience: a hallucinatory experience shared by 30 or 40 people, who are then given space to reflect on it. Everyone has different experiences, because we all see things slightly differently.

The limits of LLMs

LLMs make things up rather than misinterpreting signals: they’re built not for understanding but for prediction. It’s just next-token prediction. So, Seth prefers to call what happens when generative AIs go off the rails confabulation, not hallucination. It’s a term from human neuropsychology, describing what people do when they fill in the gaps in their understanding with nonsense.
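To picture what next-token prediction means in practice, here is a deliberately tiny sketch, not a real LLM: a hand-made probability table over which word follows which, sampled one step at a time. The NEXT_WORD table, the generate function and every word in it are invented for illustration; the point is that the mechanism produces fluent continuations without any check on truth, which is why confabulation comes naturally to it.

```python
import random

# Invented bigram probabilities: which word tends to follow the current one.
# The model has no notion of truth -- a plausible continuation and an
# invented one are produced by exactly the same sampling step.
NEXT_WORD = {
    "the":     {"capital": 0.5, "author": 0.5},
    "capital": {"is": 1.0},
    "author":  {"is": 1.0},
    "is":      {"Paris": 0.6, "Atlantis": 0.4},  # true or made up, same mechanism
}

def generate(word: str, steps: int = 3) -> list[str]:
    """Extend a sequence one sampled token at a time."""
    out = [word]
    for _ in range(steps):
        options = NEXT_WORD.get(out[-1])
        if not options:
            break
        words, probs = zip(*options.items())
        out.append(random.choices(words, weights=probs)[0])
    return out

print(" ".join(generate("the")))  # e.g. "the capital is Atlantis" -- fluent, never checked
```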

“We should be careful with the words we use,” Seth warns, “because they can lead us astray by making assumptions about these systems that turn out to be entirely wrong.”


This post is based on the conversation between Anil Seth, cognitive neuroscientist at the University of Sussex, and Thomas Mueller, Europe design lead for Accenture Song, on the NEXT–Show in November 2023.