Generative AI and the mystery of consciousness

The rapid advances in AI and large language models (LLMs) are raising questions about sentience and consciousness we can’t yet answer.

Most of us have now had an unsettling experience with the new wave of AI. We’re using ChatGPT or Google’s Bard, and the answers seem so, well, human that we find ourselves wondering if the LLM behind them is, well, conscious.

The answer is probably no because, impressive as they are, these large language models are, as they have been beautifully described, basically just “spicy auto-complete”. They’re just very good at predicting which word should come next, and that alone creates a reasonable simulacrum of sentience, and therefore of consciousness.
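To make that concrete, here is a minimal, hypothetical sketch of next-word prediction: a toy bigram model that simply counts which word tends to follow which in a tiny corpus. Real LLMs learn these statistics with deep neural networks over vast amounts of text, but the underlying task is the same.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus.
# Real LLMs learn these statistics with deep neural networks over trillions
# of tokens, but the task is the same: predict the next word.
corpus = "the cat sat on the mat the cat saw the dog".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word given the current one."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat', seen twice after 'the' in the corpus
```

Scale that counting up by enough orders of magnitude, and the output starts to read like a mind at work. Except…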

Well, we don’t really know what consciousness is. During the pandemic, many of us discovered that the immune system is fiendishly complex, far more sophisticated than the simple models we were taught in school. Consciousness is much the same: it remains stubbornly hard to explain.

The mystery of consciousness

We don’t truly know what it is yet. Some of the thorniest questions about the existence of consciousness and free will remain, as yet, unresolved by neuroscience. Here’s neuroscience professor and NEXT23 speaker Anil Seth talking about free will:

When I experience ‘freely willing’ an action I am in some sense experiencing my self as the cause of that action. Perhaps more than any other kind of experience, experiences of volition make us feel that there is an immaterial conscious ‘self’ pulling strings in the material world. This is how things seem.

This, he argues, isn’t actually proof of free will; it shows only that we have a perception of voluntary choice.

More precisely, that they are self-related perceptions associated with voluntary actions. Like all perceptions – whether self-related or world-related – experiences of volition are constructed according to the principles of Bayesian best guessing, and they play important – likely essential – roles in guiding what we do.

Bayesian best guessing sounds awfully like what these LLMs do.
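For anyone unfamiliar with the term, “Bayesian best guessing” means combining a prior belief with incoming evidence to form a posterior: the current best guess. A minimal sketch, with hypothetical numbers:

```python
# A minimal sketch of Bayesian "best guessing", with made-up numbers:
# combine a prior belief with the likelihood of new evidence to get a
# posterior, the perceiver's current best guess about the world.

prior = {"dog": 0.7, "cat": 0.3}        # belief before hearing anything
likelihood = {"dog": 0.2, "cat": 0.9}   # how well each explains a meow

unnormalised = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: round(p / total, 2) for h, p in unnormalised.items()}

print(posterior)  # {'dog': 0.34, 'cat': 0.66} -- 'cat' now dominates
```

An LLM’s next-word prediction is, loosely speaking, the same move: weighing every candidate continuation by how probable it is given everything that came before.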

Could we even tell if AI became conscious?

And that, in fact, is the heart of the dilemma we face as we explore the meaning of this new technology. If we don’t truly know what consciousness is, can we ever be sure whether the AIs we create are actually conscious? This is not a trivial, abstract problem. If we create systems that are sentient and conscious, we will have to treat them differently. Seth again:

The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

Perhaps reassuringly, Seth himself leans towards a model of consciousness that suggests AI is unlikely to become conscious in the near term:

My own view is that consciousness is intimately tied to our nature as living flesh-and-blood creatures. In this picture, being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is an embodied phenomenon, rooted in the fundamental biological drive within living organisms to keep on living.

No biology, no inherent drive towards protecting one’s existence and spreading one’s genes. No consciousness. But that, of course, assumes that consciousness can take only one form, or arise in only one way. And that might not be the case.

The path to solving the consciousness riddle

To be able to tell whether machines have become conscious, we need to understand what consciousness is. And we just don’t. Some fraction of the money now being invested in AI research also needs to find its way into consciousness research, if we want to avert some of the worst possible outcomes of this emerging technology:

Without a principled and experimentally verified understanding of how consciousness happens, we’ll be unable to say for sure when a machine has—or doesn’t have—it. In this foggy situation, artificial consciousness may even arise accidentally, perhaps as a byproduct of some other functionality the tech industry installs in the next generation of their algorithms.

It is perhaps reassuring that we have actually started thinking about this relatively early in the process. Digital technology has upended our societies in unexpected and unpredictable ways over the past quarter of a century, and few took the time to stop and think at the start. Those were the days of the digital utopians, with their radical visions of better futures through technology. That didn’t play out as planned.

Is AI a saviour or a destroyer?

This time around, the voices of caution are louder, and more informed. Seth himself was a signatory to a letter calling for more consciousness research. There have been other letters calling for a pause on AI development and, most recently, warning of the risk of human extinction.

All of a sudden, AI has been elevated to a technology on a par with nuclear fission. Splitting the atom opened the door both to a form of low-carbon energy, nuclear power, and to an existential risk to our species. Nuclear war could still wipe us all out. The war in Ukraine has been an uncomfortable reminder of that.

With AI, we are building a digital replica of a physical and biological system we don’t yet understand. If we’re to walk the narrow path towards this emergent technology being a boon rather than a bane, we need a better understanding of what it is we’re recreating. That understanding could reassure us, by proving that machine consciousness isn’t possible. Or it could stop us from making a terrible, Icarus-like mistake.

Photo by Google DeepMind on Unsplash