Augmented realities for lizard brains

NEXT23 gave us a roadmap to a blended physical and digital world but warned that our brains might not be ready for these new realities.

Innovation is faster than evolution, to an almost unimaginable degree. Many of us have an instinctive fear of snakes or spiders. Why? Those fears were inherited from distant ancestors for whom such creatures were a deadly threat. Yet our response to guns is still acquired rather than instinctive: we have to learn to fear them, because we haven’t yet evolved to be wary of them.

The chasm between our evolution and our innovation is about to get wider. We’re about to start shifting our perceptions of reality itself. Until now, our reality has largely been composed of objects we can touch, a physical environment we can move through. Sure, we’ve had to adjust to the idea that images on a screen are just that: images, not the real object represented. There is, though, little evidence that people thought that the images were real, despite one persistent urban myth:

Since there are no surviving contemporary accounts of the audience reaction to those 1896 showings, there is no concrete proof that audiences ever went scurrying for the back of the theater as the train pulled in on screen, and Loiperdinger thinks that such a reaction is unlikely.

The brain is adaptable, even to entirely new things. But it doesn’t work quite the way many of us assume — and that’s why its next challenge will be a dramatic one.

Personal Realities

Rarely has NEXT opened with a session as brain-bending as Anil Seth’s, in which he proved to us that our brains don’t operate the way we think they do. We don’t interact so much with what is there as with what our brain predicts will be there. Sometimes we completely fail to notice changes in our environment — or only catch on after a period of adjustment:

The brain is a prediction machine: all we perceive is the brain’s best guess as to the source of the inputs it is receiving. There’s no light or sound in the brain, just electrical signals. To make sense of these signals, the brain has to make some informed guesswork as to what caused these signals. This is what we experience. The brain doesn’t read out the world, it creates it.

We don’t believe things when we see them, as Seth put it, but rather we see them when we believe them. What then, if we change the way we perceive the environment? What if the lines between the digital and the physical blur into one? If we can “see” and interact with digital objects somewhat as we do physical ones — but not identically — how will the brain handle that distinction?

Alternative Realities

This is a question we haven’t had to wrestle with before. For years, our conception of the metaverse has remained stuck in the 1980s vision of virtual reality: one we “jack into” and experience as an alternative reality, separate from the physical world we’ll return to. From Tron to The Lawnmower Man to The Matrix, this has been a consistent theme in science fiction for decades.

The metaverse, though, remains stuck well short of mainstream adoption. A combination of hardware problems, a lack of compelling use cases and, perhaps, a lack of playfulness prevents it from breaking through:

What Silicon Valley has never been able to crack is creating compelling games on the order of Elden Ring, Baldur’s Gate, or Pikmin, games so beloved that their own fans will take out ads in Times Square to promote them. Without this familiarity with play and fun, Silicon Valley’s effort to assemble what was supposed to be the largest game ever created was doomed to collapse. Instead, as one writer said, about Decentraland: “It is … empty, boring, and not very different from just another quickly made online game a teenager would toss away.”

But as and when those problems are solved, we already know how to navigate reality. We should be able to figure out how to do the same in alternative realities.

Augmented Realities

But augmented reality raises a whole new set of questions. How do we move through real physical spaces where some objects we see can be touched, and others can’t? How do we share a space when some people within it can see objects that others can’t?

That’s the very real possibility that products like Apple’s Vision Pro offer us: a world where bits and atoms intermingle freely. This was the world David Mattin described in his talk at NEXT23: our digital and physical worlds coming together in a more profound way than they have so far.

Apple’s expensive — but impressive — baby step in this direction will test our brains in new ways. Many of us have had the experience of touching our laptop screens, expecting them to scroll, because our phones and tablets have trained us to do so. Will we soon be reaching out to “touch” virtual objects that we can see, hear and interact with, but never feel or smell? How will our brain cope with objects that exist for some of our senses, but not for others?

Pleasant Realities

Today’s technology tends to take us away from physical presence and into a virtual space. We’ve all had the experience of being in somebody’s company, but realising that their attention was elsewhere, sucked away to someone else through the small back portal of the phone screen. The metaverse just takes us a step further along that conceptual path.

Tomorrow’s technology confronts us with sights and sounds that the predictive models in our brains aren’t yet accustomed to. And that will be a profound challenge for the next wave of product builders: how do we make blended experiences that feel pleasant and harmonious to our brains, rather than jarring and uncomfortable?

None of us yet knows for sure whether Apple has cracked that; even if they have, the people who build apps on their platforms will have to match or exceed that standard. So will those building experiences and services on the competitor platforms that will inevitably emerge.

The technology is moving faster than our brains, so taking this next leap in tech will mean making sure we bring our brains along for the ride.