VR has made us blind to the true potential of spatial computing

We’ve been brainwashed by the movies of the 80s and 90s to think that spatial computing will be delivered via glasses. It’s time to open our eyes to this technology’s true potential.

The traditional view of spatial computing is a blind alley we’ve spent too long charging down.

There. I’ve said it.

Virtual Reality has been “just around the corner” for as long as I can remember. It certainly seemed like the next big thing when I was a child. Tron was released in 1982 – I wasn’t even a teenager yet – and The Lawnmower Man a decade later. In the decades since, we’ve just about got to the point where we can have that level of immersive experience – and nobody cares.

VR is a consumer product right now: you can walk into a shop and buy a headset – but there’s a pretty big chance you haven’t done so. Most people haven’t. This has been a consumer technology for years – and it’s failing to accelerate. Sales last year were, well, terrible. Things are looking a little better this year, but even the most optimistic predictions put worldwide sales at 7.6m units – and that figure bundles in Augmented Reality units too. That’s far fewer than the iPhone alone sold in its first full year on sale – and the iPhone wasn’t even available worldwide then. The VR market is years old.

This fourth paradigm of computing doesn’t seem to be taking hold.

The spectacular misconception about AR

How about AR? Well, many of us remember Google Glass. We had a meetup of Glass users back when NEXT was in Berlin (in 2013 – was that really six years ago…?), and I even got to try a pair on at the event. And where is Glass now? A niche business application.

If VR is a bust, will AR be one too? No. Because AR can be far more than just glasses. AR is really what will be at the heart of spatial computing. But it won’t be AR glasses. I promise you that. Even if Apple and other companies release AR glasses in the next few years, they will never reach the ubiquity that mobile devices have. The only people who might think that people will voluntarily become glasses wearers are those who haven’t spent a lifetime wearing them. I’ve been short-sighted since I was born. I’ve been wearing glasses since I was three. One of the greatest moments of my life was the day I walked into school as a 17-year-old, with my contact lenses in for the first time.

Glasses are a pain. They get dirty. They slip down your nose. They’re incompatible with a whole range of activities, including most sports and swimming. The utility of AR glasses would have to be vast to compensate for the inconvenience – and I just don’t see it.

That’s the lesson of Google’s creation and the “Glasshole” backlash that followed. AR glasses are always going to be a fringe product – but there are plenty of other ways that spatially-aware devices can serve us.

The age of space-aware technology

The visual form of AR we’re used to is just one manifestation of spatial computing, which can do so much more than that. It’s all about connecting the digital to the physical in a way that makes it more useful. If you use a smart home kit, you’re using a basic form of spatial computing, because, in a simple way, the smart home software is aware of the relationship between devices and space. And the smart home – by definition – will not have a single, killer device.
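If you want to see how basic that awareness can be, here’s a minimal sketch (in Python, with invented room and device names) of the device-to-room mapping at the heart of any smart home hub. Real platforms like HomeKit or Home Assistant model this with far more structure, but the core idea is the same: the software knows which devices share a physical space.

```python
# A toy sketch of the simple spatial awareness a smart-home hub has:
# it knows which devices live in which room, so an event in one space
# can drive behaviour in that same space only. All device and room
# names here are invented for illustration.

ROOMS: dict[str, list[str]] = {
    "kitchen": ["kitchen_light", "kettle"],
    "hall": ["hall_light", "front_door_sensor"],
}

def devices_sharing_room(device: str) -> list[str]:
    """Everything that occupies the same physical space as `device`."""
    for devices in ROOMS.values():
        if device in devices:
            return [d for d in devices if d != device]
    return []

# Example: motion at the front door should light the hall, not the kitchen.
print(devices_sharing_room("front_door_sensor"))  # -> ['hall_light']
```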

In all likelihood, spatial computing isn’t going to be defined by a singular device, the way previous eras were defined by the mainframe, the PC or the mobile. Spatial computing is going to be delivered by a whole range of devices, which understand the physical world around them, and can adapt their behaviour based on that.

Here’s something people just aren’t discussing: Apple already has a primitive spatial computing device, but nobody thinks of it that way. It’s called the HomePod.

Why is that a spatial computing device? When you set up a HomePod, it uses sound to map out the room around it, then modifies its behaviour to take advantage of that space, bouncing sound off the surfaces around it and shaping the audio to maximum effect. When you speak to it, it assesses where you are in the room, and concentrates on the microphones pointing your way.

Let me emphasise that: the HomePod knows what the room around it looks like, and it continually remaps it as that space changes. It understands the geometry of your room like no other device. The interesting thing about the HomePod is not that it’s an over-priced entrant in the “smart speaker” market, but that it’s a pioneering spatial computing device. An early one, sure. A primitive one, absolutely. But it’s there.
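Apple hasn’t published how the HomePod actually does any of this, but here’s a deliberately simplified sketch of the microphone half of the trick, just to make the idea concrete. The six-microphone circular array matches the real hardware; everything else – the already-estimated bearing to the speaker, the cosine weighting – is an assumption standing in for proper beamforming and echo-based room mapping.

```python
import numpy as np

# A toy illustration of the microphone side of the HomePod's trick,
# NOT Apple's actual (unpublished) algorithm. We assume a circular
# array of six microphones and an already-estimated bearing to the
# person speaking, then weight each microphone by how directly it
# faces that bearing before mixing - a crude stand-in for real
# beamforming.

MIC_COUNT = 6
# The bearing each microphone faces, spaced evenly around the circle.
mic_bearings = np.arange(MIC_COUNT) * (2 * np.pi / MIC_COUNT)

def mix_towards(speaker_bearing: float, mic_signals: np.ndarray) -> np.ndarray:
    """Mix mic_signals (shape [MIC_COUNT, samples]), favouring the
    microphones that point towards speaker_bearing (in radians)."""
    # cos() of the angular difference is 1 for a mic facing the
    # speaker and negative for one facing away; clip so rear-facing
    # mics contribute nothing at all.
    weights = np.clip(np.cos(mic_bearings - speaker_bearing), 0.0, None)
    weights /= weights.sum()
    return weights @ mic_signals

# Example: someone speaking from 90 degrees, one second of fake audio.
signals = np.random.randn(MIC_COUNT, 48_000)
focused = mix_towards(np.pi / 2, signals)
```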

Let’s bounce to the other end of the scale – to another much-heralded technology that’s still just around the corner (but a bit closer than you might think, as Tesla owners might point out…): autonomous driving.

This is where spatial computing is vital. Autonomous driving can’t happen without it. The car itself needs to be aware of its surrounding physical environment – and make decisions based on that. That gets easier if the cars are aware of each other, and communicating with each other based on their physical proximity. Better still if the road network itself is “aware” of the traffic on it – and can communicate that to the cars.
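To make that middle step concrete – cars talking to each other based on physical proximity – here’s a toy sketch. It isn’t any real V2X protocol (production systems use standards such as DSRC or C-V2X), and the 150-metre radius is made up; the point is simply that physical distance, not network topology, decides which messages matter.

```python
import math
from dataclasses import dataclass

# A toy model of proximity-based car-to-car messaging, not a real
# V2X protocol. Every car broadcasts its state; each car acts only
# on broadcasts from vehicles physically nearby. The 150 m radius
# is an arbitrary assumption for illustration.

BROADCAST_RADIUS_M = 150.0

@dataclass
class CarState:
    car_id: str
    x: float        # metres, in a shared local frame
    y: float
    speed_mps: float

def neighbours(me: CarState, broadcasts: list[CarState]) -> list[CarState]:
    """The broadcasts that matter to `me`: physical distance, not
    network topology, decides which messages are relevant."""
    return [
        other for other in broadcasts
        if other.car_id != me.car_id
        and math.hypot(other.x - me.x, other.y - me.y) <= BROADCAST_RADIUS_M
    ]

# Example: car A reacts to a stopped car 40 m away, ignores one 2 km away.
a = CarState("A", 0.0, 0.0, 13.0)
fleet = [a, CarState("B", 30.0, 26.0, 0.0), CarState("C", 2_000.0, 0.0, 25.0)]
for other in neighbours(a, fleet):
    if other.speed_mps < 1.0:
        dist = math.hypot(other.x - a.x, other.y - a.y)
        print(f"Car {other.car_id} stopped {dist:.0f} m away - slowing down")
```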

As we discussed last week, the technology is falling into place to make real-time computation and analysis of these sorts of environmental factors feasible and useful. And eventually, it’ll find its way into consumer tech. As our smart watches and our smart headphones carry more sensors and computing power, and become more aware of the spaces we’re moving through, we’ll finally start to kill those 80s and 90s visions of digital places being “other” than our physical reality.

Spatial computing will — eventually — mash together the physical and digital worlds inextricably. The Parallelwelten will collapse into a singularity – and make our lives all the better for doing so.

Just don’t expect it to be via a pair of bloomin’ glasses, OK?


Photo by Przemyslaw Marczynski on Unsplash.