We’re deep in hyper-prediction territory right now. We have an emergent new technology that appears transformative, and everyone is putting forth their vision of exactly how AI will change our world. And there’s a heck of a spectrum. At one end, there’s the more apocalyptic take. Not apocalyptic in the sense that AI will kill us all, but in the sense that there will be an economic transformation so dramatic that it upends the social order completely.
The most popular example of a big, viral prediction of world-shaking impact doing the rounds right now is HyperWriteAI CEO Matt Shumer’s thread about AI on X, collected under the name Something Big is Happening:
This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.
This is a bleak, bleak prediction of the future. Despite presenting this vision in the normal breathless hype prose of the startup world, he’s describing a dystopia where a handful of people make a fortune, and the rest of us scrabble in the dirt for whatever economic scraps are left for us once AI has eaten all our jobs.
That is not a recipe for social stability. It is not even a recipe for a liminal existence. It is a portent of social collapse and revolution. A society of a few elites and a starving underclass is not one that tends to go well for the elites, historically speaking.
Break the AI prediction hype cycle
On the other end of the scale, some people are eager to burst this sort of dystopic hype, and predict less dramatic changes. Ed Zitron has put an annotated version of Shumer’s post on Dropbox, and some of his comments are not so much cutting as actively disembowelling:
Complete nonsense, and par for the course he has no citation. Matt loves to say stuff like this that sounds very big and scary, but has absolutely no evidence because he’s lying.
Shumer is, of course, the founder of an AI startup, and so has a vested interest in all this. He certainly sees himself as one of the AI-using elite. If there’s one thing that we should have learnt from the last 20 years, it’s that tech founders are notoriously bad at predicting the future. Take Mark Zuckerberg. Do you remember the moment that he renamed his company Meta, less than five years ago? Why did he do it? Because he was all in on the metaverse, the future of human technology – or, at least, 2021’s version of it.
How’s that going?
Meta plans to cut around 10 percent of the employees in its Reality Labs division who work on products including the metaverse, according to three people with knowledge of the discussions, as the company shifts priorities to build next-generation artificial intelligence.
Or, so reported The New York Times last month. And this is despite the relative success of the Meta Glasses. Zuck called it wrong, and now he’s busy pivoting his company to the AI future, while carrying the name of a retrofuture.
The challenge of predicting the future
We’re all prone to this misprediction tendency. We ran serious sessions on self-driving cars back at NEXT14 when the conference was still in Berlin. I think it’s safe to say that we were a bit early on that one. 12 years on, there are some genuinely self-driving vehicles out there, but not at any scale. And even then, they might be less autonomous than they look:
On overseas labor, Sen. Ed Markey of Massachusetts called Waymo’s use of remote human operators in countries outside the US “completely unacceptable.”
Peña said that while some operators are located in the US, others work abroad, including in the Philippines. These operators typically step in when robotaxis encounter unusual situations.
We were right to predict that autonomous driving technology was coming. We were wrong on the timescale.
It’s very challenging to predict both the speed and the impact of technology. Nearly 20 years ago, people were mocking social media for being “photos of people’s lunch”. Now, multiple governments are seeking to ban it, because of the risk it poses to children. That was society operating at the other end of the scale – mocking and dismissing the new thing. And when you talk about that, you can’t help but recall the Palm CEO’s reaction to the launch of the iPhone:
“We’ve learned and struggled for a few years here figuring out how to make a decent phone,” he said. “PC guys are not going to just figure this out. They’re not going to just walk in.”
Reader, they just walked in.
Some of you might not even remember Palm… (It was eventually bought by HP, and then shut down.)
Through experience, we have swung from under-predicting technologically-driven change to potentially over-predicting it. We are overcompensating. How do we get back into balance?
You can’t see land when you’re trapped in a storm
First, we need to understand that there’s a reason people get it wrong. Former tech journalist and venture capitalist Om Malik mused on this in a recent post, responding to Shumer’s thread. It’s fair to say that he wasn’t the biggest fan of the piece:
His thesis is that coding is the canary in the coal mine and other professions will follow. That’s the meta point, and I don’t disagree with it. We are going through a sea change in how we interact with information, so professions are going to change as a result. The industrialization and automation of software has been a topic of discussion since my GigaOm days. When someone claims the canary is already dead and the rest of us are next, I want to hear what they have to say. Even if it takes a dozen attempts to get through the “essay.”
Instead, he invites us to interrogate why there was such a strong reaction to the piece. What created the conditions that made it go so very viral?
This whole drama, from the viral post to the takedowns to the counter-takes, none of it is really about Shumer’s essay. What it’s about is simpler. And harder to admit. In the words of screenwriter William Goldman, “Nobody knows anything.” As I have written before, we are living in a petri dish of the future. Some of us are hopeful. Some of us are terrified. Most of us are both, often in the same hour. And into that vacuum of uncertainty there is a torrent of speculation dressed up as prophecy.
We are living in a petri dish of the future. We are in the liminal space without any sign of the exit state.
And so, how do we prepare for the future in such a time of uncertainty, when predictions are ever-shifting, and often rooted in the biases of those who make them? Well, for that, we should turn to regular NEXT keynoter David Mattin, who shared this (and not photos of his lunch) on LinkedIn:
Work that can be turned into a process — into repeatable machine technique — will be eaten by AI.
What’s left is what can never be made process. The messy human stuff. A compelling point of view, born of your real world experience. An aesthetic or moral judgement. A perspective that couldn’t come from anyone else.
Or, as he put it, “Outsize returns will flow to the weird.”
Blessed are the weird, for they shall inherit the post-AI future.
Navigating to a post-AI future
This isn’t a prediction of what will happen on a broad scale. It’s taking a clear, hard look at the technology in front of us, at what it can do and what it can’t, and working out the role that we, as living, conscious human beings, play in that process.
In a permacrisis, we don’t need an imagined map of the landscape of the future. We require a clear compass to help us navigate the confusing, ever-shifting present. And so, perhaps we should pay a little less attention to those who are rushing to fill the knowledge void with their own hot takes, and a little more to the flawed reality of whatever emergent tool we’re talking about.
Let’s give the final word to Om Malik:
Every cycle produces its prophets and its skeptics. And every cycle, the reality turns out to be different from what either camp predicted. Not better, not worse. Different.