Here’s a question not enough people are asking: What’s the point of a technology that doesn’t serve humanity?
Obviously, the reason we should be asking that about AI is the way it’s being sold to us. The AI companies tell us that their models can do jobs that people are doing right now. And independent bodies tend to agree. For example, the International Monetary Fund:
With nearly 40 percent of global jobs exposed to AI-driven change, concerns about job displacement and declining opportunities for some groups are becoming more acute. This underscores the need for proactive and comprehensive policymaking that prepares the labor force for the future of work and ensures the gains from AI are broadly shared.
You see, they’re asking the right questions, as that last sentence indicates. How many other people are?
Who is in charge? The AI or the human?
Here’s another question: Does the technology serve us, or do we service the technology?
Cory Doctorow, writer, thinker and previous NEXT speaker, summed this problem up in his usual style (he was responsible for one of the most memorable NEXT closing keynotes…), by categorising people as centaurs or reverse-centaurs, based on how they use AI:
In automation theory, a “centaur” is a person who is assisted by a machine. You’re a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete.
And obviously, a reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.
He cites the example of a delivery driver, so beholden to the deadlines set by the computer that they end up having to urinate in bottles in their van.
I bet you can see where Doctorow’s going with this…
Obviously, it’s nice to be a centaur, and it’s horrible to be a reverse centaur. There are lots of AI tools that are potentially very centaur-like, but my thesis is that these tools are created and funded for the express purpose of creating reverse-centaurs, which is something none of us want to be.
But increasingly shitty jobs, where we serve the AI, are not the biggest danger. The biggest danger is that a tiny group of people get those jobs, and the rest of us lose ours.
An easy look in the AI crystal ball
It doesn’t take an academic economist to see some of the obvious problems with the idea that AI will render great swathes of the workforce obsolete. Obviously, that can be a very attractive idea if you’re a senior manager at a big company, with one eye on the balance sheet and a conscience unburdened by empathy…
But the question then becomes: Who is going to buy the products that the AI creates if hardly any people have jobs?
This question is too often answered with “look how good our new model is!” or “if we don’t do this, China will!”.
Lessons from the recent social media past
Of course, ignoring the consequences of your technology is not the same thing as addressing them. Look, we were wrong about social media. By “we”, I mean those of us who were sucked into a techno-utopian view of the future in the late 2000s and early 2010s. By connecting everyone, we’ll make everything better! Yay!
Or, alternatively, we’ll let disinformation slide around the world effortlessly, causing increasing polarisation, democratic backsliding and war. This time last year, the book Careless People hit the bestseller lists because it painted a grim portrait of a company utterly unwilling to properly address the impact its product was having on the world. The product was Facebook.
Today’s AI companies seem to be walking a similar path. So, that’s where the rest of us have to step in: the writers, the trend watchers, the consultants. Our job is to keep our eyes fixed firmly on the reality of the technology, and make some sensible predictions about the future.
What’s the point of a human-less economy?
For example, if you extrapolate outwards from the more extreme predictions, you end up with the sorts of dystopian visions we’ve discussed before: a large unemployed underclass and an AI-wielding elite, existing almost in separate economic bubbles.
We know how situations like that tend to resolve, historically speaking: bloodily.
The standard economic response to that is that, by freeing up labour, automation creates new jobs. The canonical example is the automation of agriculture. Before the Industrial Revolution, between 80% and 90% of people globally worked in agriculture. We needed that many people just to keep everyone fed. But automation changed that. Today, in the US, less than 2% of the workforce is in agriculture.
Automation freed up human time from bare survival to other things. However, this revolution is different. We’re not freeing up people from bare subsistence jobs – we’re taking away employment that facilitates flourishing, either through enjoyable work or through economic power that allows people to enjoy the rest of their lives.
The big assumption: AI will facilitate new job creation
We assume that new jobs will come along. For example, the World Economic Forum predicted exactly that in its Future of Jobs Report 2025:
Extrapolating from the predictions shared by Future of Jobs Survey respondents, on current trends over the 2025 to 2030 period job creation and destruction due to structural labour-market transformation will amount to 22% of today’s total jobs. This is expected to entail the creation of new jobs equivalent to 14% of today’s total employment, amounting to 170 million jobs. However, this growth is expected to be offset by the displacement of the equivalent of 8% (or 92 million) of current jobs, resulting in net growth of 7% of total employment, or 78 million jobs.
So, a net gain of 78m jobs. Yay? Well, it depends. This is built on the assumption that those new jobs will arise. But that’s difficult to see happening in an economic environment where all the money is going into the job-destroying tech. In 2025, 61% of venture capital money went into AI tech.
Right now, the market shows no signs of solving this problem for us. And that’s not just a problem for us: it’s a potential problem for the AI companies themselves. Nobody wants to end up in the history books as the people who destroyed the economy, unleashed mass unemployment, and possibly violent revolutions. If you – and they – want a sustainable tech revolution of any kind, without provoking that bloody revolution, you need to keep humans at the heart of your thinking.
The bleak emptiness of AI art
And part of that is acknowledging the limitations of what AI can do. Take AI art — it’s often slightly eerie. Doctorow again:
AI art is eerie because it seems like there is an intender and an intention behind every word and every pixel because we have a lifetime of experience that tells us that paintings have painters, and writing has writers. But it’s missing something. It has nothing to say, or whatever it has to say is so diluted that it’s undetectable.
It’s possible people will look back at the 2020s and see an aesthetic dominated by the slightly off results of generative art. It will become an era-defining aesthetic in much the same way some of us look back at our photo libraries from the early 2010s, and regret the number of filtered Instagram photos in there…
But AI art is also abundant. I could have generated thousands of pieces by prompting Midjourney, rather than by writing this article. Low cost of production tends to generate abundance, which tends to depress prices. And so eerie, AI-generated art is likely, by its sheer volume, to increase the demand for relatively scarce human-created art, with its sense of connection, authenticity, and narrative.
Beyond the obvious post-AI futures
But that’s an obvious response, rooted in the values of the past. The new work that is likely to emerge is probably unimaginable to us right now, just as a farm labourer swinging his scythe during harvest could not have imagined the office jobs his great-grandchildren would be doing.
So, how do we start our journey towards the future? Well, maybe we need people as our compass, and possibly even the much-maligned Gens Z and Alpha. Another former NEXT speaker, Pamela Pavliscak, has been watching them with interest:
Online, we are constantly identifying, naming, remixing, and expressing new feelings. While we tend to think all the latest memes and trendy phrases are frivolous, they are actually emotional intelligence in action. If we can name our emotions more precisely, we tend to feel less alone.
AI is not inevitable. The technology exists, and can’t be uninvented. But the way we integrate it into our lives, our work and our societies is still a matter of choice.
Do you want Luddites? That’s how you get Luddites.
We could choose the path of the Luddites, who protested the automation of the weaving trade. But even they are not as they’re sometimes portrayed:
The Luddites were not, as has often been portrayed, against the concept of progress and industrialisation as such, but instead the idea that mechanisation would threaten their livelihood and the skills they had spent years acquiring. The group went about destroying weaving machines and other tools as a form of protest against what they believed to be a deceitful method of circumventing the labour practices of the day. The replacement of people’s skilled craft with machines would gradually substitute their established roles in the textile industry, something they were keen to prevent, rather than simply halting the advent of technology.
And so you can avoid the creation of 21st-century Luddites by looking at AI as something more interesting than a tool for culling your workforce. You can turn your workforce into centaurs, by exploring ways they can do their jobs better. Or you can turn them into reverse centaurs, doing the drudge work of checking that the AI isn’t messing up jobs humans used to do.
This is the difference between substitution, where you replace a human with a machine, and experimentation, where you work out what you can now do that you couldn’t before because this new technology exists.
The liminal state of experimentation and iteration
Pavliscak again, talking about navigating social relationships as technology changes:
It might be awkward, and it’s definitely iterative, but we are adapting. We are learning how to change the vibe of a group chat, how to set boundaries without disappearing entirely, how to relate to AI tools without outsourcing our humanity.
Avoid Luddites. Make centaurs, not reverse centaurs. And experiment with what new things emergent technology can allow humanity to do. And that’s your job. And my job. And your firm’s job. And your government’s job.
Because our societies exist for the betterment of humanity, not of technologies. And we fail in our most basic of callings if we let the adoption of this technology be any different.