2025: The year we stopped pretending

Everything is ending at once – work, truth, screens, and even certainty itself. But we’re finally mature enough to see endings as thresholds. Here’s what we’ve learnt.

The year 2025 marked something more significant than the turn of another calendar year. Standing here in December, looking back at what we’ve explored throughout the year, we find one pattern unmistakable: we’ve stopped pretending. The comfortable narratives – about inevitable technological progress, about markets naturally optimising for human good, about democracy running on autopilot – have all revealed themselves as the fictions they always were.

This isn’t pessimism. It’s clarity. And it’s what made our September gathering in Hamburg feel less like a typical conference and more like a collective reckoning. When we chose “This is the End” as our theme, we weren’t being dramatic. We were naming what 2025 has made impossible to ignore.

The AI anxiety we’re finally voicing honestly

For years, we’ve danced around the central question about artificial intelligence and human work. We’ve talked about “augmentation” and “human-AI collaboration” and “the future of work” using language designed to reassure rather than confront. But 2025 forced a more honest conversation.

The performance curves are undeniable now. What we’re seeing with AI isn’t incremental improvement – it’s exponential acceleration, still slow enough that we don’t feel it day-to-day, yet fast enough that the boiling-frog metaphor has become uncomfortably apt. AGI isn’t science fiction anymore. It’s a question of when, not if.

So what happens to human meaning when machines can handle most cognitive work? This is the question that haunted our year, surfacing in different forms across every exploration we undertook as AI transformed society’s foundations. David Mattin articulated the anxiety directly:

Our role in the economy was to change scarcity into abundance. We turned that into the purpose, the meaning of us. Religion faded away, and we turned our jobs into our meaning. When machines can do that, what is our meaning then?

The answer we kept returning to is surprising precisely because it’s so fundamental: humans matter for what machines can never replicate – being seen, recognised, and empathised with by other human beings.

What AI can’t replace in human work

The neuroscience backs this up in ways that feel almost evolutionary. When people synchronise through eye contact, singing, or movement, their brain waves align into something greater than individual intelligence. These collective “super-brains” generate insights and solve problems in ways that fundamentally differ from how AI operates. Not better or worse – different. This is what humans bring that silicon cannot: the electrical synchronisation of nervous systems in shared space.

As Hannah Critchlow explains:

We are in mid-evolutionary transition from an individual past, to a group, super-brain future.

The implications ripple outward. Our exploration of creativity in the AI age revealed that when AI handles production, human value shifts to something else entirely: the quality of inputs we provide. Curation. Intention. Emotional resonance. The vulnerable collaboration that emerges from uncertainty rather than planning for certainty.

Thomas Knüwer captured it perfectly:

By creating certainty, we are deleting options. Should the output always be the starting point? Maybe our starting point should be uncertainty, with a range of possible inputs.

This is the mature conversation about AI and human work that we needed five years ago but are finally having in 2025: not whether machines will replace work, but what becomes essential when they do. Not whether AI can be creative, but what kind of creativity only humans can generate. Not whether to fear or embrace the technology, but how to ensure it serves human flourishing rather than supplanting it.

When screens reach their limit

Perhaps 2025’s most unexpected revelation was recognising that screens – those portals that defined digital life for three decades – have reached their evolutionary endpoint. Not that they’re disappearing, but that the next leap forward won’t come from making them bigger, brighter, or higher resolution. The Sphere in Las Vegas, wrapped in 360-degree LED, paradoxically demonstrates the limit: we’ve made screens as large and immersive as possible whilst still keeping them screens.

What comes next involves moving beyond visual display into embodied interaction. Mixed reality headsets, movement-tracking rooms, spatially aware AI companions – these aren’t just new gadgets but a fundamental shift in how we interface with digital reality. The paradigm is moving from observing to inhabiting, from watching to doing, from individual screens to shared space.

Adrian Hon’s insights on immersion contextualise this historically. The human hunger to be transported beyond everyday reality isn’t new – it stretches from 19th-century panoramas to contemporary LARPs. What’s changed is technological capacity. Today’s immersive environments promise movement, agency, transformation. Where last century’s media meant sitting and watching, what’s valued now is moving, touching, collaborating.

Even our examination of robotics centred on this embodied shift. Robots with a sense of touch doing massage therapy. Care robots that respond naturally to being led by the hand. These aren’t just automated tools but new forms of physical interaction, interfaces that feel like collaboration rather than operation.

The return to physical presence

This embodied turn makes philosophical sense. If AI colonises cognitive labour and screens saturate visual attention, what remains scarce is precisely what cannot be digitised: physical presence, synchronised movement, the unmistakable feeling of bodies in shared space. The super-brain requires actual brains in proximity, not avatars on screens.

The data confirms this shift is already happening. Nearly half of people report spending more time outdoors and in face-to-face interaction – what some call “social rewilding”. This isn’t digital detox or neo-Luddism, but a recalibration toward authentic, tactile experiences that screens simply cannot provide. The trend suggests we’re collectively recognising what our neuroscience already knew: humans need actual presence, not just digital connection.

The systems we haven’t built

But here’s where 2025’s clarity becomes uncomfortable. For all our sophisticated thinking about human meaning and embodied experience, we keep running into the same wall: the gap between what’s technically possible and what’s politically achievable. This gap defines modern society’s core challenge.

Our analysis of the energy transition demonstrates this perfectly. The technology exists. Solar and wind are cheaper than fossil fuels in most markets. AI infrastructure demand is paradoxically accelerating the build-out of clean energy systems. Chinese manufacturing has made supply abundant. Market forces are driving real transformation.

Yet energy remains expensive in ways that fuel political rage. Housing becomes increasingly unaffordable. Infrastructure crumbles. Why? Because we possess technical solutions but lack political capacity to implement them at scale. We can build the abundance we need – we’re choosing not to, through regulatory inertia, incumbent resistance, and bureaucratic dysfunction.

The political capacity gap

This pattern repeats across domains. Healthcare breakthroughs are happening faster than expected – three to five years for major advances, not decades. AI-enabled personalised medicine could dramatically improve outcomes through early detection and tailored treatments. We could sequence every newborn’s genome and intervene proactively rather than treating late-stage disease. The economics increasingly favour prevention over treatment.

So why aren’t we doing this? The barriers aren’t technical but political: who controls genetic information, how we prevent discrimination, whether we value health as a public good or a private commodity. These aren’t problems AI can solve. They’re choices societies make – or fail to make.

The website’s uncertain future captures another dimension of this. Brands are losing 40% of traffic to AI intermediaries that keep users in their environments. Generative AI creates summaries rather than driving clicks. We’re evolving towards a dual-layer internet – one for humans, one for machines – but who controls the agents mediating access? Who ensures AI represents brands and information accurately rather than hallucinating convenient fictions?

The technical solutions for verification exist: content provenance systems, digital watermarking, structured data standards. What’s missing are governance frameworks for the AI agents themselves – standards for accuracy, accountability mechanisms when they hallucinate, institutional capacity to ensure technology serves stated values rather than concentrating power in whoever builds the infrastructure.
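To make “structured data standards” concrete, here is a minimal sketch in Python. It uses the real schema.org JSON-LD vocabulary, the kind of machine-readable markup publishers already embed so that AI agents read declared facts rather than inferring them; the specific values are placeholders, not taken from any actual site.

```python
import json

# Minimal sketch: a schema.org JSON-LD description of an article.
# The vocabulary (@context, Article, headline, datePublished, author)
# is standard schema.org; the values below are illustrative placeholders.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "2025: The year we stopped pretending",
    "datePublished": "2025-12-01",
    "author": {"@type": "Organization", "name": "Example Publisher"},
}

# Render the block as it would appear in a page's <head>, where human-facing
# browsers and machine agents alike can read the declared facts.
print('<script type="application/ld+json">')
print(json.dumps(article_metadata, indent=2))
print("</script>")
```

The sketch underlines the argument rather than settling it: declaring facts in machine-readable form is a solved problem; deciding who audits the agents that consume those facts is not.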

The endings we’re living through

2025 has been the year when multiple crises we’d been tracking separately revealed themselves as one interconnected rupture. The post-war order, neoliberal economics, shared truth – we’ve documented their individual collapses, but what became clear this year is how they reinforce each other’s dissolution.

Take the geopolitical shift. Saudi Arabia and the UAE joined BRICS in January 2024, but the full implications only crystallised in 2025. We’re not moving from unipolarity to a new stable arrangement, but into something more fluid and contested. The Global South isn’t picking sides in a new Cold War – they’re demonstrating that the era of being forced to choose is over. This isn’t chaos; it’s a different kind of order, one we don’t yet have the vocabulary for.

When abundance meets dysfunction

The economic dimension reveals itself in daily frustrations. Technical capacity without political will to implement it has become the defining feature of governance in developed economies. Market mechanisms drive real improvements whilst simultaneously concentrating wealth in ways that destabilise the politics needed to sustain those markets. It’s a self-undermining system, and 2025 is the year we’ve stopped pretending otherwise.

Most unsettling is watching truth itself become contested territory. Nina Jankowicz’s documentation of “LLM grooming” – adversaries deliberately corrupting AI training data – represents information warfare at a scale and sophistication society remains unprepared for. When Portal Kombat’s network can pump millions of articles into the digital ecosystem specifically to mislead machine learning systems, we’re beyond “fake news” into something more fundamental: the weaponisation of the infrastructure we use to determine what’s real.

These aren’t separate crises requiring separate solutions. They’re manifestations of a deeper transition – what Gramsci called the interregnum, that space where old systems have failed but new ones haven’t yet formed. The question 2025 forces is whether this liminal state is temporary or permanent, and how we maintain agency within it either way.

Living in permanent liminality

Antonio Gramsci wrote in his Prison Notebooks:

The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear.

What 2025 has taught us is that the interregnum isn’t a waiting room for some inevitable new order. It might be the order itself.

Thriving in turbulence

This realisation shifts everything. If we’re not transitioning to something but learning to operate within ongoing turbulence, then the skills we need change fundamentally. We can’t keep planning for stability that won’t arrive. We can’t keep optimising systems that are actively transforming. We can’t keep waiting for certainty before we act.

Instead, we’re learning that what Thomas Knüwer identified in creativity applies everywhere: start with uncertainty, not planned outputs. The old consulting model of “assess, plan, execute” doesn’t work when systems are transforming faster than planning cycles.

Organisations navigating extreme uncertainty in 2025 are those that have developed reflexes for rapid sensing and response. They probe, learn, adapt. They run multiple experiments in parallel. They expect failure and learn quickly. They’ve internalised that perfect information never arrives, so they’ve learnt to move with imperfect information instead.

This is what our exploration kept returning to: endings create thresholds, and thresholds are where the interesting work happens. Not in the old room we’re leaving, not in the new room we haven’t reached, but in the doorway itself. Learning to inhabit that space productively – building, creating, connecting, and maintaining agency whilst everything shifts – is the capability that matters now.

Paradoxically, this requires both accepting impermanence and creating moments of closure. Joe Macleod’s observation captures why this matters: digital life’s endless loops – notifications, feeds, open conversations – erode our capacity for completion. When nothing ends, nothing begins either. We need rituals of closure precisely because everything else stays open. These endings, small and deliberate, create islands of completion in a sea of continuity. They’re how we maintain sanity and agency when permanent liminality threatens to dissolve both.

The distribution question we keep avoiding

But we need to be more honest about something we acknowledged but didn’t fully grapple with: abundance, if it arrives, won’t distribute itself equitably. Market forces alone have never ensured fair distribution. That required political struggle, democratic governance, and institutions we’re now told are exhausted.

When we say “we need to share the abundance machines create”, who exactly is “we”? What institutions make these decisions? Through what processes? With what enforcement mechanisms? Democracy is under unprecedented strain – institutions designed for 19th-century problems trying to handle 21st-century complexity at speeds they weren’t built for.

The faith we’ve placed in market mechanisms driving positive change contains an unexamined assumption: that economic optimisation will align with human flourishing. Sometimes it does. Cheap solar panels are genuinely good. But markets alone didn’t create weekends, child labour laws, environmental protections, public education. Those required democratic politics, precisely the capacity we’re told is failing.

Who controls the infrastructure

The risk is that the activities we claim to value – human creativity, care, connection – won’t be valued in practice as AI reshapes society’s economic foundations. That the “caring economy” becomes low-wage service work whilst AI-generated content captures commercial value. That digital twins of our bodies get owned by whoever builds the infrastructure. That AI agents mediating our reality serve their creators’ interests rather than ours. The Big Tech oligarchy didn’t emerge overnight – it’s the culmination of decades of permissive regulation and steadily concentrating power.

This isn’t technophobia. It’s recognising that technology reflects and amplifies existing power relations unless deliberately structured otherwise. And in this transitional moment, when old institutions are failing and new ones haven’t formed, the default is: whoever builds the infrastructure controls it. We’ve long argued that the digital sphere needs democratic governance – not digitising democracy, but democratising digital. Yet in 2025, Big Tech still creates the rules others must follow, with minimal democratic oversight.

What 2025 demands of us

If 2025 has taught us anything, it’s that we need new analytical frameworks adequate to our moment. The old binaries don’t work: optimism vs. pessimism, accelerate vs. slow down, embrace tech vs. reject it. As David Mattin argued at NEXT22, the traditional split between conservatives and progressives is meaningless when inaction itself becomes a choice for uncontrolled disruption. Reality is more complex, more systemic, more deeply interconnected than these frames allow.

Systems thinking becomes essential. Understanding how AI’s energy appetite paradoxically accelerates clean energy infrastructure – that’s second-order thinking we need more of. Recognising that the barriers to robotics involve cost, safety, usability, and trust, not just technological capability – that’s implementation thinking versus possibility thinking.

Holding tensions, not resolving them

We need to hold multiple truths simultaneously. AI threatens human relevance and liberates humans for more meaningful work. Market forces drive real improvements and concentrate power dangerously. Technology enables connection and fragments shared reality. Democracy provides legitimacy and struggles with 21st-century speed. These aren’t contradictions to resolve but tensions to navigate.

Rebuilding institutional capacity

Most urgently, we need to rebuild institutional capacity for collective decision-making. That doesn’t mean restoring old institutions – they’re failing for structural reasons, not temporary ones – but inventing new forms adequate to our challenges. Indy Johar argues:

We need the courage to admit that democracy, in its current form, is broken – not to abandon it but to reforge it as a living capability of society rather than a static system preserved through nostalgia.

This means experimenting with new democratic forms: citizens’ assemblies, participatory budgeting, liquid democracy, distributed governance. As Jon Alexander explored, we need to shift from the consumer paradigm to the citizen paradigm – from passive consumption to active participation. It means building institutions that can perceive, decide, and act at the temporal velocity of 21st-century problems. It means, perhaps most difficult, developing collective capacity to make choices knowing we won’t get them entirely right.

Yet even as we sketch these directions, we must acknowledge what remains unresolved. Throughout this exploration, three critical gaps have surfaced:

  • Power and distribution: We’ve documented how abundance won’t distribute itself equitably, but the mechanisms for ensuring fair distribution remain underspecified.
  • Governance frameworks: We’ve identified the need for new democratic forms adequate to technological velocity, but haven’t detailed how they’d function in practice or gain legitimacy.
  • Non-Western perspectives: Whilst we’ve noted the multipolar shift and China’s manufacturing dominance, we’ve insufficiently engaged with alternative frameworks and innovation centres beyond Western contexts.

These aren’t failures of analysis but frontiers requiring sustained work across institutions, geographies, and disciplines.

The perspective we’re choosing

Our year-long exploration revealed something we didn’t fully expect: the most important choice in 2025 isn’t about technology, but about perspective.

We can see endings as failures – the collapse of systems we invested decades building. Or we can see them as thresholds – portals to possibilities we couldn’t access whilst clinging to what no longer works. We can fear permanent liminality or learn to make it generative. We can despair at institutional failure or seize the space to build better ones.

The technology isn’t the hard part anymore. We have the technical capacity for clean energy abundance, for healthcare that extends healthy lifespan, for AI that augments human work rather than replaces it, for immersive experiences that enrich rather than isolate. The hard part is political: ensuring abundance flows equitably, maintaining democratic legitimacy whilst adapting democratic forms, building the governance capacity to make collective choices at the necessary speed.

What 2025 has given us is clarity about what’s at stake. The energy transition revealed markets can drive real transformation – but technical solutions mean nothing without political capacity to implement them at scale. As Andrew Keen has long argued, we can’t pretend technology inevitably delivers progress or that markets naturally optimise for human good. We can’t assume old institutions will handle new challenges or that simply rejecting technology is an option.

The question – the only question that matters – is what we build in this threshold space, whether we like being here or not.

Beginning where we are

And so, we stand at the end of 2025 with an unusually clear view of our condition. Multiple systems are ending simultaneously. New ones haven’t yet crystallised. This interregnum might prove permanent. The morbid symptoms Gramsci warned about are multiplying.

But endings, we keep insisting, are also beginnings. What we begin depends entirely on the choices we make at the threshold – and whether we have the courage to make them collectively rather than letting them be made for us.

As we concluded earlier this year:

The experiment is underway. We are the experiment.

We don’t have maps for the territory ahead, but we’re learning to navigate without them. We’re developing frameworks for operating in radical uncertainty, for building whilst the ground shifts, for preserving human values amid technological transformation.

This is the work of 2026, and beyond: not predicting the future but creating conditions for futures we can live with. Not restoring what was but discovering what might be possible after the old certainties end. Not achieving perfection but maintaining possibility within chaos.

The digital age is ending. Something else is beginning. We get to help shape what that something is – but only if we abandon the illusion of certainty and start building the capacity to discover answers together.

That’s the perspective 2025 demands. That’s the work ahead. That’s what begins when everything ends at once.

Photo by TungArt7 / Pixabay.