One of the strongest things about Dave Mattin’s piece on augmented modernity is that I can’t quite tell if it’s a utopian vision or a dystopian one. That’s a good sign, because technology rarely creates one or the other. As we keep saying, technology is just a tool, and what we do with it determines the outcomes.
Mass production was a boon in the way it lifted people from poverty and raised standards of living – but it brought consequences in terms of debt and overwhelming amounts of clutter.
The information age has been a boon in the unprecedented access to cheap communications and information we have – but it brought consequences in terms of attention overload, filter bubbles and misinformation.
The best way to make sure that an augmented modernity is more utopian than dystopian is to go in with the dangers in mind, and protect the emerging technology from them: pretty much the reverse of what we did with the internet and social media. Hopefully, we’ll get a better outcome.
If Dave is right, we’ll be exposing the very emotional and spiritual core of ourselves to technology in a way we never have before. And if AR, VR and AI all come to be, they’ll be part of our landscape of interaction, in a way that will reshape our brains and our life values. We will use them: the human appetite for consuming these experiences is clear. Look at the rapid rise of gaming from the bedroom coders of the 1980s to the massive international industry it is now.
It seems inevitable that future generations will live in worlds that mix the physical reality we experience, with overlaid elements of human creativity. However unsettling that might seem to us pre-augmented dinosaurs, it’s bearing down on us fast.
Making this safe – or as safe as it can be — comes down to: who controls it? And do you trust them?
There’s a word for systemic trust mechanisms: regulation
Now, we put our trust in others all the time. Every time we get into a car or an airplane – or even a tall building – we trust that designers have done their best, that engineers haven’t cut corners and that the maintenance people have maintained it.
This isn’t blind trust, though – it’s maintained trust. Every profession where people’s lives are at stake if their work goes wrong has some element of regulation and monitoring. It could well be that that’s the direction we have to move in with these emerging technologies.
Can we afford to have a Facebook-like situation with AI, where the power to shape the morals, attitudes and politics of the beings we interact with lies with a single corporation? No. That’s an insane amount of power in, essentially, one person’s hands. If the world Dave foresees comes to pass, we need to start putting the relevant regulation in place as soon as we can. And, right now, we need it to be light-touch.
The algorithms that will drive these systems are not inherently neutral – they will have the biases and politics of the people who create them encoded into them, and the more immersive the experience of interacting with them becomes, the more power those biases will have. We are social creatures who pick up our politics, ethics and social norms from our web of relationships. If some of those relationships are with artificial entities under corporate control, we need some system of regulation and oversight; otherwise the power of wholesale societal manipulation could fall right into the hands of a few corporations. That’s a dystopian sci-fi novel right there.
The case is already made: by gaming
With the news this week that the World Health Organisation has classified gaming addiction as a mental disorder, some form of regulation seems to be in order. We’re looking at intensely stimulating activities that can be addictive, and that addiction is being exploited for commercial reasons. The tobacco industry must be looking on with a tear of pride glistening down its cheek.
The challenge, as with all legislation in emerging fields, is to protect people from excesses without stifling innovation. For example, the role of “loot boxes” – rewards with unknown contents which are won or purchased in online games – has come under scrutiny in the last few months, as it’s seen as a form of gambling:
Think about how many times you’ve paid real-life money in a game for the chance to win an item you really wanted. Was it a nice Overwatch skin? Perhaps it was a coveted Hearthstone card. How many times did you not get the item you wanted, then immediately buy in for another chance to hit the big time?
This may all be for virtual items – but that doesn’t make them any less important. And the value of digital items has to be part of a legislative framework for an augmented modernity. Just because an object doesn’t have a physical existence doesn’t mean that people don’t value it – and suffer from being deprived of it. These are, remember, immersive experiences. And as we continue to augment the world around us digitally, the line between physical and digital goods will grow paper-thin.
The intensity imperative
The more intense the experience, the more we need to think about this. Take, for example, sex robots. Not a prospect I find appealing – but many clearly do:
However, sex is not a neutral act. It has an impact on our bodies, in particular causing the release of oxytocin, which stimulates at least some level of trust and bonding between partners. Although the role of sex-triggered oxytocin in human pair bonding has been somewhat oversold, often for religious or moral reasons, over an extended period of time it can create new feelings of love and trust. After about six weeks, new neural pathways start forming. Could people actually start falling in love with their sex robots? It’s quite plausible. If so, what are the consequences for human marriage, for long-term emotional bonding? If people have long-term relationships with essentially compliant creations, how does that affect their outlook in dealing with others?
And if people choose “easier” relationships with AI-driven robots, what about reproduction? We’re a long way from any sort of “lack of population” crisis and, if anything, the reverse is true. But we’re already seeing some impact of technology on human reproduction.
Our augmented utopia will break down pretty quickly without a steady supply of people to create, maintain and develop it.
Preparing for an augmented modernity
So, yes, this is all highly theoretical right now. And yes, we’re only scratching the surface of the issues that will arise if these technologies do come. If the last two decades have taught us anything, though, it’s that it’s better to plan for the worst and enjoy the best, rather than the exact inverse.
A sane government – if such a thing exists any more – would avoid the knee-jerk legislation that follows a big crisis by proactively building small amounts of legislation around emerging technologies. And a big crisis is almost inevitable. Maybe I’m a pre-augmented prude, but the idea of a sex robot crisis fills me with deep unease…
Can we legislate to prevent AIs reshaping our world views and politics in ways we’re unaware of? Can we legislate to give digital goods similar protections to physical ones? Can we legislate to see online abuse treated with equal standing to physical abuse?
Right now, that last one might seem like giving two disparate things – one far more serious than the other – equal standing. However, the more immersed we are in an augmented environment, the more our brains react as if it were real. The distinction becomes meaningless, and the harms just as real.
None of us reading this might live to see the days of augmented modernity – but it’s still incumbent on us, as Dave puts it, to look “at the new technologies arriving now and draw lines outwards”, and thus prepare the way for our societies to deal with it.
Martin has also written his take on augmented modernity.