Digital Fix – Fix Digital
By Martin Recke
13/04/2018 | Earlier this year Arianna Huffington gained much attention with her essay summing up signs of The Great Awakening in the tech industries. The piece is a good but rather depressing read. And it is only one of many voices from the digitally driven world analysing and describing developments that were unforeseen – some even unintended – and that now, it seems, need fixing.
Last year, at the NEXT Conference — as well as on this blog — we examined in great depth why digital sucks. This year, it’s time to discuss how we are going to fix digital before it fixes us.
Before we can do this, we need to take a closer look into the mirror and scrutinise our digital egos. Are we still human? Or do we really believe that in some not so distant time AI will supersede human beings as the highest form of life on earth?
This would imply that there’s nothing genuinely human that couldn’t be automated or at least simulated by machines. To us, this sounds like a nightmarish dystopia.
Why on earth should we create machines that could possibly enslave the entire human race? But hey, that’s what we already did with the creation of Facebook, at least in part. Or, to be more general, hasn’t the entire internet turned into that human-enslaving machine? Have we already handed over our minds to AI? There is a fine line between dependence on necessary infrastructure and dependence on a drug. The internet was intended as the former, but has turned into the latter for many.
One of the early prophets of doom was author Jonathan Franzen, who saw things coming for a while. Here’s a quote from The Atlantic (2013):
In the mid-’90s, when I started to feel worried about what was happening to literature with the introduction of the third screen, and with the increasingly materialistic view of human nature that psychopharmacology was producing, I was looking for some way to describe how technology and consumerism feed on each other and take over our lives. How seductive and invasive but also unsatisfying they are. How we go back to them more and more, because they’re unsatisfying, and become ever more dependent on them. The groupthink of the Internet and the constant electronic stimulation of the devices start to erode the very notion of an individual who is capable of, say, producing a novel. The phrase I reached for to describe all this was “an infernal machine.” Something definitionally consumerist, something totalitarian in its exclusion of other ways of being, something that appears in the world and manufactures our desires through its own developmental logic, something that does damage but just seems to keep perpetuating itself.
In short, technology married consumerism and transformed itself into an infernal machine. This techno-consumerism now eats up our lives. We live in a world where technology is worshipped as a solution to each and every problem. Silicon Valley has already started to institutionalise its religious beliefs. For lack of a better term, Swiss entrepreneur Wolfram Klingler calls this newfangled religion Digitalism. Author and academic John Naughton speaks of the Church of Tech(nopoly).
One of the central topics of religion is the relationship between freedom and dependence. To properly acknowledge our dependencies is a prerequisite for freedom. Otherwise, we get enmeshed in an epic struggle with the unknown. For any addict, the inevitable first step is to be honest about the addiction. We need to admit that we are powerless over our digital fix.
But what then? The classic twelve-step programme refers to a higher power that could restore us to sanity. Where do we find this higher power? In times of the internet, probably the most powerful technological creation of all time, that’s no easy task. Besides God, possible candidates include nature, consciousness, existential freedom, or science. Regardless of which higher power we choose to refer to, we enter the realm of belief.
And that’s where the problem is. The filter bubbles of Google and Facebook constantly confirm our already existing beliefs, as venture capitalist and early Facebook advisor Roger McNamee points out:
The result is that everyone sees a different version of the internet tailored to create the illusion that everyone else agrees with them. Continuous reinforcement of existing beliefs tends to entrench those beliefs more deeply, while also making them more extreme and resistant to contrary facts.
That’s interesting to note. Obviously, the technological evolution has forced us to rethink the big questions of life, including our beliefs. That’s a good thing, as a first step. But all the necessary reflection alone doesn’t solve our problems. If we don’t want our own artefacts — like technology, the internet, Facebook, or AI — to become our overlords, we need some kind of intervention. Otherwise, things might get out of control.
Or perhaps they already are, and we have to restore control. The internet itself was designed to be as decentralised as possible, to work without a central authority where feasible, and to distribute control as evenly as possible. It is a great irony that the internet gave rise to some of the most powerful companies in history: Google, Apple, Facebook, and Amazon. In short: GAFA. Not to mention other global players like Tencent or Alibaba.
The heated political debate after the 2016 US election demonstrated a growing willingness of the political system in general to intervene. Marketing professor Scott Galloway postulates that the GAFA companies should be broken up. They have become too powerful and they threaten, not enhance, capitalism, he argues. Even tech analyst Ben Thompson occasionally pleads for regulation.
The internet was built upon a certain ideology, sometimes called the Californian Ideology. It has a strong libertarian element that is still prevalent in the higher ranks of the tech industry today, cf. Facebook’s old mantra “move fast and break things”. Another, somewhat paradoxical aspect is the technological determinism of Silicon Valley – the assumption that a society’s technology determines the development of its social structure and cultural values. Increasingly, that is exactly how our world looks, but shouldn’t it be the other way around?
As early as 2011, filmmaker Adam Curtis concluded that the Californian Ideology failed to live up to its claims:
The original promise of the Californian Ideology was that the computers would liberate us from all the old forms of political control, and we would become Randian heroes, in control of our own destiny. Instead, today, we feel the opposite – that we are helpless components in a global system – a system that is controlled by a rigid logic that we are powerless to challenge or to change.
It is time to abandon the ideologies of Silicon Valley. Their key assumptions include that digital equals progress, and that the human brain is (or at least works like) a computer. Neither of these is true. Over the past few years, we’ve learned the hard way that digital technology is just a tool like other tools and can be used for all kinds of purposes, be it progressive or reactionary. This means that we, as the digital industry, are a means to an end; and as such are not exempt from scrutiny should the ‘end’ be seen as controversial.
The second assumption is even more critical. The human brain as a computer is a powerful metaphor, but it is utterly wrong and should be abandoned as soon as possible, psychologist Robert Epstein asserts. This metaphor helps us neither to better understand how our brain works nor to more thoroughly inform our technological progress. Instead, it fosters the view of humans as entities that can (and possibly should) be emulated, replaced and superseded by machines that do everything humans can do – but better, faster, and cheaper.
It is more helpful to see humans and machines at opposite ends of a spectrum. Computers are very good at things humans are bad at, and vice versa. In this view, the singularity (the merger of humans and machines) becomes very unlikely. Thus, the whole singularity train of thought is flawed and will inevitably end in a train wreck. Bye-bye Ray Kurzweil, see you in 2045.
And what’s more, the notion that smart technology will soon take over most, if not all, of our jobs, requiring a universal basic income to tranquilise the masses of unemployed people, has a strong odour of totalitarianism. In The Origins of Totalitarianism, published in 1951, political theorist Hannah Arendt wrote: “We may say that radical evil has emerged in connection with a system in which all men have become equally superfluous.”
The libertarian ideology of Silicon Valley has a dark, authoritarian side. It can perhaps be described as a new high modernism without strong state authority, while retaining the unfaltering confidence in science and technology as a means to reorder the social and natural world.
None of these attempts, neither totalitarianism nor high modernism, ended well. So why not learn from history?
How to Fix the Future is the latest book by Andrew Keen. He established himself early on as a digital sceptic and internet critic (and was ridiculed a lot for that). Thanks to his classical education, he was able to see some dangerous trends at least ten years earlier than others. In three books, he explained how today’s internet is killing our culture, how the social revolution is dividing, diminishing, and disorienting us, before finally stating that the internet is not the answer.
And he remains one step ahead. To solve the myriad problems the digital world faces in 2018, Andrew refers to a humanist approach – smart human beings instead of smart technology. Again, this is probably anathema to the church of tech, a religion that basically believes every problem can and should be solved by technology. Digital humanism, by contrast, puts human beings (the people formerly known as users) first and restores tech to its proper role as a means to an end.
This of course requires a definition of what is human. And that definition isn’t self-evident. Today, we often think either in terms of a coming singularity of human and tech, or of a perpetual duality between humans and machines – and also between the tech industry and the rest of the world, as Keen notes. Understanding who we are requires knowing where we come from. And that look in the rear-view mirror has been done before.
The 14th-century humanists started a cultural revolution that peaked in the Renaissance era. They reconnected with the human heritage of the ancient world, repurposing it for the challenges of their times on the brink of early modernity. This movement was closely related to education – in fact, what we still call classical education traces its roots back to Renaissance Humanism. What we need today is a new cultural revolution, a new Renaissance, a Digital Humanism.
But there is no silver bullet, no single solution. Neither the free market on its own nor government regulation alone can fix the problems the digital revolution has created, Andrew Keen writes:
What’s needed, instead, is a strategy combining regulation, civic responsibility, consumer choice, competitive innovation, and educational solutions. It was this multifaceted approach that eventually fixed many of the most salient problems of the industrial revolution. And today we need an equally combinatorial strategy if we are to confront the many social, economic, political, and existential challenges triggered by the digital revolution.
He sums up his five bullets for fixing the future as follows:
- Government or legal regulation
- Competitive innovation
- Social responsibility by citizens
- Consumer choice
- Education
It’s important to note that all five strategies need to be combined for change to succeed. We need a fine balance of regulation and innovation, as well as a redesign of education that finds new ways for us to keep learning ourselves and to educate the next generations for the world we will be living in. But the key factor will be how we act responsibly as social beings, as company leaders, as citizens, and as consumers. To paraphrase JFK: Ask not what your digital world can do for you — ask what you can do for your digital world!
In September 2017, some 150 people, from entrepreneurs to designers to philosophers, signed The Copenhagen Letter. As of April 2018, there are more than 4,200 signatures on it. The letter, addressed to everyone who shapes technology today, is a short, concise manifesto that covers many of the topics we’ve discussed so far. Here’s how it starts:
We live in a world where technology is consuming society, ethics, and our core existence. It is time to take responsibility for the world we are creating. Time to put humans before business. Time to replace the empty rhetoric of “building a better world” with a commitment to real action. It is time to organize, and to hold each other accountable.
The letter then moves on with five bullet points, stating that:
- tech is not above us
- progress is more than innovation
- we need digital citizens, not mere consumers
- design for addiction, deception, or control will not be tolerated
- human-centred design must be replaced by humanity-centred design.
The text even calls for a new Renaissance. How cool is that? You can sign the letter here.
In November 2017, Rob Girling and Emilia Palaveeva called for humanity-centred design and outlined how the design discipline should move on from the human-centred design paradigm. Their question is: Are we designing a world we all want to live in today and tomorrow? In February 2018, Tristan Harris announced a newly formed coalition of technologists called the Center for Humane Technology, rebranding their Time Well Spent initiative, founded three years ago.
The Center touts humane design as the solution. The differences, if there are any, are subtle. So maybe we should add a sixth bullet point to the list: humane (or humanity-centred) design. And while we are at it: Why not redesign the entire internet, i.e. the technological platform? We’ll need engineering and design, as well as regulation, civic responsibility, consumer choice, innovation, and education.
To put it in product design terms: We need proper product management for the internet. But who is going to be the product owner? The only possible answer is: we, the people.
For way too long now, the digital industry has failed to notice that it causes disruption not only of outdated business models but also of the very fabric of society. This must change, and it will be changed, sooner or later, one way or the other. Yes, society will indeed change profoundly through the digital revolution, and it already has.
That's inevitable, and not necessarily a bad thing. But no, the direction of change is not predetermined by some integral forces of technology itself. We, the human beings, need to be at the helm of technological innovation, to understand what's happening and to drive the change.
That's our social responsibility. And that’s what makes us human in the first place.