The Digital Fixers that will stop us repeating tech’s mistakes

The obvious answers and fixes are not working. We need to think more deeply about building a positive, sustainable digital future, rather than settling for today’s wave of band-aids on a deep wound.

Twelve years ago, when Jack Dorsey typed the words “just setting up my twttr” into the new status service he was building in the offices of Ev Williams’s failing podcasting company Odeo, there was absolutely no chance he imagined that action would eventually lead to him testifying before a US Senate committee, exploring the role of his service in manipulating elections.

Seeing Facebook, Twitter and Google (symbolically represented by an empty chair) dragged before government to explain themselves is as powerful a symbol of the mess we’ve made of digital as you could ask for. And the fixes to date are NOT working.

We’ve also seen that just replicating the functions of social media sites in a more open way is not, in and of itself, enough. There’s been plenty of buzz around a service called Mastodon, which pretty much replicates what Twitter does, but in an open, federated way. Anyone can set up and run a Mastodon instance, if they have a server and the technical nous to do so. But all the servers talk to each other. No-one “owns” Mastodon in the way Twitter is owned.

That open, federated structure has done nothing to prevent the same sorts of social media mobs we’ve seen on Twitter, though. Actor and online personality Wil Wheaton was driven off the service by a concerted campaign by people who disliked some of his past actions.

The obvious answers and fixes are not working. We need to think more deeply about building a positive, sustainable digital future, rather than settling for today’s wave of band-aids on a deep wound.

We need digital thinkers capable of imagining fixes on a bigger scale.

Andrew Keen

One of our opening speakers, Andrew Keen, has a decade-long history of scepticism about the internet. From The Cult of the Amateur to The Internet Is Not the Answer, he was often a derided voice for questioning the techno-utopianism of the day. He now seems more like a Cassandra: predicting a future that would come to pass, but never being believed.

But now, as events prove him ever more perceptive, he has taken the opposite course: writing a book about fixing the future.

As he puts it in the book’s introduction:

This is a maybe book, based on the belief that the digital revolution can, like the industrial revolution, be mostly successfully tamed, managed, and reformed. It hopes that the best features of this transformation—increased innovation, transparency, creativity, even a dose of healthy disruption—might make the world a better place. And it outlines a series of legislative, economic, regulatory, educational, and ethical reforms that can, if implemented correctly, help fix our common future.

And to fix the missteps we’ve taken, we need to think big, to make sure the next decade takes us to a better place.

Amplifying tech’s fixers

Ayesha Khanna

This is why the programme that the NEXT team have lined up for us later in the month is so important. Yes, Digital Sucks, as we proved last year. But Digital is not going away. The genie does not go back in the bottle, and people are not going to give up their phones in favour of carrier pigeons and town criers.

And so we need to amplify the voices of those who do have a way forward to a better version of digital.

For example, one of our early speakers is a critical thinker about the rapidly growing field of AI – one of those fields where, if we bungle its adoption, we’ll have an even worse problem on our hands. Ayesha Khanna has been a powerful voice calling for its adoption – but also for a sensible discussion about governance, and for the involvement of a diverse range of people in the field.

And, as she outlined in an interview with Singapore’s Peak magazine, we need diverse voices in the AI discussion:

Not only is there a gap in global leadership on ethical design, governance and use of AI, what’s worse is, when they do talk about it, it’s the Western experts talking to one another. No one is asking the Chinese, Indians or Japanese how they believe AI should be governed. Yet, the impact of AI is going to be felt most by billions of people in Asia.

Practical tech diversity

Amber Case

That’s at the high-level, international governance end of the scale. But equally, we have speakers like Amber Case who will explore very practical means of building more positive products – and of making more positive, healthy choices in our own lives, as she already outlined on this blog.

She’s also been working on the future of AI, and is concerned that we are worrying about the wrong things, as she outlined in a blog post:

For a vivid illustration of AI’s actual limitations, try out any number of the computer programs which use an AI to replicate a deeply human skill — creating a realistic painting. The results are comical, often disturbing, and wildly, wildly off-base. Despite using some of the most powerful “AIs” on the market, the infantile results convey just how little we have to worry about computers replacing us anytime soon.

Instead, she is concentrating less on the AIs themselves, and more on the people helping to guide and shape them:

We know that machine learning works better with human collaboration. This is a concept inherent in cybernetics: A feedback loop ensuring that human input and machine input go hand in hand. When done well, it creates a “living”, more organic system.

And the diversity of people and thinking involved will have a huge impact on what comes next.

If we do this thinking now, and make some smart, considered and pragmatic decisions, maybe the next wave of tech innovators won’t be hauled up in front of government to explain exactly what the hell went wrong.


Photo by Chetan Menaria on Unsplash