Living with Algorithms: a cynical utopian’s guide
Algorithms are powerful, and getting more so as they're infused with machine learning. Can we learn from the past in how we live with them?
When I think of algorithms these days, all too often I think of ospreys. The link between the two might seem rather obscure, but bear with me on this. The connection is rooted in our tendency, as a species, to get so excited about something that we lose perspective, and fail to plan properly for the downsides.
Like many children of the 70s, I grew up in an environment where artificial was best. Forget breast milk! Formula is scientifically formulated to be best for your baby! Wipe down every surface with disinfectant, all the time! Boost your agricultural output with pesticides!
All those positions seemed logical and exciting at the time, but our experience in the years since has revealed problems with all of them. For example, pesticides built up in the ecosystem, becoming more and more concentrated as they moved up the food chain, until they had the effect — in the US at least — of causing ospreys to lay significantly weaker eggs. Those eggs cracked when the ospreys tried to brood them. A catastrophic decline in osprey numbers, and those of many other species, followed, as captured by the book Silent Spring. Decades later, we’re slowly recovering our ecosystems from that incautious enthusiasm.
Power of the algorithms
And now, the power of algorithms and machine learning is at the heart of so much of what we do. Governments and corporations alike sing a familiar refrain. Do you have a problem? Let an algorithm solve it! That enthusiasm for “chemicals” of the 60s and 70s is reborn in digital form.
And yet, the UK has given us a salutary example of what happens when you put thoughtless faith in an algorithm. Faced with the problem that school students couldn’t take their traditional end-of-school exams because of lockdown measures, the education authorities reached for an algorithmic solution that would extrapolate “true” grades from teachers’ predictions.
It would be fair to say that it did not go well. The “rogue” algorithm, as it was termed, was quickly abandoned, and the decision-making handed back to the teachers.
Indeed, if you judge things by news coverage, algorithms are evil. Facebook’s algorithm is promoting civic division and strife!
The bright side of an algorithmic world
And yet, algorithms and their more sophisticated cousin, machine learning, have unquestionably enhanced our lives. Google’s algorithm, now informed by machine learning, has given us unprecedented access to information. My vast volumes of photos are rendered more accessible by algorithms, and editing them is simplified by machine-learning-trained systems.
Anywhere where we are processing vast volumes of data, algorithms are invaluable assistants to human endeavour. But they are just that: assistants. Too often, we use them as final arbiters, with decision-making completely outsourced to the system. And that’s where things can go wrong.
We are perhaps learning faster than we have in the past. We are subjecting social media algorithms to more scrutiny than pesticides received 50 years ago. Equally, as algorithmic trading has become central to the stock markets, the impacts of those system-driven trading decisions are being scrutinised.
Digital cynicism and evangelism
There’s a careful line to walk here between algorithm cynicism and algorithm evangelism. We are building out the powerful tools of the digital age, enabling our silicon assistants to support us in many of our endeavours. But we need to deploy these tools carefully, with an eye to unintended consequences, and attention to where they are valuable and where they are less useful than more traditional human-centric approaches.
And we also need to remember that algorithms are not some pure, abstract form of information processing. They have our own biases, ignorance and prejudices built in. Systems trained largely on white faces have real problems recognising those of other ethnicities.
As algorithmic systems have increasing influence over our lives, we need to subject them to greater scrutiny — and criticism.
The Skeptical Utopian
As Dr. Daan Kolkman, a research fellow in computational sociology at the Technical University of Eindhoven, puts it:
Yes, it’s certainly worth scrutinising the data and methodologies behind automated systems. Yes, tools for explainable algorithms are a welcome addition to our toolkit. We need to think about the fairness, accountability, and transparency of algorithms. Our efforts should not end there. We need to think carefully about how we can create critical audiences for the millions of algorithms that impact our daily life.
Many speakers at NEXT over the years have suggested that the original sin of the internet was that the techno-utopianism that ruled its early days missed the fact that, when you connect everyone, you connect bad people as well as good people.
Let’s not let the second wave of digital evolution fall into the same error.
Leashing the algorithms
We are building the tools that will allow us to unleash and use the vast stores of data we’re creating. They genuinely have the potential to transform life for the better. Look at the news yesterday that Microsoft’s AI team has had great success in labelling photographs. That has the potential to transform the experience of the web for blind and partially sighted people.
But let’s test those tools against their potential harm, against our own biases, and even just against their applicability. Just because we’re building these powerful tools doesn’t mean they can solve every problem.
A pneumatic drill is an incredibly powerful tool — but I doubt anyone would want their ears pierced by it.