Assistive Intelligence will shape our future, but we need to think about AI ethics first

Almost without us noticing, AI is permeating our lives, our businesses and our nations. It would be foolish to ignore it — but equally foolish not to ask some serious questions about the ethics of what we're doing.


AI isn’t some distant, far-off goal. Artificial intelligence exists, and you probably have it in your pocket. Increasingly, our phones are little engines of machine learning, creating useful insights from the growing volumes of data we own.

Look, for example, at Apple’s announcement of Live Text, a machine learning-based technology that will examine all your photos and make any text within them both searchable and copyable. We don’t have to ask our computer to perform OCR; it’s just doing it for us.
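
Live Text itself is Apple’s closed, on-device implementation, but the manual step it removes looks roughly like the sketch below. It assumes the open-source Tesseract engine via pytesseract, and a hypothetical photo file, as stand-ins rather than anything Apple actually ships:

```python
# A rough sketch of the "ask the computer to perform OCR" step that Live Text
# now handles proactively. Uses the open-source Tesseract engine via pytesseract
# as an illustrative stand-in, not Apple's on-device implementation.
from PIL import Image
import pytesseract

# Hypothetical photo, purely for illustration.
photo = Image.open("holiday_snap.jpg")

# Pull out any recognisable text so it can be searched or copied.
text = pytesseract.image_to_string(photo)
print(text)
```

The point of Live Text is that nobody has to write or run even this much code; the device quietly does the equivalent for every photo you take.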

It’s no longer an obedient slave; it’s now a proactive digital assistant. And that should change how we think about our devices.

The PDA, reborn for the AI age

Those of you with long enough tech memories might remember the short-lived era of the PDA. Personal digital assistants, devices like the Palm Pilot or the Handspring Visor, were small, hand-held computers of the late 1990s and early 2000s. They had a brief period in the sun before the smartphone juggernaut crushed them and left their shattered remains on the road.

It’s time to reclaim that name, though: not for a device, but for AI, which is slowly turning our devices into true personal digital assistants. Our computers increasingly don’t just react to our commands; they proactively look for ways to help us.

Apple’s on-device machine learning is, comparatively, baby-level stuff. Impressive though it is, it’s all happening on your device, using the Neural Engine built into Apple’s chips for the last few years. Take that sort of utility and expand it into the corporate realm, or the governmental one, powered by farms of machine learning-capable servers. There must be incredible potential in there to proactively extract useful insights from vast data sets.

The Corporate Digital Assistant

Let’s welcome the CDA, the corporate digital assistant, which combs the ever-increasing data every organisation creates to extract genuine insight and intelligence. So, should we all be hiring a bunch of machine learning experts? Well, no. AI is in the process of being productised. So-called “no code” or “low code” systems are emerging that allow non-specialist companies to start applying AI to their businesses. Cloud AI is another new frontier: all the usual suspects, Amazon Web Services, Microsoft Azure and Google Cloud, will happily sell you AI products as a service.
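
To get a feel for how low that barrier has become, here is a minimal sketch of the AI-as-a-service pattern. It assumes AWS Comprehend via the boto3 SDK purely as one example of the genre; Azure and Google Cloud offer broadly equivalent services:

```python
# Minimal sketch of the AI-as-a-service pattern: send text to a cloud provider's
# pre-built models and get structured insight back. Assumes AWS Comprehend via
# boto3, purely as one example of the genre.
import boto3

comprehend = boto3.client("comprehend", region_name="eu-west-1")

# Hypothetical customer feedback an organisation might want to mine.
feedback = "The new dashboard from Acme Analytics is confusing, but the London support team was excellent."

sentiment = comprehend.detect_sentiment(Text=feedback, LanguageCode="en")
entities = comprehend.detect_entities(Text=feedback, LanguageCode="en")

print(sentiment["Sentiment"])                     # overall tone, e.g. MIXED
print([e["Text"] for e in entities["Entities"]])  # named entities spotted in the text
```

The telling part is what the sketch doesn’t show: that feedback leaves your own infrastructure and is processed by someone else’s models, which is precisely the trade-off the next paragraph is about.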

This is good news — but something we should treat with caution. As the Future Today Institute’s annual tech trends report highlighted, most AI is still closed source. The use of AI-as-a-service will require us to hand large amounts of data to third parties. If the last decade has taught us anything, it’s that if we’re going to do that, we need to start thinking about the ethics of AI now, before the problems start happening.

Exploring the ethics of AI

For the last few years, Future Today Institute founder Amy Webb has been writing and thinking about how to address this. Her focus has often been less on the technology, and more on the motivations of the people creating it:

Knowing that we cannot possibly write a set of strict commandments to follow, should we, instead, focus our attention on the humans building the systems? These people–A.I.’s tribes–should be asking themselves uncomfortable questions…

And she has a good list of exactly the kind of uncomfortable questions they need to be asking. But we all need to ask ourselves some serious questions about the future of this technology, as individuals, as corporations, and as nations. Those discussions are already underway. As the FTI report points out:

A new wave of AI nationalism is rising as governments institute new restrictions on M&A and investment activity.

And:

Future wars will be fought in code, using data and algorithms as powerful weapons.

Forget gadgets, assistive intelligence is the future

For the last couple of decades or so, we’ve been very focused on hardware, and the phone in particular. Too much tech coverage focuses on the question of what will be the next big gadget.

That’s very much the wrong question.

The innovations of the next decade are likely to be driven by software, and the explosive growth promised by AI, not by some successor to the smartphone.

The 2000s were the decade of connection, as the internet hit the mainstream. The 2010s were the era of data collection, one we heralded way back in 2011 when our conference theme was Data Love. Facilitated by social networks and the sensors in our smartphones, humanity generated unprecedented volumes of data. Because of that, the 2020s are likely to be the decade of AI: the decade when the data collected by those networks finally goes to work.

AI = Alien Intelligence

The problem is that our concept of AI is still rooted in the idea of a Terminator-like robot. AI won’t create intelligence like ours; it’ll create something quite different. Amy Webb again:

I’ve never liked the term “artificial intelligence.” I think it should be viewed more like alien intelligence—an intelligence unlike ours. This is why I think the tests we build should be situational. If we’re building a system that’s intended to help us out at work, then I suggest that the test should be a meaningful contribution test.

And that’s a good lens through which to view AI. What meaningful contribution is this machine-based intelligence making to our goals, lives, and ambitions? But we also need to ask: what price do we pay for that? In the tension between those two questions, we can start to navigate our way to an AI-assisted future. And maybe we can do so without repeating the mistakes of the last few decades.

For all the controversy about, say, Facebook’s use of data to target people, the potential of AI to unlock ever bigger sets of data is vast. It will open that data up in meaningful, useful and actionable ways that will both increase our hunger for data and create an incredible potential for good, and for harm.


Amy Webb is our guest on the next episode of What’s Next.

Photo by Alina Grubnyak on Unsplash