AI is flavour of the month, year and quite possibly the decade. As Martin has outlined, it’s become central to the plans of many of the major technology vendors over the last year. AI is both trendy and, clearly, effective.
But what does that mean for us?
To understand how AI can make our lives better — and our products more compelling — we need to understand what it actually brings to the table. In other words, what is AI, in the sense we’re using it here? It’s certainly not (yet?) the form of conversational intelligence we’ve seen in so many sci-fi movies.
The reality of AI is different – but no less exciting. In essence, what AI does is move us from reactive computing to proactive computing.
From Reactive to Proactive
In the reactive computing era, we had a task we wanted done, and we asked the computer to do it via the interface. The computer would then execute the task through a series of commands predetermined by its programmers.
AI moves that forward, because the computer starts making decisions for us. Here are two examples:
Adobe has its Sensei machine learning platform, which was integrated into all versions of Lightroom this week. It’s an intelligent image improver: machine learning analyses well-edited photographs, determines patterns, and applies them to the image in question. In essence, you ask the computer to make the photo better, and the computer decides how to execute that task. It gets you closer to the finish line – the final photo – before the human has to step in. Pure labour saving at its best. Adobe is bringing this kind of thinking to video as well.
Google has RankBrain – the machine learning element in its search algorithm – which means that oftentimes there’s no human rationale behind search results. RankBrain has determined, through machine learning, that this set of results is highly likely to match this particular search query.
Telling us things before we realised we wanted to know them
Taking it forward another level, we’re already seeing companies surfacing things for us, without our direct intervention. Apple’s been applying machine learning to photos for a while. It’s easy to forget that we don’t have to manually comb through images – we can just ask Siri. I was able to find a particular photo of my wife that she wanted a copy of very quickly, just by asking Siri to show me photos of her in summer 2005. More than that, though, that same machine learning is being used to proactively generate memory videos for me, without me requesting them. And it’s also being used to surface news stories for me as part of the Siri element on the lock screen – something that, multiple publishers tell me, is driving significant levels of traffic.
What do all of these things have in common? They’re sorting huge volumes of data in a way that makes that data more useful to humans. We’ve combed through all this news for you and, based on your previous actions, these are the stories you want to read. We’ve analysed all your photos for you, so here are the best ones.
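To make that concrete, here’s a minimal sketch of the pattern in Python: build a profile from a user’s previous actions, score new items against it, and surface the top matches. The stories, topics and weights are all invented for illustration – no vendor’s actual ranking works this simply.

```python
# Hypothetical sketch of "surface content based on previous actions".
# All data below is made up for illustration.

def build_profile(read_history):
    """Count how often each topic appears in stories the user has read."""
    profile = {}
    for story in read_history:
        for topic in story["topics"]:
            profile[topic] = profile.get(topic, 0) + 1
    return profile

def surface(stories, profile, top_n=2):
    """Proactively pick the stories that best match the user's profile."""
    def score(story):
        return sum(profile.get(topic, 0) for topic in story["topics"])
    return sorted(stories, key=score, reverse=True)[:top_n]

history = [
    {"title": "WWDC recap", "topics": ["apple", "developers"]},
    {"title": "Siri update", "topics": ["apple", "ai"]},
]
candidates = [
    {"title": "Cricket results", "topics": ["sport"]},
    {"title": "Core ML deep dive", "topics": ["apple", "ai", "developers"]},
    {"title": "TensorFlow tutorial", "topics": ["ai"]},
]
picked = surface(candidates, build_profile(history))
```

Real systems replace the topic counts with learned models, but the shape is the same: the computer decides what to show before the user asks.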
In essence, AI is both the answer to the problems of the digital age – and a step forwards for that age.
Dealing with the data hoard
The last few decades of digital development have seen massive, almost exponential growth in the amount of data we produce. We’re creating data about where we are, what we do, and the people we’re connected to. We’re creating imagery and video at an unprecedented rate. We are in danger of being overwhelmed by the data we produce.
That’s where AI comes in. Much as Google came to prominence by sifting the mass of published information available to us, and giving us a user-friendly interface for finding what we want to find, machine learning is now creating meaning out of the vast swathes of data we generate. And AI can then make that meaning useful.
This is being done at huge scale. Microsoft is committing $50m to use AI in climate research, for example. It can operate at your scale, too.
How could this add value to your product?
- Could an AI suggest the best parking spot for you near your destination, rather than forcing you to search for parking?
- Could you surface data from your product’s analytics proactively, rather than the user going in to find it?
- Can you analyse usage patterns, to find ways of reconfiguring your product around the time of day, week or year?
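As one hedged sketch of that last idea: you could tally which feature a user reaches for at each hour of the day, then proactively surface it at that hour. The event data and feature names here are invented.

```python
# Hypothetical sketch: learn which feature a user favours at each hour
# of the day from past usage events, then proactively suggest it.
from collections import Counter, defaultdict

def learn_habits(events):
    """events: list of (hour, feature) pairs from past usage."""
    by_hour = defaultdict(Counter)
    for hour, feature in events:
        by_hour[hour][feature] += 1
    return by_hour

def suggest(by_hour, hour):
    """Return the feature used most often at this hour, if any."""
    if not by_hour[hour]:
        return None
    return by_hour[hour].most_common(1)[0][0]

# Invented usage log: this user reads news over breakfast,
# browses photos in the evening.
events = [(8, "news"), (8, "news"), (8, "weather"), (20, "photos")]
habits = learn_habits(events)
```

It’s crude, but it captures the shift: the product stops waiting to be asked and starts anticipating.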
In short, how can you use these tools to help the user, not merely obey them? We’re edging towards the point where the devices we all carry around might actually be able to claim the name they had in the pre-smartphone era: personal digital assistants. Like the best human assistants, they don’t just wait for instructions; they proactively go ahead and do the things that will make their employers’ lives better.
How do you go AI-first?
If you want to be AI-first, without spending a fortune on AI skills that might not be readily available, you have to do two things:
Have the skills to hook into the ML layers being built right into operating systems for you. For example, Apple has Core ML and Google is integrating TensorFlow, both of which allow you to tap into their machine learning expertise, operating at the chip level. In their fierce battle for competitive advantage, they’re providing us with better tools for their platforms.
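The shape of that first step might look something like this. Note that `platform_classify` is a stub standing in for a real Core ML or TensorFlow call – the actual APIs differ – but the division of labour is the point: the platform supplies the model, and your code decides what to do with its output.

```python
# Hypothetical sketch of the "hook into the platform's ML layer" pattern.
# platform_classify is a stub for an on-device model call (e.g. Core ML
# or TensorFlow on-device); real APIs and signatures differ.

def platform_classify(image_bytes):
    """Stub: real platform APIs return labels with confidence scores,
    so we fake a plausible result here."""
    return [("beach", 0.91), ("sunset", 0.72)]

def tag_photo(image_bytes, threshold=0.8):
    """App-level code: keep only labels the model is confident about."""
    return [label for label, conf in platform_classify(image_bytes)
            if conf >= threshold]

tags = tag_photo(b"fake-image-bytes")
```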
Think about how your product can proactively help people – suggesting answers to questions they hadn’t thought of yet, or making their data available to them in more useful ways, to enhance their lives.
That’s the other meaning of AI first – the AI solves your problem before you realise you have it. And that’s truly a step forwards for digital.