Apple Intelligence: when AI becomes personal assistive intelligence

Apple’s AI play is here — and it looks very different, and a lot more personal, than what others are doing.

Apple is behind. Apple has missed the boat. The AI revolution is here. And Apple has been left behind. That’s been the drumbeat for the past 18 months or so. Well, no longer.

Innovation is a funny word in the Apple context. Those who have been following the company for a long time know that it is rarely first to the market with anything. MP3 players, smartphones, tablets, and smartwatches all existed before Apple launched its own — but Apple pushed them into the mainstream. One day, Apple will lose that knack. But this might not be that day. It has two big new bets on the table: the Vision Pro and, now, its AI play.

The narrative for the last 18 months has been that Apple is behind on AI. The company could — justifiably — argue that it’s been building in the capacity for AI for years, with its neural engines. And that it’s been deploying selective machine learning throughout its ecosystem. But now, a big change is coming, called (with characteristic self-confidence) Apple Intelligence.

Solving the AI challenge

At WWDC this week, Apple introduced this next step, devoting over 40 minutes of its keynote video to it. But will it be the one to solve the two greatest challenges of AI? They are:

  • The interface problem. AI is a solution searching for a problem. For the past couple of years, it’s been defined by an inscrutable chat box where you have to slowly learn the magic incantations that will give you the results you want. Sure, it’s nowhere near as challenging as the old command line was, but it’s in the ballpark.
  • The hallucination problem. It is in the very nature of generative AI to make things up: the clue is in the name. It’s replicating patterns it finds in its training material — and occasionally, it follows the pattern of fake information.

The AI interface problem

This is the one area where you’d expect Apple to excel — and, at first glance, it has done so. Rather than presenting one of these inscrutable chat windows most of the time, the Apple Intelligence features become another contextual option in apps. Here’s an example:

Apple Intelligence in use in an email, showing an interface for AI features.

The mix of direct suggestions and a freeform window is a much more straightforward way of accessing AI features. It’s very Apple-y in execution and context. And the Image Playground app feels like a first pass at giving image generation an actual user interface, for those of us frustrated with crafting prompts via Discord to use Midjourney.

Apple’s new AI-based Image Playground in use.

Apple’s approach to situational tools with a specific focus feels very much like the Assistive Intelligence idea we’ve been exploring — particularly NEXT 2023 speaker Matt Webb’s experiments. (Matt has thoughts on what Apple announced, incidentally.)

Once again, Apple is not the first to this idea. Microsoft rebranded its whole AI offering under the Copilot name, and is also working on integrating it into its system. Who will do it better?

The AI hallucination problem

Every major AI company has been burnt by the hallucination problem — with Google just the latest. Has Apple found a way around this? Well, sort of. Apple isn’t focusing on generalised AI models at all; it’s focusing on highly personalised ones. It’s creating its own models that use your personal data to inform the suggestions they make. And it either does all the processing on your local devices, or hands tasks off to its private Apple Silicon-based cloud servers when more processing power is needed.

If it’s based on personal data in a constrained way, there’s far less chance of hallucinations.
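To make the architecture concrete, here’s a purely illustrative sketch of that routing idea — run a request on-device when the local model can handle it, otherwise hand it off to a private cloud tier. This is not Apple’s implementation; every name and threshold below is hypothetical.

```python
# Hypothetical sketch of on-device-first AI routing.
# Nothing here reflects Apple's actual code; it only illustrates
# the decision Apple describes: local processing by default,
# private cloud only when more capacity is needed.

from dataclasses import dataclass

@dataclass
class AIRequest:
    prompt: str
    complexity: int  # rough proxy for required model capacity, 1..10

ON_DEVICE_LIMIT = 4  # hypothetical cutoff for the local model

def route(request: AIRequest) -> str:
    """Return which tier would handle the request in this sketch."""
    if request.complexity <= ON_DEVICE_LIMIT:
        return "on-device"      # personal data stays on the device
    return "private-cloud"      # heavier tasks go to private servers

print(route(AIRequest("Proofread this email", complexity=2)))
print(route(AIRequest("Summarise a year of notes", complexity=8)))
```

The point of the design is the default: the privacy-sensitive path (local) is chosen first, and escalation to the cloud is the exception rather than the rule.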

That said, Apple is also shipping a ChatGPT integration. With your permission, your device can hand some tasks off to OpenAI’s service (in what Apple claims will be a privacy-protecting way), and that could lead to hallucinations. Apple CEO Tim Cook has admitted as much in an interview, although he claims that they are working to minimise this possibility. They’re worried enough that any use of ChatGPT comes with a standard warning, based on the demos during the WWDC keynote.

The ChatGPT warning on screen during Apple’s keynote.

A new way forward in AI

So, what Apple has actually presented us with is an interesting vision of the future of AI: not as a central monolithic service we all head to, but as a set of models constantly working on the data we’ve accumulated on our proliferating number of digital devices. Waaaay back in 2011, our NEXT Conference motto was Data Love. And that looks unusually prescient, even for us. We’ll love our personal data all the more, now that AI can turn its attention to making sense of it — and to building new things from it.

Innovation? Perhaps. But it’s certainly a useful refinement of what AI could mean in our personal lives.