A new interface paradigm

Computing is expanding in two dimensions. Will conversational AI lead to a new interface paradigm? And what does that mean?

From its humble beginnings, the computer has been a universal machine – designed to compute everything that’s computable. However, using such a machine poses the question of how to tell it what to do. This is the question of the interface, and the corresponding paradigm. It’s remarkable how little has changed over the decades. Are we on the cusp of a new interface paradigm?

Recent breakthroughs in conversational AI have led to this suspicion. UI expert Jakob Nielsen touts “the third user-interface paradigm in computing history”. After batch processing (remember punched cards?) and command-based interaction (everything from command lines and terminals to graphical user interfaces), we now enter the next paradigm: intent-based outcome specification, in Nielsen’s parlance.

With the new AI systems, the user no longer tells the computer what to do. Rather, the user tells the computer what outcome they want.

This is not so much about any particular interface manifestation – like, for example, chatbots. It’s more about the underlying principles – the paradigm. Jakob Nielsen is pretty clear that current chatbot-like interfaces have severe drawbacks and aren’t even close to the quality we’ll have in only a few years. To put it another way, this means the great interface shift (cf. Accenture Life Trends 2024) is yet to come.

Under the command-based interaction paradigm, users have to master the system, learning commands that the computer then executes. Slick, intuitive interfaces don’t change this – they only make it easier for the user. The user is in full control. Under the intent-based paradigm, control shifts to the computer: the machine figures out what to do, and then does it.
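The difference can be sketched in a few lines of code. The sketch below uses a toy thermostat domain; all names are illustrative, not from any real API:

```python
def command_based(state: dict, command: str, value: int) -> dict:
    """Command paradigm: the user picks the operation; the system obeys."""
    if command == "set_heating":
        state["heating"] = value
    elif command == "set_cooling":
        state["cooling"] = value
    return state

def intent_based(state: dict, desired_temp: int) -> dict:
    """Intent paradigm: the user states the outcome; the system chooses the steps."""
    current = state["temperature"]
    if current < desired_temp:
        state["heating"] = desired_temp - current  # the system decides to heat
    elif current > desired_temp:
        state["cooling"] = current - desired_temp  # the system decides to cool
    return state

room = {"temperature": 18, "heating": 0, "cooling": 0}
# Command-based: the user must already know which command achieves the goal.
command_based(dict(room), "set_heating", 3)
# Intent-based: the user only states the goal; the system works out the rest.
intent_based(dict(room), 21)
```

In the first function, the user carries the burden of knowing the right command; in the second, that burden – and the control that comes with it – moves to the machine.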

The expansion of computing

Command-based interaction has brought us a long way. Over the past decades, we’ve computerised – or digitised – almost every aspect of our individual and collective lives. Eventually, everything a computer can do will be done by a computer. Recent advances in machine learning, known colloquially but slightly inaccurately as AI, have increased the level of autonomy these systems have – or at least seem to have.

This has enabled computers to do tasks we thought only human beings could do. But that isn’t an entirely new phenomenon. It’s the flip side of Tesler’s Theorem:

“AI is whatever hasn’t been done yet.”

What computers can do is continuously expanding, but this comes at a price. We have given up control over parts of our systems – or rather, entrusted them with control tasks that computers are better at. Now, control shifts further. Machine learning has given us infinite interns. They can do a lot that, in the past, only human beings could do.

The scope of what’s computable has expanded, and continues to grow. This expansion poses serious questions. Do we want to model our world so that it’s completely computable? Do we want to live as machines? Who is in control – the user or the system?

Jakob Nielsen expects that the second paradigm, and graphical user interfaces in particular, will survive and not be replaced by the third paradigm:

Future AI systems will likely have a hybrid user interface that combines elements of both intent-based and command-based interfaces while still retaining many GUI elements.

How exactly this plays out remains to be seen. But we can safely anticipate more interface innovation, be it under the old or the new paradigm, or a combination of both. Make sure to check this post from 2023, written before Apple announced its newest product, to get a glimpse of what may be coming in speech, AR/VR, or even brain interfaces.

The second axis

So far, we’ve been talking mainly about software. But there is also imminent change on the hardware side, with the Apple Vision Pro launching early next month. From what we can tell so far, this device still sticks to the old, command-based paradigm. But it extends computing into the physical space around users and their devices.

We can see this as the second axis of expansion. On the first axis, computing is expanding into the realm of human capabilities. On the second axis, computing is expanding into the physical space.

The first axis is about intelligence, and it doesn’t really matter whether we add “artificial” or even “artificial general” as a prefix. Remember, AI is whatever hasn’t been done yet. As soon as it’s done, it just becomes software.

The second axis is about physical space, and that is another large field. Smart devices, the internet of things, cars (self-driving or not), and now the Apple Vision Pro could pave the way to all kinds of new spatial computing experiences.

And then, there are already products that combine advances on both axes. This was the theme of this year’s CES. AI is coming for your hardware. New kinds of devices like the Rabbit R1 are appearing. Car makers are adding intelligence to their products.

The car is already a computer on wheels. New business models are slowly emerging, but the redefinition of the car is still a work in progress – something we wrote about in 2018:

The digital car will be defined differently, as for example the launch of Byton at CES last week illustrated. It’s about the user experience, the user interface, the voice controls, the sensors, the network and its corresponding network effects, the platform model and the AI systems, maybe even VR/AR/MR and the immersive experience a car ride will provide.

The autonomous car is the culmination of the new, intent-based paradigm (since it drives itself), and of spatial computing (since it moves around). On both axes, things are moving fast. No wonder the user experience and the user interface (UX/UI) are not yet ready.

Both the entry into a new interface paradigm and the expansion of computing will bring a lot of changes. We don’t know where we’ll be a few years from now, or even forty years from now – the time that has passed since Steve Jobs unveiled the Macintosh. The interface paradigm that still governs the Mac today has been remarkably stable. Will we be able to say the same about the new paradigm?