Capability overhang, or the future of AI

The future of AI depends on what we imagine today. While we explore our new capabilities, imagination is the bottleneck.

When a new technology arrives, we’re excited because we don’t yet know what to do with it. First, we try to use it as a new tool for the same old work. But what else could we do? Can we do something that wasn’t possible before? There’s a phrase for this phenomenon: capability overhang. When ChatGPT arrived, people started using this phrase to describe the emergent technology. What can it teach us about the future of AI?

In a recent post, Tim Lee and Sean Trott made the effort to explain large language models (LLMs) – the inner workings of tools like ChatGPT – in plain language. Bottom line: we don’t yet know exactly how LLMs work their magic, and we may never fully know.

However, this doesn’t leave us with a permanent capability overhang. At some point, we’ll figure out what we can do with our brand-new toys. Interestingly enough, the tools themselves can help us with that. What ChatGPT and its ilk have achieved is accessibility: they’ve put the technology in the hands of a mainstream audience. This is an important step for every technology, new or not-so-new.

As soon as huge numbers of people can play with these toys, new use cases will emerge. Gradually, we’ll see the capability overhang decreasing, and the future of AI materialising. Will it keep up with the current hype around generative AI? It’s hard to know in advance. The fate of formerly hyped technologies – like blockchain, web3, or the metaverse – gives us cause to be sceptical.

You never know in advance

But the truth is: you never know in advance. Nobody knew that the PC, the web, or mobile would take off like they did. You could expect it, hope for it, or fear it, but time was needed to solve the capability overhang first. What to do with your IBM PC or Macintosh, your AOL or T-Online account, and your iPhone?

Matt Webb, who will speak at NEXT23 on September 21, recommends organisations adopt an approach of systemic, strategic wayfinding through prototyping and sharing. In a capability overhang, we’re constrained more by our imagination than by technology. So, we need to make space for creative, playful experimentation.

As soon as the first wave of enthusiasm – “I’m already in, that’s easy” – wears off, the quest for value starts. Capabilities are useless if nobody gets value out of them. Use them or lose them: we can’t afford to maintain capabilities we don’t use. Our AI capabilities are expensive, which adds to the pressure. Thus, the first road travelled is always automation – replacing expensive human labour with machines.

AI is coming for our jobs. At least, that’s what we fear – and we may be right to do so. On the other hand, we’ll see the productisation of AI capabilities. People will turn those into new products. Or new features of existing products. Probably both. The internet, with its massive global reach and mature technology stack, is a driving force of productisation. The tremendous cost of training and running LLMs is a limiting factor.

The price of AI

Through APIs, AI capabilities are accessible to many, and scalable to a certain degree. But, again, this comes at a price:

While we don’t know the true numbers, we’ve heard from reputable sources that the supply of compute is so constrained, demand outstrips it by a factor of 10(!) So we think it’s fair to say that, right now, access to compute resources — at the lowest total cost — has become a determining factor for the success of AI companies.

In fact, we’ve seen many companies spend more than 80% of their total capital raised on compute resources!

The silver lining is that, at some point, this is probably going to change:

The training cost of an LLM may look like a moat today, but open source models such as Alpaca or Stable Diffusion have shown that these markets are still early and may change quickly. Over time, the cost structure of the emerging AI software stack (see our previous post) may start looking more like the traditional software industry. 

Ultimately, this would be a good thing: History has shown that this leads to vibrant ecosystems with rapid innovation and lots of opportunities for entrepreneurial founders.

Our imagination will grow

So, we’re facing two major constraints: costs and imagination. While we can expect costs to decline at some point, our imagination will grow as we get to know and master our new tools. We’re at what Matt calls his

favourite point in the technology S-curve: we’re imagination bottlenecked.

Like, for the past 10 years, you mainly had to figure out the business-efficient path to get to wherever it is you wanted to go. Which means strategy decks, and post-its, and tests and iterations, and defined team roles. Which I’ve done my fair share of.

But now!

At this point in the S-curve the way you figure out what to do is that you get your hands dirty and make things, and you try stuff out in your sweaty palm which tells you more, and it all changes daily, and you talk widely and share widely to (a) make sure you’re not doing anything stupid and/or dangerous, and (b) generate more ideas in the scenius.

This is exciting, and it reminds me of the mid-1990s (for the web) or the late-2000s (for mobile and social). The future of AI is wide open, and it depends a lot on what we imagine right now.

On a side note, Matt also has some advice for agencies: he thinks “tech and also agency work has changed too much to rely on the processes of the past.” We need a new playbook. If past experience is any guide, I would expect a new breed of small, nimble agencies to come up with a new model. This would then spread to other agencies, and eventually to their clients, the big incumbents.

Big corporations always hire agencies to do what they cannot do themselves. Over time, those agencies grow and become almost as sclerotic as their clients, thus making room for a new breed of young creative shops.

Why should it be any different this time?

Photo on Pixabay