NEXT23: AI as Assistive Intelligence

In our first exploration of the emergent themes of NEXT23, we look at the AI trailblazers, who have been exploring the intersection of creativity and AI for years.

It’s unusual to have a NEXT conference dominated by one technology — and even more unusual for that technology to be, at first glance, only obliquely related to the core theme of the conference. But that’s one of the core stories of NEXT23: we just couldn’t avoid the suddenly ubiquitous technology of 2023 — AI.

Unlike the technologies that caught people's imaginations in recent years, such as blockchain or the metaverse, the AI examples at NEXT23 were practical, and came from people who had worked with AI for long periods of time. Wait: how could people have been using a new technology for long periods of time?

Well, it’s worth remembering that machine-learning-derived AI is not new: the big change over the last year has been in both the quality and, importantly, the accessibility of the models. Apple and Google have been building AI into their products for years, doing things like photo recognition. But putting an accessible, chat-based front end on a model, and allowing people in general to build new things with them, has been transformative.

AI inspiring art

Many of the examples came from the realm of art rather than from conventional technologists. And those creators, who had been using these models before they became so accessible, gave us some pointers. Take, for example, Harry Yeff. He's been exploring what AI can do artistically for years now, and in his talk he gave us multiple examples. The most interesting was his Second Self, an AI trained on his own voice to duet with. He used that AI in an oppositional way: it forced him to get better, so his work stayed ahead of the AI. He wasn't replaced by his AI; he used it to make himself better.

And then, there was his recent collaboration with the Leipzig Opera House, where AI took the role of creative inspiration and mentor. As he put it:

People don’t embrace these emerging AI tools because they are afraid of their replacement; the loss of human creativity.

His was an exercise in using AI to enhance the human creative process, not supplant it. This was a theme taken up by our host, LJ Rich, as she explored how AI can act as a creative co-pilot in music.

What’s a human’s training set?

But she also suggested a second lesson we can learn from AI. These generative AI systems improve dramatically as their training sets improve. The more diverse the information fed into them, the wider the range of outputs they can produce. They still can't be creative in the true, inspired sense. But with more training data, they can produce more compelling work.

Could that hold true for us, as well? LJ Rich's point was that her sense of music, and ours, is defined by the music we've been exposed to. It is, if you like, our own personal training set. If creativity is what sets us apart from AI, which is, in essence, an imitation machine, then we would be well advised to broaden our own training sets.

Perhaps the time savings that AI could offer should be redeployed in expanding what we read, listen to and watch.

Team AI

That said, Matt Webb was the standard-bearer for the more traditional school of technologist. But even he talked about “software sketches”: playing with ideas in prototypes to think through how AI could be used in our day-to-day lives. His final example was of a clustered group of AI co-workers, each specialising in particular skills or interactions.

One of his sketches is an AI that paints only stars, and is always looking for stars to paint. Another is a poet. Another likes drawing spaces. Suddenly, you have AI teammates.

He also introduced us to the idea of a “capability overhang”: we have a new technology whose potential we've only begun to explore. And, the idea suggests, it will take us years to figure out how best to use it, just as the web has developed over 20 years.

The Double Capability Overhang

But what intrigued me was the sense of a double capability overhang: by exploiting the latent capabilities within AI, we challenge ourselves not only to produce different forms of art, but also to produce better art, to out-compete the machine.

These are the sorts of issues explored, within the narrower niche of fiction, in a recent WIRED piece, Confessions of a Viral AI Writer:

A thought experiment occurred to me at some point, a way to disentangle AI’s creative potential from its commercial potential: What if a band of diverse, anti-capitalist writers and developers got together and created their own language model, trained only on words provided with the explicit consent of the authors for the sole purpose of using the model as a creative tool?

She goes on to dismiss the idea as prohibitively expensive. But would it be? That, of course, depends. Currently, the AI companies provide us with the means to use their models. But what if someone moved up the value chain, and provided user-friendly tools for creating our own models? Tools to do what we've seen people like Harry Yeff do?

What art could humanity create then?

AI is in its infancy. Our reflexive fear is useful, but it shouldn't close our minds to AI's potential. And we may need to acknowledge that we will have to fight to see that potential realised.