Exploring the frontier of new interfaces

Where is AI going next? How far and fast will biocomputing develop? And what will the next wave of interfaces look like?

A panel discussion from NEXT24, looking at new interfaces, embodied AI — and what the future holds.

These are live-blogged notes from a session at NEXT24 in Hamburg. As such, they are prone to error, inaccuracy and lamentable crimes against grammar and syntax. Posts will be improved in the coming days and this note updated.


What’s coming in the next five years?

Timmy finds planning 18 months ahead hard, let alone five years, and moving from corporate life, with its deep pockets and long timescales, to startup life is challenging. You can teach a four-year-old child to fold t-shirts, but you wouldn’t let them out alone in Hamburg — the amount of sensor input they have to process to stay safe is insane. The same is true of many robots. You build them in a sandbox.

To navigate the world, we have to recall some core memories, and ignore others. That’s what we do. It’s hard to say how long that will take robots. Moving from the lab to the real world is the hardest step.

Maryam says that most of the work happening in brain interfaces is still in controlled environments — mainly labs. In the lab, we use four or five inputs. Outside the lab, you can have hundreds, or thousands. She can’t see us solving that in the next five years.

MRI gives us a very detailed picture of brain activity, but if you want quick feedback on brain response, you really need electrodes implanted in the brain tissue, or something right on the surface of the skin. Some of the caps they use at the moment cost €30k to €40k. That will come down in the coming years.

What are the technical challenges ahead?

Andy’s work fits into the biocomputing sub-strand. They’ve pivoted towards drug discovery for funding reasons — but it’s a very, very exciting area. Our brains are intrinsically learning systems: there’s no real difference between thinking and learning. Building systems like that will be a real challenge for the next few years. At one end of the scale, you’ve got Cortical Labs with its neural tissue and electrodes; at the other end, CPUs and GPUs. But it’s possible to imagine hybrids.

Maryam thinks there’s plenty of progress within the domain of embodied virtual agents. LLMs have made them much more efficient in their conversations. But when we embody them, we raise expectations: they have to be human-like, or we find them eerie. Still, LLMs will enhance them, and we’ll see more and more support work being done by embodied agents: agents with avatars.

They need to have perceptual awareness, says Timmy, to give an answer based on context and perception. A lot of LLMs are just trained on end results: the whole web is end results. There’s not a lot of data on reasoning. The moat over the next five years is reasoning data — how people got to their end results.

Look at Minecraft: it’s a complex game that requires reasoning and planning. Kids are good at both tactics and strategy at the same time, but AIs are not. We humans have great mechanisms for filtering what to remember and what to forget. Dreams are essentially us cleaning our sensors, and keeping what we need.

The future of AI models

There’s something very intriguing going on, says Andy. The Internet is already being polluted by AI slop, so OpenAI’s reasoning model is being built with reinforcement learning instead. It’s not relying on the "fossil fuel" of existing human-written tokens, which is a very interesting development.

When we drive in a familiar place, we use our existing mental models; when we arrive in a new city, we adapt, says Timmy. That’s the next step for AI: adapting models to new circumstances.

Q&A

Where do Andy’s brain cells come from? They’re grown from skin cells, via a complex process. The early work was done on well-known cell lines. Then they used donated cells from people around Cortical Labs. Human-derived cells were better at playing Pong than rat-derived cells…

Are there more types of brains than we think? Maryam points out that the storage of neural data could well become a big ethical issue: could it be used to diagnose Alzheimer’s risk, for example? But in general, the answer is yes. The brain cells themselves are very similar, but the synapses, the connections between them, are highly individual. Each new brain looks very different to their BCIs, so the systems need to be calibrated for each person.

There’s a lot we still don’t know about how the brain works. There hasn’t been much research since the 70s on scents and personality, points out Timmy. Studying that is super interesting because you see the nuances. The human in the loop means it can always fuck up. What we want AI to achieve is the collective wisdom of the right answer. But is there a right answer?

What’ll happen in 50 years?

Maryam thinks in 50 years we will have true cyborgs. Technically, a war veteran with prosthetics already is one. Timmy thinks we’ll have crashed everything in 50 years and nature will take over — but Andy thinks we’ll keep on building.