End of the Screen: Exploring New Interfaces
What happens when our digital interfaces move into our glasses – or our clothes? A panel discussion at NEXT 25 sought to explore just that.
Screen-based interfaces have dominated digital interaction, but that dominance is cracking. What happens when we can interact with our voice? When machine vision sits in glasses on our faces that can talk to us? And when our clothing becomes smart? A panel of innovators in the wearables space at NEXT25 sought answers to those questions.
Panel:
These are live-blogged notes from a session at NEXT25 in Hamburg. Posts will be improved over the next few days.

Mario: Once I was diagnosed, six years ago, I started reading about my condition. I couldn’t understand social cues. I had no idea when to shut up. What could I do about this? Mixed reality glasses seemed the perfect solution. We started capturing facial expressions and translating them into emotions, so people could understand the reactions they were seeing in front of them. This is a neurodivergent solution for a neurodivergent problem.
The NUL system is open source. There’s a Discord channel where users chat, and share what they’ve discovered and written for it.
Iara: We are building an app for smart glasses: Ikkio. We’ve focused on the blind because they have an immediate need. There’s a blind developer on the team, who helps us stay connected with the community.
She demonstrates by putting on a pair of glasses, and asking them what’s in front of her. It describes an audience facing her in an auditorium.

This is for a blind person going into a supermarket, looking at products of identical size and shape. We are building an AI system that’s initially for blind people, but which could be useful to everyone eventually. But we need to focus somewhere to start with. The voice output is important — a human-like voice can make a big difference.
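The workflow Iara describes — capture what's in front of the user, have a vision model describe it, and read the answer aloud in a human-like voice — can be sketched roughly as below. This is a hypothetical illustration, not the Ikkio app's actual code: the `describe_scene` and `speak` functions are stand-ins for a real vision-language model and text-to-speech engine.

```python
# Hypothetical sketch of a glasses-style scene-description pipeline:
# frame in, spoken description out. The model and TTS calls are stubbed;
# a real app would swap in actual camera, VLM, and speech services.

from dataclasses import dataclass


@dataclass
class Frame:
    """A captured camera frame (stub: just an identifier)."""
    source: str


def describe_scene(frame: Frame, focus: str = "general") -> str:
    """Stand-in for a vision-language model call.

    A real implementation would send the frame to a VLM and return
    its description; here we return a canned string for illustration.
    """
    return f"Description of {frame.source} (focus: {focus})"


def speak(text: str) -> str:
    """Stand-in for a text-to-speech engine; returns what it would say."""
    return f"[voice] {text}"


def assist(frame: Frame, question: str) -> str:
    """Glue logic: user's question in, spoken description out."""
    # Steer the description towards products for shopping questions,
    # matching the supermarket example from the panel.
    focus = "products" if "product" in question.lower() else "general"
    return speak(describe_scene(frame, focus))


print(assist(Frame("supermarket-shelf"), "Which product is this?"))
```

The point of the sketch is the shape of the loop, not the stubs: the hard parts in practice are latency, model accuracy, and the quality of the synthesized voice that Iara highlights.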

Anouk: I make smart dresses. I like to see them as exoskeletons. They collect and process data from the space around you. My robotic spider dress has spatial sensors that allow the dress to react and “protect” the wearer. A smoke dress senses how many people are around it, and emits more smoke as the crowd grows. A cocktail dress that… makes cocktails. It’s fashion as an interface: social, playful and emotional. We’ve collaborated with Audi to use automotive sensors. Stage wear is another area of exploration.
We’re also exploring prosthetics. We’ve built a “smoke leg” and legs that can make music, for example. There’s lots to explore here.
I’m a big supporter of open source — making cool things and sharing them with the community. This is an example of an open-source dress you can make yourself:

The dress is a collaboration with the multimedia artist Izo, who was modelling it on stage.
The impact of AI on new interfaces
Iara: It’s having a huge impact on coding. I think it will become more important to focus on what people want, rather than just sitting and coding for ages. Vibe coding makes it much easier to prototype quickly and validate an idea. It’s a real accelerator.

Mario: Our system generates continuous contextual data, which is the next step towards AGI, and that scares people. We’ve been talking to Google about it.
Anouk: When I started 20 years ago, the wearer carried a really big computer on their back. Now it fits on a chip. New opportunities are emerging at the nexus of tech and art, driven by advances in technology, including AI. That means new room to experiment.