Thomas Bendig: making user interfaces more human
Humans use multiple channels to communicate with others - but we only use one channel at a time when we communicate with devices. It's time to change that.
WARNING: Liveblogging – prone to error and inaccuracy. Will be updated/improved over the next 48 hours.
Thomas Bendig is CEO of Fraunhofer ICT-Group.
We need new user interfaces because we constantly add new functionality to our phones. In seven years, the iPhone has become one of the smallest phones on the market as other devices have grown. But, in most cases, bigger screens just lead to bigger buttons – or more of them. What we need is simplicity and an intuitive user experience.
People like simplicity because they don’t want to have to learn how to use devices. In many cases, they want to focus their efforts and cognitive energy on the situation, not on the device itself.
Human communication is efficient – we use many parallel information channels: gestures, speech, gaze and facial expressions, for example. But what’s simple for us is hard for computers. Devices are becoming more and more powerful. In the past, the CPU was the limiting factor on new services. But as devices connect to the cloud, we have nearly unlimited computing power – so the limiting factor becomes the user interface.
The wrong direction
One reason is that we design the functions first, and then just wrap the user interface around them. This is bad – for the user, the UI is the app. You should develop the UI together with the functionality – and with the customer.
We have multiple input channels for computers, but they’re all used separately, each optimised for a specific type of input. The user has to choose between them.
So, start with the user in mind, not the functionality.
For example, imagine dictating an article to a computer. The phone rings, and you turn away to answer it. The computer recognises that you are no longer looking at it, pauses the dictation, and resumes when you look back. If we can use a combination of channels, we get much better experiences.
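The gaze-plus-speech behaviour described above can be sketched as a tiny state machine. This is a hypothetical illustration, not any real product's API: `on_gaze` and `on_speech` stand in for events a gaze tracker and speech recogniser would deliver.

```python
from enum import Enum, auto

class DictationState(Enum):
    LISTENING = auto()
    PAUSED = auto()

class GazeAwareDictation:
    """Combines two channels: speech is only transcribed while gaze
    says the user is engaged with the screen (hypothetical sketch)."""

    def __init__(self):
        self.state = DictationState.LISTENING
        self.transcript = []

    def on_gaze(self, looking_at_screen: bool):
        # Gaze channel drives the pause/resume transitions.
        if looking_at_screen:
            self.state = DictationState.LISTENING
        else:
            self.state = DictationState.PAUSED

    def on_speech(self, words: str):
        # Speech channel: only keep words heard while engaged,
        # so the phone call doesn't end up in the article.
        if self.state is DictationState.LISTENING:
            self.transcript.append(words)

session = GazeAwareDictation()
session.on_speech("User interfaces should adapt to people.")
session.on_gaze(False)                  # user turns to answer the phone
session.on_speech("Hello? Yes, I can talk.")  # ignored
session.on_gaze(True)                   # user looks back
session.on_speech("As I was saying...")
```

The point is that neither channel alone is enough: speech supplies the content, gaze supplies the intent to dictate.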
Once you have context sensitivity, you can start reducing complexity by adapting the user interface to the situation. If the device understands both you and your situation, it can show you only what’s most useful to you.
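One minimal way to picture "show only what's useful" is a filter over the device's full action set, keyed on context. The contexts and rules here are invented for illustration:

```python
def visible_actions(context: dict) -> list[str]:
    """Reduce a device's full action set to what fits the situation.
    The context keys ('driving', 'low_battery') are hypothetical."""
    if context.get("driving"):
        # Eyes-busy situation: voice-first actions only.
        actions = ["voice_call", "navigation"]
    else:
        actions = ["email", "browser", "voice_call"]
    if context.get("low_battery"):
        # Drop power-hungry actions when the battery is nearly gone.
        actions = [a for a in actions if a != "browser"]
    return actions

print(visible_actions({"driving": True}))
print(visible_actions({"driving": False, "low_battery": True}))
```

The design choice is that the full functionality still exists; the interface simply foregrounds the subset the situation calls for.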
Beyond that, we can move from controlling one device at a time, to having a smart assistant or gateway which can then talk to multiple devices to get the job done.
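The gateway idea above – one request fanned out to whichever devices can handle it – can be sketched like this. The device names and capability sets are made up for the example:

```python
class Device:
    def __init__(self, name: str, capabilities: set[str]):
        self.name = name
        self.capabilities = capabilities

    def perform(self, task: str) -> str:
        return f"{self.name}: {task} done"

class Assistant:
    """A single gateway: the user states one intent, and the assistant
    routes it to every registered device that can help (hypothetical)."""

    def __init__(self, devices: list[Device]):
        self.devices = devices

    def request(self, task: str) -> list[str]:
        return [d.perform(task) for d in self.devices
                if task in d.capabilities]

lamp = Device("lamp", {"lights_on", "lights_off"})
speaker = Device("speaker", {"play_music"})
assistant = Assistant([lamp, speaker])
print(assistant.request("lights_on"))
```

The user talks to one interface; the assistant, not the user, worries about which device does the work.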
Siri is just the start of interacting with our devices by speech. We could start seeing more faces on our devices, and interacting with them the way we do with humans.