Brains, robots, and interfaces

The brain is energy-hungry. But because it uses electrical signals, we can develop digital interfaces – even for human-robot interaction.

What we call mental energy isn’t physical energy in the strict sense of the word. However, the brain still runs on energy provided by adenosine triphosphate (ATP). It is an energy-hungry organ, consuming about 20 to 25% of our body’s energy. The brain processes and transmits information through electrical signals – and that requires lots of energy. This electrical signalling is also what makes brain-computer interfaces (BCIs) possible.

If we can measure electrical signals, we can also process them. This, in principle, is the gateway between human brains and computers. There are, of course, several prerequisites. First, we need to solve the measurement problem. There are two main approaches:

  • non-invasive, such as electroencephalography (EEG)
  • invasive, such as brain implants.

Non-invasive measurement has the obvious benefit of avoiding surgery, but the drawback of a lower signal-to-noise ratio: it requires hardware to amplify the signal and sophisticated signal-processing software. Brain implants, like the Neuralink implant, and other invasive technologies offer more accurate measurements, at the price of surgery.
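
What that signal-processing step involves can be sketched in a few lines: band-pass filter a raw EEG channel to the frequency range where brain rhythms live, and remove mains interference. The sampling rate, filter bands and synthetic input below are illustrative assumptions, not the parameters of any particular BCI system.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 250  # assumed sampling rate in Hz, typical for consumer EEG headsets

def condition_eeg(raw: np.ndarray, fs: int = FS) -> np.ndarray:
    """Band-pass a raw EEG channel to 1–40 Hz and notch out 50 Hz mains noise."""
    # Band-pass: keep the range where most EEG rhythms live.
    b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw)
    # Notch filter: suppress power-line interference (50 Hz in Europe).
    b_notch, a_notch = iirnotch(50.0, Q=30.0, fs=fs)
    return filtfilt(b_notch, a_notch, filtered)

# Synthetic example: a 10 Hz 'alpha' rhythm buried in noise and 50 Hz hum.
t = np.arange(0, 4, 1 / FS)
raw = (10e-6 * np.sin(2 * np.pi * 10 * t)
       + 5e-6 * np.sin(2 * np.pi * 50 * t)
       + 20e-6 * np.random.randn(t.size))
clean = condition_eeg(raw)
```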

The second issue is signal processing. If we can measure brain activity, what does it mean? How does it relate to the user’s intent? Processing starts by extracting features of brain activity, which are then classified into patterns and finally translated into commands. This way, users can move a cursor, a wheelchair, an exoskeleton or a robotic arm. Brain-computer interaction also works the other way around: brain-computer interfaces can stimulate neural activity.
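
A minimal sketch of that feature-classify-translate pipeline might look like the following: compute band-power features from short EEG windows, train a standard classifier on labelled examples, and map each predicted class to a command. The two-class setup, frequency bands and command mapping are illustrative assumptions rather than a description of any specific system.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed EEG sampling rate in Hz
rng = np.random.default_rng(0)

def band_power(channel: np.ndarray, low: float, high: float) -> float:
    """Average spectral power of one EEG channel between low and high Hz."""
    freqs, psd = welch(channel, fs=FS, nperseg=FS)
    return float(psd[(freqs >= low) & (freqs <= high)].mean())

def extract_features(window: np.ndarray) -> np.ndarray:
    """Features per window: mu (8–12 Hz) and beta (13–30 Hz) power for each channel."""
    return np.array([band_power(ch, 8, 12) for ch in window]
                    + [band_power(ch, 13, 30) for ch in window])

# Stand-in data: 40 labelled training windows of 8 channels x 2 seconds each.
training_windows = rng.standard_normal((40, 8, 2 * FS))
training_labels = rng.integers(0, 2, size=40)

# 1. Feature extraction, 2. classification ...
X_train = np.array([extract_features(w) for w in training_windows])
clf = LinearDiscriminantAnalysis().fit(X_train, training_labels)

# ... 3. translation of a new window into a command.
COMMANDS = {0: "cursor_left", 1: "cursor_right"}  # illustrative mapping
new_window = rng.standard_normal((8, 2 * FS))
command = COMMANDS[int(clf.predict([extract_features(new_window)])[0])]
print(command)
```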

Last year, Nature Electronics declared brain-computer interfaces its technology of the year. These interfaces can now translate neural signals into speech at speeds close to normal conversation. They use a recurrent neural network and a language model – things we are used to calling machine learning and artificial intelligence (AI).
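
The recurrent-network part of such a speech decoder can be sketched very compactly: a sequence of neural feature vectors goes in, per-timestep phoneme probabilities come out, and a separate language model (omitted here) turns the phoneme stream into words. The feature dimensions, layer sizes and phoneme inventory below are made-up assumptions, not the architecture of any published system.

```python
import torch
import torch.nn as nn

class NeuralSpeechDecoder(nn.Module):
    """Toy RNN decoder: neural features in, phoneme logits out."""

    def __init__(self, n_features: int = 256, n_phonemes: int = 40, hidden: int = 512):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.readout = nn.Linear(hidden, n_phonemes + 1)  # +1 for a CTC-style blank token

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, time, n_features) -> logits: (batch, time, n_phonemes + 1)
        hidden_states, _ = self.rnn(features)
        return self.readout(hidden_states)

decoder = NeuralSpeechDecoder()
dummy_features = torch.randn(1, 100, 256)  # 100 time steps of neural features
phoneme_logits = decoder(dummy_features)   # a language model would decode these into text
```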

Brain interfaces for virtual and physical reality

On the hardware side, progress is being made with implants that communicate wirelessly, and some researchers see the future in optical brain-machine interfaces. Most of this is still in the medical realm. BCIs can help impaired patients regain mobility or communicate with the world. Stroke patients can recover motor functions and improve their quality of life.

In addition to medical applications, BCIs are now also being integrated into technologies such as virtual reality (VR) and games. Users can navigate virtual spaces (the thing we briefly called the metaverse) or interact with digital objects through their thoughts. With BCI-integrated physical robots, users can also move through physical space while seeing through the robot’s eyes. As with cyborgs, the boundaries between humans and machines blur.

Closer to practical use are applications in the field of mental-state monitoring. Cars already have fatigue sensors today. In 2015, Jaguar Land Rover made headlines with its Mind Sense research, which aimed to monitor the driver’s mental state through brain-computer interfaces:

Jaguar Land Rover is investigating a method already used by NASA to develop a pilot’s concentration skills and also by the US bobsleigh team to enhance concentration and focus.

This detects brainwaves through the hands via sensors embedded in the steering wheel. Because the sensing is taking place further away from the driver’s head, software is used to amplify the signal and filter out the pure brainwave from any background ‘noise’. Jaguar Land Rover is currently conducting user trials to collect more information on the different brainwaves identified through the steering wheel sensors and will involve leading neuroscientists in the project to verify the results.

This is an example of a passive BCI, as opposed to an active or reactive one. Users neither have to act deliberately (active) nor are they stimulated externally (reactive); the system simply monitors their state. Passive BCIs are useful for human-robot collaboration and other forms of interaction. This is the field of Maryam Alimardani, a speaker at NEXT24 this September.
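
As a rough illustration of what a passive BCI computes in the background, the sketch below derives a simple alertness score from the ratio of fast to slow EEG rhythms. The band choices and the threshold are illustrative heuristics, not the algorithm behind Mind Sense or any other product.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz

def band_power(channel: np.ndarray, low: float, high: float) -> float:
    """Average spectral power of one EEG channel between low and high Hz."""
    freqs, psd = welch(channel, fs=FS, nperseg=FS)
    return float(psd[(freqs >= low) & (freqs <= high)].mean())

def alertness_index(eeg_window: np.ndarray) -> float:
    """Ratio of beta (alert) to theta-plus-alpha (drowsy) power; higher means more alert."""
    beta = band_power(eeg_window, 13, 30)
    slow = band_power(eeg_window, 4, 12)
    return beta / (slow + 1e-12)

window = np.random.randn(2 * FS)     # stand-in for two seconds of a filtered EEG channel
if alertness_index(window) < 0.5:    # illustrative threshold
    print("Driver may be drowsy – trigger an alert")
```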

How robots can behave better

In this context, passive BCIs are intended to address an apparent shortcoming of robots in their interaction with human beings: without these interfaces, robots don’t know anything about the emotional state of the humans they interact with. In a work environment, is the person overworked? Are they paying attention? Are they anxious? Are they making mistakes?

If robots know at least something about human emotions, workload, attention levels or errors, they can behave in ways that are more useful and relatable to humans. Of course, AI has to play a role here. However, as Maryam Alimardani stated in a 2020 paper she co-authored,

despite the great effort of AI community in developing several classification models for EEG-based emotion and cognitive state prediction, real-time incorporation of these models in a closed-loop interaction with physical robots are yet not adequately explored.

Closed-loop systems are generally much more demanding than open-loop systems. The difference is feedback:

In a closed-loop control system, information flows around a feedback loop from the process to the sensor to the transmitter to the controller to the actuator and back to the process. This measure-decide-actuate sequence – known as closed-loop control – repeats as often as necessary until the desired process condition is achieved. Familiar examples include using a thermostat controlling a furnace to maintain the temperature in a room or cruise control to maintain the speed of a car.

In the case of closed-loop human-robot interaction, the robot can adapt its behaviour to the cognitive and affective states of the human user. However, the design principles for this type of interaction are still in their infancy. If the journey of interaction design – from user-centric and consumer-centric to human-centric and life-centric – is any indication, the UX design of human-robot interaction has a long way to go.
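
A closed loop of this kind follows the same measure-decide-actuate cycle described above: measure an estimate of the user’s mental workload, decide whether the robot should slow down or speed up, and actuate that change. In the sketch below, the workload estimate, the thresholds and the robot interface are all hypothetical placeholders.

```python
import random
import time

def estimated_workload() -> float:
    """Placeholder for a passive-BCI workload estimate between 0 (idle) and 1 (overloaded)."""
    return random.random()

class CollaborativeRobot:
    """Hypothetical robot whose working pace can be adjusted."""
    def __init__(self) -> None:
        self.pace = 1.0  # relative speed, 1.0 = nominal

    def set_pace(self, pace: float) -> None:
        self.pace = max(0.2, min(1.5, pace))  # clamp to a safe range

robot = CollaborativeRobot()
for _ in range(10):
    workload = estimated_workload()       # measure: read the passive BCI estimate
    if workload > 0.7:                    # decide: the user seems overloaded
        robot.set_pace(robot.pace - 0.1)  # actuate: slow the robot down
    elif workload < 0.3:                  # decide: the user has spare capacity
        robot.set_pace(robot.pace + 0.1)  # actuate: speed it back up
    time.sleep(0.1)                       # wait, then repeat the loop
```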

Not least, there are the ethical questions raised by access to human thoughts. If not managed carefully, this could lead to surveillance capitalism on a completely different level. The idea of letting Elon Musk physically into your brain would only be the tip of the iceberg. We are dealing here with an area that has strong dystopian vibes.

In 1993, Vernor Vinge published a paper beginning with the following abstract:

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

Concerning the timeframe, he stated:

I’ll be surprised if this event occurs before 2005 or after 2030.

We won’t – and can’t – discuss the singularity here, so I’ll leave you with this reminder that the stakes are high.