Maryam Alimardani: the next frontier of human cognition

Bridging the brain and the computer is the next step in interface design — and AI is helping make it happen.

Maryam Alimardani is an associate professor at Vrije Universiteit Amsterdam working in the domains of brain-computer interfacing (BCI) and human-computer interaction (HCI). She holds a PhD from the Intelligent Robotics Laboratory, Osaka University, Japan. In her research, she develops BCI systems for personalised interaction with technology, particularly for the purpose of adaptive training and learning.


These are live-blogged notes from a session at NEXT24 in Hamburg. As such, they are prone to error, inaccuracy and lamentable crimes against grammar and syntax. Posts will be improved in the coming days and this note updated.


Imagine a paralysed person who wants to move their wheelchair with their mind. The ambition of human-machine interfaces is to facilitate exactly that, and the technology we’re developing to do it could have implications for everyone.

So, how does the technology work? First, we need to collect signals from the brain. We can do that either invasively, with an implanted chip, or non-invasively, with a wearable cap that picks up brain signals through the scalp.

Once those brain signals are collected, AI comes into play, interpreting them and then giving meaningful feedback to the user. This is the BCI (or neurofeedback) loop. Neuroscientists have spent years studying spikes in neural activity and correlating them with specific actions and mental states. Now they’re training machine learning models on what they know. Deep learning then improves the performance of these systems further by minimising the involvement of human researchers: it makes the associations on its own.
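In code, that loop might look something like the minimal sketch below: filter a window of EEG, extract band-power features, classify them with a model trained on earlier recordings from the same user, and turn the prediction into feedback. Every detail here (sampling rate, frequency band, features, classifier, and the simulated signals) is an illustrative assumption, not the specific system described in the talk.

```python
# Illustrative sketch of a BCI/neurofeedback loop; all parameters are assumed.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.linear_model import LogisticRegression

FS = 256  # assumed sampling rate in Hz

def bandpass(eeg, low=8.0, high=30.0, fs=FS):
    """Keep the 8-30 Hz band often used for motor-imagery BCIs."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def band_power(eeg, fs=FS):
    """Mean spectral power per channel: a simple, classic feature."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)
    mask = (freqs >= 8) & (freqs <= 30)
    return psd[:, mask].mean(axis=1)

# Stand-in calibration data recorded from one user (individual differences
# are exactly why per-user calibration is needed; see the challenges below).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((40, 8, FS * 2))  # 40 trials, 8 channels, 2 s
y_train = rng.integers(0, 2, 40)                # 0 = rest, 1 = imagined movement
features = np.array([band_power(bandpass(trial)) for trial in X_train])
clf = LogisticRegression().fit(features, y_train)

# The closed loop: classify each new window and turn it into feedback.
window = rng.standard_normal((8, FS * 2))       # stand-in for a live EEG window
pred = clf.predict(band_power(bandpass(window)).reshape(1, -1))[0]
print("move wheelchair" if pred == 1 else "stay still")
```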

Brain-computer interfaces in use

This sort of technology can be used for more than helping people regain mobility. For example, it can support stroke rehabilitation: the AI can provide therapeutic feedback in the same way that physiotherapists have done manually in the past.

But what does that mean for healthy folks? It can be used for gaming, entertainment, marketing, education, sports research…

Take a complex task like flying a plane. Pilots must master multiple demanding tasks at once and also be prepared for disaster management. But pilots are at risk of cognitive overload, and once overloaded they start missing things, particularly while training. So people are now using brain-computer interfaces to monitor pilots during training and then build personalised training plans that keep them below the overload point. If the brain gets overloaded, it doesn’t learn.

It’s hard for the human brain to sustain attention for long periods of time. So, could embodied tutor agents monitor the attention levels of students? In experiments, when the learning content was adapted based on learners’ attention levels, they retained more.
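As an illustration of what that adaptation could look like, here is a toy sketch. The engagement index (beta power over alpha plus theta power) is a commonly used EEG engagement measure, but the function names, thresholds and actions here are all hypothetical, not taken from this research.

```python
# Hypothetical sketch of attention-adaptive tutoring; thresholds are invented.

def engagement_index(alpha_power: float, beta_power: float, theta_power: float) -> float:
    """A classic EEG engagement measure: beta / (alpha + theta)."""
    return beta_power / (alpha_power + theta_power)

def adapt_lesson(engagement: float) -> str:
    """Map the learner's engagement level to a tutoring action."""
    if engagement < 0.4:
        return "pause and recap the last section"
    if engagement < 0.7:
        return "continue at the current pace"
    return "introduce more challenging material"

print(adapt_lesson(engagement_index(alpha_power=1.2, beta_power=0.5, theta_power=0.9)))
```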

The brain at play

In gaming, a brain-computer interface isn’t there to take away the joystick, but it can offer other ways of controlling elements of a game. It can also be used for attention-training games, which are very useful for people with ADHD. Gamified neurofeedback enhances the impact of attention training.

In sports, professional athletes know that mental training is as important as physical training. They rehearse moves and techniques in their minds. That could be improved with gamified neurofeedback training, letting them see the impact of their mental rehearsal on the potential outcome of a game.

How about marketing? One firm is using neurofeedback to test packaging: if they can see signals of intense attention towards a piece of packaging, it suggests that people will learn more from it. We could also use this for environmental ends, by focusing consumers’ minds on the sustainable aspects of a product.

Do you want to stimulate creativity? Grab yourself some spatial computing goggles. Viewing 3D media activates creative thinking more than 2D images on a screen.

This technology is also allowing people to connect with artificial limbs. Alimardani showed an example of a man using an artificial limb to reach out and touch his girlfriend for the first time in five years.

Challenges

So, how do we bring this to the mainstream?

System performance

We still lack enough neural data, and we need to improve the hardware so that we can filter out background signals better when collecting data live.
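A concrete example of such background filtering: mains electricity leaks a strong 50 Hz hum into live EEG recordings (60 Hz in the Americas), which a notch filter can suppress. A minimal sketch with SciPy; the sampling rate and the synthetic signal are assumptions.

```python
# Sketch: suppress 50 Hz mains interference in an EEG channel with a notch filter.
import numpy as np
from scipy.signal import iirnotch, filtfilt

FS = 256  # assumed sampling rate in Hz
b, a = iirnotch(w0=50.0, Q=30.0, fs=FS)

# Synthetic stand-in: two seconds of one EEG channel plus 50 Hz hum.
t = np.arange(FS * 2) / FS
rng = np.random.default_rng(1)
eeg = rng.standard_normal(FS * 2) + 2.0 * np.sin(2 * np.pi * 50 * t)
clean = filtfilt(b, a, eeg)  # zero-phase filtering removes the hum
```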

Individual differences

We’re all different, and we produce different brain signals. So these systems need lots of data from each individual they work with.

Ethical considerations

What about brain hacking? The privacy of my own thoughts? There are three main areas of ethical concern:

  • Physical — is it safe to proceed with neural implants? How will they affect the anatomy of the brain? Will brain tissue accept them?
  • Psychological — will a person’s agency over their own body change over time, as new devices make them into a cyborg?
  • Social — what if your neural prosthetic slaps another person? AIs make mistakes, so who is responsible? And if implants become normal, what happens if you refuse one? If you refuse an implant that would address an issue, are you choosing to be a burden on society?