LJ Rich: The Taste of Music, Synesthesia and the future of AI creativity

AI is limited by what we train it on. But, as people who perceive the world differently – like LJ Rich – well understand, diversity of perception can trigger creativity.

World-renowned musician and BBC TV broadcaster LJ Rich presents on the international technology show, BBC Click. A highly accomplished pianist, she uses music to connect the emotional and rational minds of audiences, unlocking creative thinking and solving real-world problems.


Watch the complete performance


Human inspiration is very hard to explain.

Gershwin was inspired by the sound of a clarinet warming up in an orchestra – it was his prompt. His culture, his experience, his training model all helped him write Rhapsody in Blue. LJ Rich’s training model is infused with synesthesia. She hears flavour, can remember every song she’s heard, and the sound of The Beatles tastes like iced buns to her. She also sees sound: music appears to her in 3D and in colour, with colours matched to pitch.

Imagine growing up thinking that everyone’s brain was like this. She struggled with things everyone else sees as normal, like mixed flavours and tactile wallpaper in restaurants, which created sensory overload. Her subjective experience was very different from other people’s.

AI experiences senses differently

AI training is deeply subjective – and that impacts its output. Most music AI is trained on Western music, following a well-worn format. Many, many songs are based on a four-chord structure, one we feel familiar with. The Spice Girls and Marvin Gaye have the same chords in a different order. Even musicians find that hard to spot – but machines are really good at it.
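The point above – that machines easily spot the same chords rearranged – can be sketched in a few lines. This is a minimal, hypothetical illustration: the function name and chord lists are invented for the example, not drawn from the talk or from real transcriptions of either song.

```python
# Sketch: the kind of pattern-matching machines excel at - noticing that
# two progressions use the same chord set in a different order.

def same_chords_reordered(prog_a, prog_b):
    """True if the two progressions contain exactly the same chords,
    but played in a different order."""
    return set(prog_a) == set(prog_b) and prog_a != prog_b

# Illustrative four-chord progressions (the common I-V-vi-IV family in C major)
song_a = ["C", "G", "Am", "F"]
song_b = ["F", "C", "G", "Am"]

print(same_chords_reordered(song_a, song_b))  # True: same chords, new order
```

A human listener hears two very different songs; the machine, comparing the underlying chord sets, sees them as near-identical.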

On the other hand, humans understand the feel and sound of music, and machines really struggle with that. We recognise Beatles songs not because of data in the song, but because of data in us. She managed to make a Beatles-like version of Call Me Maybe using AI – but it took her 10 hours, after many days of reading and learning. Now you can do similar things in minutes, but the results tend to go weird after about 30 seconds – and it’s all based on Western music.

Humans understand words like “funky” – machines struggle. Building a diversity of human experience into the model enhances it – and that can make AI better.

However, what machines can’t do is original creativity. True creativity is still absent from them. So, we remain the creative forces, and to refresh our creative ability, we need to seek out new experiences and expand our own training set…