Nina Jankowicz is an internationally recognised expert on disinformation and democratisation, and one of TIME Magazine’s 100 most influential people in artificial intelligence.
These are live-blogged notes from a session at NEXT25 in Hamburg. Posts will be improved over the next few days.
There are things we need to worry about, in particular the way AI is interacting with our democracy. AI is being used to mislead, but not in the way we were told. President Trump shared a deepfake of himself and Obama in the Oval Office, followed by Obama being arrested. An Indian news outlet then produced an AI-generated article on the subject, which fabricated a quote from Jankowicz. Other AI systems picked this up, and the invented quote circulated among them as truth.
It was an AI content ouroboros. People are treating AI as the truth, when we should be treating it as an intern.
AI and elections
Last year was both the year of AI and the year of elections. We saw some deepfakes circulating, but they didn’t have a major impact. Russia was using AI chatbots to create content targeting the most politically vulnerable people in the US. But the operation, run by a US citizen who had moved to Russia, was exposed before it had too much impact.
In 10 years, we’ve moved from troll farms to one person with a chatbot and an AI image generator. We used to identify disinformation by mistakes in idiom or grammar; that doesn’t work any more. Chatbots can write perfect copy, time its distribution, and create images to go with it.
Last year, there was still a lot of old-school disinformation: hacks and leaks, paying influencers to spread messages. Our worst fears about 2024 weren’t realised. But that doesn’t mean we shouldn’t worry.
Corrupting the LLM

The Pravda network, part of the operation researchers call Portal Kombat, is a global network of disinformation sites. They all look like 1998 websites, with poor UX. Copy is plainly duplicated across them, with obvious artefacts of auto-translation. Researchers found 97 low-quality sites pumping out over 3 million articles per year. Why?
They were LLM grooming: the content was meant to be ingested by machines, not read by people. Even today, if you ask LLMs about the Pravda network, they’ll spit out a lot of what it reports as if it were fact.
AI can’t reason, so we need to teach users to reason. That’s why the intern analogy is so important: we wouldn’t publish something written by an intern without checking it. And we know our adversaries are trying to manipulate these models.
Grok, Musk’s AI, claimed that a real image of troops sleeping on the floor in LA was from the US troop withdrawal from Afghanistan…
Reclaiming reality

So, what can we do?
One thing is the effort to emphasise content provenance. Adobe has been working on its Content Authenticity Initiative since 2019. It attaches provenance watermarks to media edited with its tools, allowing us to track the changes made. Eventually, hopefully, people will stop trusting media that lacks them.
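The underlying idea is that an edit record is cryptographically bound to the exact bytes of the media, so any later tampering invalidates the claim. Here’s a minimal, purely conceptual sketch in Python; the real C2PA/Content Credentials format is far more involved (embedded manifests, certificate chains, public-key signatures), and every name below is illustrative, with an HMAC standing in for a real signature:

```python
# Conceptual sketch of content provenance, NOT the real C2PA format.
# We bind a claim about who/what edited an image to a hash of its bytes,
# then "sign" the claim. Editing the bytes breaks the binding.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate


def make_claim(image_bytes: bytes, tool: str, action: str) -> dict:
    """Record what touched the content, bound to its exact bytes."""
    claim = {
        "tool": tool,      # e.g. "Photoshop"
        "action": action,  # e.g. "colour-corrected"
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    # Real Content Credentials use public-key signatures; HMAC is a stand-in.
    claim["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return claim


def verify_claim(image_bytes: bytes, claim: dict) -> bool:
    """Check the signature is intact and the bytes match the claim."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (
        hmac.compare_digest(expected, claim["signature"])
        and unsigned["content_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    )


original = b"...image bytes..."
claim = make_claim(original, "Photoshop", "colour-corrected")
assert verify_claim(original, claim)             # untouched: checks out
assert not verify_claim(original + b"!", claim)  # edited: claim fails
```

The point of the design is the last line: once content and claim are bound together, an undisclosed edit doesn’t just go unnoticed, it visibly breaks the credential.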
AI companies need to take reasonable steps to ensure models aren’t being influenced by those who don’t want people to have access to good information. Russia is trying to flood them with bad information, so that people are misinformed when they go to the ballot box. Make sure your models are trained on the best sources.
And if you train on bad sources, add a disclaimer.
Acknowledge the limits of the tools. Tell people they are not infallible. They are not all-powerful and all-knowing. We need to ensure their output is something we can stand behind in a court of law.
Our enemies will not stop using these tools. So we need to adapt. Thomas Jefferson was a proponent of educating the populace to help them understand the reality of the world. We have to educate the populace about what AI tools are – and aren’t.