The Monetisation of Outrage: From AI Avatars to Electoral Interference
The views expressed in this publication are those of the author and do not necessarily reflect the official stance of the European Digital Media Observatory. This text has been published as part of the first edition of the new monthly EDMO Signals & Noise newsletter.
Author: Matteo Bergamini MBE, Founder and CEO of Shout Out UK
In the modern digital landscape, the line between online subculture and democratic threat has grown dangerously thin. For years, those of us working in media literacy have warned that the profit motives of big tech platforms are fundamentally at odds with the integrity of our information ecosystem. But, for Shout Out UK (SOUK), it was the targeted hijacking of a fictional schoolgirl named ‘Amelia’ that transformed these theoretical concerns into an urgent mission to dismantle the business model of digital hate and disinformation once and for all.
In 2023, SOUK, the non-profit I founded in 2015 dedicated to promoting and delivering political and media literacy education, launched Pathways, a Home Office-funded interactive game designed to build resilience against radicalisation through critical thinking. However, after the game was misrepresented in the press, extreme right-wing networks, most of them based outside the UK, seized upon the story and began spreading AI-powered deepfakes and disinformation. Within hours, Amelia, a schoolgirl character from the game, had been transformed into a vehicle for extremist imagery, antisemitic and Islamophobic tropes, and sexualised AI-generated content.
Crucially, this was not just a campaign of harassment and disinformation; it was a financial operation. Anonymous developers on Telegram launched a meme coin called ‘$AMELIA’ to profit from the viral hostility. AI chatbots, including Grok, were deployed to generate memes at scale and provide false updates on the story, ensuring the “outrage” remained profitable for as long as possible. We watched in real time as a minor character in a localised game was weaponised into a “pump and dump” scheme targeting an audience of often lonely, predominantly white boys.
The “Amelia” incident provided a chilling blueprint for how quickly, in the age of AI, digital media can be weaponised by extremists to drive revenue and spread disinformation and hate. As part of our ongoing commitment to uncovering these operations, SOUK partnered with The Bureau of Investigative Journalism (TBIJ) on a recent joint investigation, in which reporter Effie Webb exposed similar online hustles that are becoming increasingly common. The same playbook is now being used in an even more sophisticated way in the case of Danny Bones, a supposedly working-class British rapper who is, in fact, an AI-generated character created by the anonymous “Node Project”.
The reaction to the TBIJ investigation followed the Amelia pattern with terrifying precision. When exposed, the Node Project doubled down. They used the media attention to cast themselves as victims of mainstream media smears, rallying supporters to join paid membership tiers ranging from £20 to £100. Simultaneously, four Bones-themed meme coins appeared on the Solana blockchain, with users in crypto circles explicitly citing the media coverage as a reason to “pump” the token.
As engagement has begun to drive direct revenue, the content has become even more extreme. Danny Bones has moved from political commentary to posting anti-Muslim material and AI-generated imagery of masked men storming the UK Parliament. This is the monetisation of outrage in its purest form: a cycle where extremist content drives views, views drive algorithmic amplification, and amplification drives profit.
This experience has made one thing clear: reactive content moderation is no longer enough. We must move toward systemic solutions that obstruct the ability of platforms and influencers to monetise disinformation. Working with the APPG on Political and Media Literacy, a cross-party group of UK parliamentarians, we are looking to support lawmakers currently scrutinising the Representation of the People Bill, a piece of legislation designed to strengthen election integrity. To this end, we have developed a policy brief with our core recommendations:
- Mandatory Demonetisation: Platforms like TikTok, X, and Spotify must follow YouTube’s lead and ensure that AI-generated political manipulation, disinformation and hate speech are ineligible for ad revenue or creator funds.
- Proactive Enforcement: The UK’s regulator Ofcom must have the power to suspend monetisation privileges for repeat offenders who use AI to bypass community standards.
- A “Levy for Literacy”: Alongside the UK House of Lords’ Digital and Communications Committee, we are calling for a 1% levy on the UK profits of major tech platforms. These funds should be used to support teacher training and media literacy initiatives, providing a necessary counter-balance to the harms created by algorithmic amplification.
We are at a pivotal moment in the UK, where the transition from legislation to active enforcement is being shaped. These AI-driven, for-profit disinformation schemes also represent a stress test for the EU’s Digital Services Act. Unless platforms do more to prevent the monetisation of this synthetic content, the DSA’s promise to defend the integrity of the information ecosystem will remain unfulfilled. By dismantling the profit motives behind digital disinformation, we can ensure that democracy across Europe remains a space for human discourse, not artificial manipulation.