Part of the problem and part of the solution: the paradox of AI in fact-checking
The views expressed in this publication are those of the author and do not necessarily reflect the official stance of the European Digital Media Observatory.
Authors: Laurence Dierickx, Carl-Gustav Lindén, Duc-Tien Dang-Nguyen. University of Bergen, NORDIS.
The paradox of AI-based technology in fact-checking lies in its dual nature as both a tool to help verify facts and a tool to create or amplify information disorder.
AI was at the forefront of the Brussels EDMO Annual Conference organised in May 2024. The panel on “AI part of the solution” highlighted AI’s dual role in fact-checking and fighting misinformation. AI helps identify disinformation actors, trace narratives, and assist fact-checkers by spotting patterns and verifying claims. However, human-in-the-loop approaches are needed to guarantee the accuracy and reliability of the verdicts. At the same time, AI can increase both the quality and the quantity of disinformation, making it both an ally and an adversary of fact-checkers. One year later, how has this landscape changed?
Disinformation, censorship and ethical concerns
Over the past twelve months, much has been written and debated about whether “the AI threat” – specifically “the generative AI threat” – has been overstated. Of course, generative AI (GAI) did not invent disinformation; information disorder existed long before ChatGPT emerged, and many people have been exposed to misleading content online for years. Yet concerns about generative AI’s impact on disinformation are genuine, not just moral panic.
Malicious actors are increasingly using AI to amplify disinformation. One notable example is a Russian network that used AI chatbots to spread pro-Kremlin narratives, infiltrating major online platforms with false information. According to an audit by NewsGuard, the top ten generative AI tools contributed to Moscow’s disinformation efforts by repeating false claims from the pro-Kremlin Pravda network 33% of the time.
DeepSeek is an energy-efficient, open-source AI model that has gained attention for its low development costs and for activating only part of the model at inference time, which reduces computational demands. This makes it appealing to developers seeking affordable AI solutions. However, DeepSeek also has serious drawbacks. Developed under the supervision of the Chinese Communist Party, it strictly avoids sensitive topics, enforces censorship and monitors user behaviour, making it an instrument for propaganda. In addition, lawsuits over data protection violations are underway in several EU countries.
Meanwhile, Musk actively contributes to spreading disinformation on X (formerly Twitter), the platform he owns. Grok, its AI chatbot, has been widely criticised for generating and amplifying disinformation. It operates with minimal oversight, generating responses influenced by unfiltered and often misleading online discourse. The AI has been found to spread politically biased content, offensive language and conspiracy theories. One example of its inaccuracies: according to a study from Northwestern University in the US, Grok falsely claimed that Kamala Harris had missed ballot deadlines in nine states.
The new generation of search engines powered by large language models (generative AI models focused on textual content) also contributes to misinformation. A recent study by researchers at the Tow Center for Digital Journalism found that more than 60% of responses from AI-powered search engines were inaccurate. This highlights a broader challenge: while generative AI can improve information retrieval, it often fabricates or distorts facts, increasing the risk of fuelling misinformation. This is particularly worrying since LLM-based search engines have started to make their way into journalism and fact-checking practices.
Tools for amplifying information disorder
Researchers had already recognised the challenges posed by LLMs before ChatGPT’s launch. Bender et al.’s foundational paper on “stochastic parrots” emphasised these models’ tendency to hallucinate and to propagate biases. LLMs can be seen as ideal tools for disinformation disseminators to create and distribute manipulated or fake content quickly and on a large scale, which was particularly feared in the run-up to the EU elections. In the event, these fears proved largely unfounded, as AI-driven disinformation did not spread as widely as expected. That does not mean, however, that subsequent electoral campaigns will be immune. One of the lessons learned from EDMO’s Election Task Force is that preparedness allows for anticipating potential threats and establishing rapid response mechanisms.
The challenges extend beyond the creation of fake content; the technology primarily acts as an amplifier. Social media AI algorithms prioritise emotionally engaging content to maximise screen time for ad revenue, and this content often includes disinformation and propaganda. In addition, human amplifiers further accelerate the viral spread of misleading narratives. This was evident in the case of influencers paid to support the now-dismissed candidate in the Romanian presidential elections, demonstrating how algorithmically amplified content combined with coordinated human promotion can manipulate public perception at scale.
AI-related topics have not been particularly prominent in the fact-checks delivered by the EDMO network: fact-checks mentioning AI account for only 1.3% of the total, and those mentioning generative AI (GAI) for 1.34%. Their share has nonetheless grown over time, and the upward trend is slightly more pronounced within the NORDIS hub, where AI-related fact-checks have risen more sharply over the past year. This suggests that AI and GAI are gradually becoming more relevant topics in fact-checking, particularly in the Nordics.
While the proportion of deepfakes has not increased significantly, they remain a major concern due to their potential to deceive and manipulate audiences. A primary worry is the rapid evolution of techniques, particularly the rise of generative AI, which makes producing highly realistic fake content easier and faster. Another worrying trend is scammers using deepfake technology to impersonate celebrities, including politicians, and promote fraudulent cryptocurrency schemes. While not always politically motivated disinformation, this manipulation poses serious risks by misleading people and exploiting their trust for financial gain.
Using generative AI to fight against information disorder
GAI technologies, large language models (LLMs) in particular, have the potential to assist fact-checkers at various stages of their work. However, while AI can significantly aid the detection and mitigation of misinformation, risk mitigation strategies are required to ensure its responsible use. These can be approached through three complementary lenses: 1) AI literacy, to understand the limitations and capabilities of these tools and to use them responsibly; 2) ethics, to promote human oversight and validate all results produced by these systems; and 3) improved prompting techniques, to help minimise unintentional hallucinations and improve accuracy by guiding the model towards more reliable outputs.
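To make the third lens more concrete, the sketch below illustrates one common hallucination-aware prompting pattern for claim verification: constraining the model to supplied evidence and giving it an explicit way to say the evidence is insufficient. The prompt wording, the verdict labels and the call_llm placeholder are assumptions for illustration only, not a prescribed or standard setup.

```python
# Illustrative sketch of a hallucination-aware prompt for claim verification.
# `call_llm` is a placeholder for whichever LLM client an organisation uses;
# the prompt wording and verdict labels are assumptions, not a standard.

VERIFICATION_PROMPT = """You are assisting a human fact-checker.
Claim: {claim}

Evidence provided by the fact-checker:
{evidence}

Instructions:
- Base your answer ONLY on the evidence above; do not rely on prior knowledge.
- Quote the passage of evidence that supports your answer.
- If the evidence is insufficient, answer exactly: INSUFFICIENT EVIDENCE.
- End with one verdict label: SUPPORTED, REFUTED, or INSUFFICIENT EVIDENCE.
"""


def build_verification_prompt(claim: str, evidence_snippets: list[str]) -> str:
    """Assemble a prompt that constrains the model to the supplied evidence."""
    evidence = "\n".join(f"- {snippet}" for snippet in evidence_snippets) or "(none)"
    return VERIFICATION_PROMPT.format(claim=claim, evidence=evidence)


def verify_claim(claim: str, evidence_snippets: list[str], call_llm) -> str:
    """Send the constrained prompt to an LLM and return its raw answer.

    The output is treated as a suggestion for a human fact-checker to review,
    never as the published verdict.
    """
    prompt = build_verification_prompt(claim, evidence_snippets)
    return call_llm(prompt)
```

Grounding the model in retrieved evidence and offering an explicit “insufficient evidence” exit are low-cost ways to reduce hallucinated verdicts, but they complement rather than replace human review.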
In the Nordics, the integration of GAI-based tools in fact-checking varies from country to country. In Finland, they are used experimentally, mainly for data analysis and sometimes for tasks such as information retrieval through LLM-based tools such as Perplexity AI. However, their reliability remains an issue: one Finnish fact-checker noted that they have not yet found significant use for these tools, as they consider human expertise more reliable. In Denmark, GAI is not used for gathering evidence, and its use is limited to tasks such as translating emails and improving written content.
Meanwhile, in Sweden, the uptake of GAI is minimal, mainly due to scepticism and limited knowledge of its potential benefits. At the same time, there are notable examples where GAI has proven useful. For instance, Faktisk, the Norwegian fact-checking organisation, has used GAI to produce informative maps, demonstrating its potential to support fact-checking tasks when applied appropriately.
What’s next? The need to keep humans at the centre
Research shows that human expertise is still essential for AI-assisted fact-checking because AI systems cannot fully grasp context, intent or credibility. One challenge is encoding complex concepts such as ethics and critical thinking into algorithms. Although generative AI poses risks, ongoing research is developing tools to assist fact-checkers in navigating the ever-evolving landscape of AI models and manipulation techniques. In this area, LLMs are being explored to improve real-time coverage of political discourse or to explain the results produced by AI tools. Despite these advances, such tools are not yet fully operational, primarily due to their high computational resource demands.
One example that builds on these lessons is VeriDash, a project currently under development at the University of Bergen. It is an AI-driven, open-source dashboard designed to enhance the multimedia verification process for fact-checkers. VeriDash uses AI tools such as automated transcription and geolocation to simplify multimedia verification. Its key strength lies in its human-in-the-loop approach, which keeps human expertise central to fact-checking.
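The human-in-the-loop principle behind such dashboards can be illustrated with a minimal sketch: automated findings (a transcript snippet, a candidate geolocation) stay pending until a fact-checker accepts or rejects them, and only accepted findings count as evidence. The class and field names below are hypothetical, chosen for illustration; this is not VeriDash’s actual code or API.

```python
# Generic human-in-the-loop sketch: automated outputs are only suggestions
# until a human fact-checker reviews them. Names are hypothetical, not VeriDash's API.
from dataclasses import dataclass, field


@dataclass
class Suggestion:
    """An automated finding (e.g. transcript snippet, candidate location)."""
    source_tool: str          # e.g. "transcription", "geolocation"
    content: str              # what the tool proposed
    confidence: float         # tool-reported confidence, 0.0-1.0
    status: str = "pending"   # pending -> accepted / rejected by a human


@dataclass
class VerificationCase:
    """One piece of multimedia under verification."""
    media_url: str
    suggestions: list[Suggestion] = field(default_factory=list)

    def review(self, index: int, accept: bool, reviewer: str) -> None:
        """A human decision is required before any suggestion becomes evidence."""
        verdict = "accepted" if accept else "rejected"
        self.suggestions[index].status = f"{verdict} by {reviewer}"

    def evidence(self) -> list[Suggestion]:
        """Only human-accepted suggestions count as evidence for the fact-check."""
        return [s for s in self.suggestions if s.status.startswith("accepted")]
```

The design choice this sketch highlights is simply that the system records AI outputs and human decisions separately, so no automated result reaches a published fact-check without explicit review.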
What’s next is also about strengthening collaboration between AI practitioners and fact-checkers, building a bridge between human expertise and AI tools. Initiatives such as the ACMMM25 Grand Challenge on Multimedia Verification are critical to fostering this synergy and encouraging innovative solutions to combat misinformation by pooling forces and knowledge. Other notable examples include Horizon Europe projects supporting such collaboration and advancing responsible AI in fact-checking.
With AI now responsible for generating at least 57% of online content, concerns about truth and trust are growing. Malicious actors use AI-generated synthetic media to spread targeted false narratives, blurring the lines between fact and fiction and creating a growing challenge for all stakeholders engaged in combating information disorder.