
Prebunking AI-generated disinformation ahead of EU elections

European citizens from the 27 member states will vote for the next EU Parliament between June 6 and 9. During the mandate of the current Parliament, the EU faced huge and unprecedented challenges, from the Covid-19 pandemic to the war in Ukraine, which left deep scars in many European societies. Disinformation – more than 85% of people are worried about its impact, and 87% believe it has already harmed their country’s politics, according to a recent UNESCO global survey – is lurking ahead of the elections, ready to exploit the cracks and turmoil in European countries, to polarize societies, and potentially to advance the interests of various actors, both domestic and foreign.

The ability of AI to generate content that can be used to promote disinformation is, according to research papers and various experts, one of the most concerning developments in the field in recent years. In March, the EU Parliament adopted the AI Act, a comprehensive regulation of AI, but its provisions will not take effect in time for the 2024 EU elections. Other EU legal tools – in particular the DSA and the DMA – can potentially play a role, but legislation in this field is relatively new and largely untested.

So far, AI-generated content still represents a small minority of the total disinformation detected in the EU, according to the monthly reports of the EDMO fact-checking network, but some relevant cases have circulated in recent weeks and months and have been debunked by fact-checking organizations.

Considering four categories of content – AI-generated images, audio, video and text – and looking at recent precedents (many of them listed in this report of the EDMO Taskforce on EU Elections and in the EDMO monthly briefs), what can reasonably be expected for the upcoming EU elections?

AI-generated images

In recent months, various AI-generated images have been spread in different EU countries to convey disinformation. For example: an image of tractors and straw bales in front of the Eiffel Tower in Paris, meant to exaggerate the scale of farmers’ protests; an image of Frans Timmermans, former vice-president of the EU Commission and candidate for the GroenLinks–PvdA alliance in the 2023 Netherlands elections, supposedly flying back and forth to Málaga on a private jet with a lavish meal on the table, captioned as evidence of his immorality; an image of Donald Trump with Jeffrey Epstein, meant to discredit the former US president; an image of a white homeless mother with children, spread to stir animosity against migrants amid the housing crisis in Ireland; and an image of a room with its entire floor littered with pizza boxes, allegedly at the German Green Party conference, used to attack the party.

In all these cases, the evidence that the images were not real but AI-generated lay in the details of the images themselves: hands, eyes, shapes, and shadows often look odd, if not entirely unreal, in AI-generated images. The relative ease with which experts, and often ordinary users, can spot such images is one of the most likely reasons why this technology is not yet a major tool for producing disinformation. In recent crises, like the Israel/Hamas conflict, the well-known technique of re-sharing old images, captioned misleadingly to suggest a link to current events, was more common than AI-generated imagery.
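To give a concrete idea of how fact-checkers flag such recycled images, a common first step is comparing a suspect image’s perceptual hash against an archive of previously verified material, since perceptual hashes survive the re-compression and resizing that social platforms apply. The Python sketch below illustrates the idea, assuming the Pillow and imagehash libraries; the archive folder and file names are hypothetical, and real workflows rely on reverse-image-search services rather than a local folder.

```python
# Minimal sketch: flag a possibly recycled image via perceptual hashing.
# Assumes the Pillow and imagehash libraries; the archive is hypothetical.
from pathlib import Path

import imagehash
from PIL import Image


def build_archive_index(archive_dir: str) -> dict:
    """Hash every previously verified image in a local folder."""
    return {
        path.name: imagehash.phash(Image.open(path))
        for path in Path(archive_dir).glob("*.jpg")
    }


def find_recycled(suspect_path: str, index: dict, max_distance: int = 8) -> list:
    """Return archive images whose perceptual hash is close to the suspect's.

    A small Hamming distance suggests the 'new' image is an old one,
    possibly re-captioned to look like current events.
    """
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    return [
        (name, suspect_hash - known)  # '-' yields the Hamming distance
        for name, known in index.items()
        if suspect_hash - known <= max_distance
    ]


if __name__ == "__main__":
    index = build_archive_index("verified_archive")  # hypothetical folder
    for name, distance in find_recycled("viral_post.jpg", index):
        print(f"possible match: {name} (distance {distance})")
```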

Considering these precedents and the current ability of AI to generate images, a limited and not particularly impactful use of this kind of content to spread disinformation ahead of the EU elections can reasonably be foreseen. AI-generated images can be expected to foster various narratives: that elections are rigged, that ballots are tampered with, that illegal migrants vote; that political figures did things they never did; or to convey emotional messages about hot topics of political debate.

However, their quality should allow quick and effective detection and debunking, even by average users. For this reason, the quantity of AI-generated images fostering disinformation may well not grow significantly before the elections beyond the levels observed in previous months. Re-sharing old, misleadingly captioned images remains the technique more likely to be widely used to spread disinformation.

Obviously, this scenario could change rapidly should AI technology evolve to generate images that are harder to distinguish from real ones.

AI-generated audio

Considering the precedents, AI-generated audio spreading disinformation represents the biggest concern at the moment. This technology can already create content that average users find very difficult to recognize as AI-generated: audio lacks the visual elements that often provide crucial hints of artificial origin, and even AI detection tools seem to fall short.

One example in particular shows how harmful this disinformation technique can be ahead of an election. In Slovakia, just two days before the September 2023 parliamentary elections, thousands of social network users shared a suspicious audio file purporting to be a recording of a telephone conversation between Michal Šimečka, chairman of Progresívne Slovensko (Progressive Slovakia, PS), and the journalist Monika Tódová, in which they discussed how to manipulate the elections in PS’s favor, including by buying votes from the Roma community.

The quality of the audio was relatively poor, so various experts were able to conclude with a high degree of probability that it was AI-generated. Still, the audio circulated widely during the election moratorium, which made any reaction by the targeted party more difficult.

Only a few days earlier, Šimečka had already been targeted by a similar manipulation: posts appeared on social networks with an alleged audio recording of him discussing the party’s plans for a sharp increase in beer prices after the elections. The quality of the audio was again poor enough for experts and tools to quickly establish its AI origin, but in the meantime the false audio had circulated virally just days before the vote.

Another relevant example is the AI-generated audio of US President Joe Biden: two days before New Hampshire’s primary in January 2024, thousands of people in the state received what appears to have been a robocall generated by artificial intelligence, in which a voice that sounded very much like Joe Biden told people not to vote in the primary election. Another is the fake audio of Alexei Navalny’s mother accusing his wife, Yulia Navalnaya, of being a heartless opportunist essentially responsible for her son’s death.

In light of these precedents, AI-generated audio content is likely to appear ahead of the next EU elections, at least in some countries. It could be used to discredit politicians, promote conspiracy theories about election integrity, and/or discourage democratic participation. The number of such incidents will probably not be very high: in many EU countries disinformation is not yet strongly characterized by this technology, and the quality is still not high enough to prevent experts from detecting the content’s artificial origin. Still, average users could well be tricked by AI-generated audio, and should this happen close to the elections, it will be hard for any debunking to reach all the citizens exposed to the false audio in time.

A separate but connected possibility is worth mentioning: the release of a genuine recording documenting something that actually happened ahead of the vote – such as a troubling or embarrassing statement by a candidate caught off-mike – with the person(s) involved claiming it is AI-generated fake content. If disproving such claims takes longer than the time remaining before the vote, those spreading disinformation can benefit from the confusion. Incidentally, it is important to remember that the main goal of disinformation is seldom to convince people of the falsehoods it spreads, but rather to send the message that it is impossible to know the truth and that every version of reality is equally legitimate.

AI-generated video

The technology to produce AI-generated video is evolving fast. Recently, various videos generated by AI from text prompts have been shared online, showing how difficult it can be for an average user to identify the content as artificial. The most advanced tool of this kind is perhaps Sora, from OpenAI, the company that created ChatGPT. When announced, Sora showcased the ability to generate more realistic videos than ever, but it is not yet publicly available, probably in part due to concerns about potential abuse, among which mis/disinformation looms large.

The limited release of the best technologies is one of the most likely reasons why fact-checkers in Europe have not yet detected disinformation content created this way: currently available text-to-video output is rarely realistic enough to pass as real footage. The risk of video content entirely generated with AI circulating ahead of the next EU elections with the aim of influencing them therefore seems low.

Much more likely ahead of the vote for the EU Parliament is disinformation content in which the audio is AI-generated or altered and the video is only slightly modified to match it. The alteration may be limited to the speaker’s lips and facial expressions, so that the fabricated words appear synchronized with the face. The technology to produce this lip-sync effect is already developed, if often imperfect, and widely used.

Different organizations of the EDMO fact-checking network have detected content of this kind in recent months. For example, a France24 broadcast was manipulated through lip-syncing to make one of its journalists say things he never said: in the doctored video, the journalist states that French President Emmanuel Macron was forced to cancel his visit to Ukraine over fears of an assassination attempt. A BBC video of Greta Thunberg was altered to make her say that human beings should opt for “sustainable tanks and weapons” if they want to continue fighting wars. A video of the Czech actor Ondřej Vetchý, who launched a fundraiser to purchase drones for Ukraine, was manipulated to make him state that Czechs should worship Stepan Bandera (a far-right Ukrainian nationalist politician who collaborated with the Nazis during World War II). A video of the former head of the Ukrainian Armed Forces, General Valery Zaluzhny, was doctored to make him accuse President Zelensky of being a traitor and call on Ukrainians to rise up against his government. Many other detected cases, often featuring politicians (e.g. Meloni, Babiš, Denkov), were scams.

Considering these precedents, similar content can be expected to emerge ahead of the 2024 EU elections to discredit politicians, promote conspiracy theories, and/or spread false news that could damage the democratic process. The threat can be assessed as slightly smaller than that posed by AI-generated audio, because video offers more elements – to experts but also to average users – that enable detection of its artificial origin. Moreover, the existence of an original video with different audio can usually be verified quite easily. Still, this kind of content could potentially trick citizens before they vote.
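As an illustration of how such verification can work, a suspect clip can be compared frame by frame with the known original footage: in lip-sync forgeries the visual alteration is confined to a small region of each frame, so whole-frame perceptual hashes stay close to the original while the soundtrack diverges entirely. Below is a minimal Python sketch of this comparison, assuming OpenCV, Pillow and imagehash are installed and both files are available locally; the file names are hypothetical.

```python
# Minimal sketch: check whether a suspect clip visually matches original
# footage, suggesting only the audio was replaced. File names are hypothetical.
import cv2  # OpenCV, for frame extraction
import imagehash
from PIL import Image


def sample_frame_hashes(video_path: str, every_n: int = 30) -> list:
    """Perceptually hash one frame out of every `every_n` frames."""
    hashes = []
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    capture.release()
    return hashes


def frames_match(suspect: str, original: str, max_distance: int = 10) -> bool:
    """True if sampled frames of the two clips are pairwise near-identical."""
    pairs = zip(sample_frame_hashes(suspect), sample_frame_hashes(original))
    return all(a - b <= max_distance for a, b in pairs)


if __name__ == "__main__":
    if frames_match("viral_clip.mp4", "broadcast_original.mp4"):
        print("Visuals match the original: the audio is the likely forgery.")
    else:
        print("Visuals differ: further analysis is needed.")
```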

AI-generated text

AI-generated text poses two different kinds of problems. If the text is generated at a user’s request and the user does not make it public, there is a risk – particularly significant for Large Language Models (LLMs) that lack strong safety guardrails and/or rely on controversial datasets – that the information the user is directly exposed to is incorrect or misleading, while external parties (fact-checking organizations, experts and so on) have no access to the content to verify it. As EDMO pointed out in a dedicated article, there is a particular risk of users believing that answers from LLM-based chatbots are the most correct answers available, rather than merely convincing ones.

On the other hand, if AI-generated texts are publicly available – e.g. on news websites made entirely of this kind of content, or when private users share the results of their prompts online – disinformation will flow much as “traditional” human-written false news does. One problematic aspect is volume: AI theoretically allows the creation of huge amounts of textual disinformation at low cost and in very little time. Even more concerning, however, is the possibility of automating with AI not only the creation but also the distribution and amplification of this avalanche of disinformation content, e.g. through bots.
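To illustrate why this kind of coordination is detectable, bot-driven amplification typically betrays itself through large numbers of near-identical posts published in a short time span. The dependency-free Python sketch below flags near-duplicate texts via Jaccard similarity over character shingles; the sample posts and threshold are illustrative assumptions, and platforms’ production systems rely on far richer signals (accounts, timing, networks).

```python
# Minimal sketch: flag near-duplicate posts that may indicate coordinated,
# automated amplification. Sample posts and threshold are illustrative only.
from itertools import combinations


def shingles(text: str, k: int = 5) -> set:
    """Break a post into overlapping k-character shingles."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return {text[i:i + k] for i in range(len(text) - k + 1)}


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


def near_duplicates(posts: list, threshold: float = 0.8):
    """Yield pairs of posts whose shingle sets overlap heavily."""
    shingled = [(post, shingles(post)) for post in posts]
    for (p1, s1), (p2, s2) in combinations(shingled, 2):
        if jaccard(s1, s2) >= threshold:
            yield p1, p2


if __name__ == "__main__":
    posts = [
        "The election is rigged, don't bother voting on Sunday!",
        "the election is RIGGED, don't bother voting on Sunday!!",
        "Lovely weather for the market this weekend.",
    ]
    for p1, p2 in near_duplicates(posts):
        print(f"possible coordinated pair:\n  {p1!r}\n  {p2!r}")
```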

It is not unlikely that this will happen ahead of the next EU elections, but its impact could be limited relatively easily by rigorous application of the main social media platforms’ (e.g. Meta, X, TikTok) policies for detecting and countering inauthentic behavior and coordinated campaigns to disseminate disinformation.

Conclusion

AI-generated content used to convey disinformation is a significant risk ahead of the EU elections in 2024. It can be used to discredit politicians and/or other public figures, to spread false allegations and conspiracy theories and, in general, to attack the integrity of the democratic process.

AI-generated images appear to pose a medium risk, considering their current quality. AI-generated videos are potentially harmful, in particular those where the video is real and the audio is altered with AI, but for now they are relatively easy to debunk. AI-generated texts currently pose a relatively low risk, provided that any campaign exploiting the potentially huge volume of content they enable is countered quickly and effectively. AI-generated audio seems to pose the biggest risk: it is harder to debunk and can more easily trick users.

The first and most important line of defense is citizens’ awareness. When faced with content that conveys a strong emotional message, could influence their determination to vote or their political preferences, and is not confirmed by traditional media, citizens can significantly reduce the spread of disinformation by nurturing healthy skepticism, looking for potentially revealing details, and awaiting further confirmation before sharing.

Tommaso Canetta, deputy director of Pagella Politica/Facta News and coordinator of EDMO fact-checking activities

Photo: Canva, Tommaso Canetta