A wave of generative AI disinformation?
Claes de Vreese
Distinguished University Professor of AI & Society
University of Amsterdam
The broad rollout of generative AI tools poses new challenges for democracies: generative AI offers the opportunity to produce inauthentic content at scale. This affects both the detection and the combating of misinformation. These developments take place at a time when technology is changing rapidly, democratic institutions are under pressure, elections in big countries are underway, and new regulatory frameworks are still in the making.
Key words: artificial intelligence, generative AI, disinformation, democracy
Research on disinformation is booming. Valuable steps are being taken towards the detection of disinformation, unravelling disinformation strategies and weaponization, analyzing patterns of disinformation, understanding the psychological underpinnings of its effects, and charting citizens’ responses to, sharing of, and competences in dealing with disinformation.
This research is important, is relevant to society, and ideally informs public discussions of the phenomenon as well as policy initiatives, training, and literacy programs. The work conducted in the EDMO hubs across Europe plays a pivotal role, and EDMO facilitates overviews of newly published research in the field.
So far, so good. The recent mass-scale rollout of, and access to, generative artificial intelligence tools is likely to mark a new wave of disinformation in which these technologies play a key role. Artificial intelligence obviously has a longer history. However, the initial waves of AI innovation were largely inconsequential for the news and politics ecosystem. The recent releases of large language models, alongside generative image and sound tools, significantly alter this relationship: the opportunities are abundant for producing content at large scale that is auto-generated, inauthentic, and potentially outright hallucinated and false.
AI and (political) journalism
A significant sub-domain is developing around changes in (political) journalism as a function of AI. AI can be deployed in political journalism in anything from the research phase (idea generation, data mining) to specifying story angles, fact-checking, and drafting text. AI models can already provide text drafts, which can then be further refined and edited. There is a lot of emerging scholarship on this topic, and Nicholas Diakopoulos maintains an overview of the various ways generative AI affects the newsroom (https://generative-ai-newsroom.com). Pavlik (2023) wrote, with ChatGPT, an essay about the capacities and limitations of generative AI models for (political) journalism and highlighted the need to incorporate this in journalism and media education.
AI and mis/disinformation
There is great concern that new AI technologies will further amplify concerns about mis- and disinformation. Readily available tools may lead to an influx of deepfakes, also dubbed ‘cheap fakes’ (Vaccari & Chadwick, 2020) because of the easy access to such technology. Research on this topic is divided on what the effects are. Dobber et al. (2021) conducted one of the first studies of the impact of deepfakes (manipulated, AI-generated visual content): participants who saw a deepfake about a politician held more negative attitudes toward the politician than people who viewed a neutral control video. This study established that deepfakes indeed have the potential to cause harm. Hameleers et al. (2022), however, found that, contrary to expectations and public concerns, deepfakes are not necessarily more credible than text- or image-only disinformation. Much more research will emerge on this topic. AI and political communication research will foreground work on authenticity in the context of mis- and disinformation. AI will be leveraged to detect inauthentic text, visuals, and sound, but the technologies to create these are likely to develop faster, in combination with advanced amplification techniques and advances in recommender systems.
Politics of AI
Another emerging topic, which is not central to the political communication field but increasingly defines its boundary conditions, revolves around the politics of AI. This pertains to geopolitical questions (the global AI ‘race’), the political economy of AI businesses, and the regulation of AI (Mugge, 2023). The discussions about the disruptive nature of AI technologies are loud (and include letters from captains of industry calling for a temporary pause on AI developments). At the same time, legislators worldwide are working on some of the first (inter)national AI regulations. These should be seen in the light of more fundamental legal arrangements around, for example, human rights and intellectual property, as well as in the light of new regulations such as the European Union’s Digital Services Act and Digital Markets Act. Collectively, these fundamental and new pieces of regulation set the parameters for the rollout of AI, also in the realms of media, politics, and democracy. On top of this, the US, China, and the EU, for example, are far advanced with some of the first legislation on AI specifically. This is likely to introduce guardrails and limitations on the adoption of these techniques in some areas and to ensure transparency and user agency in others. Debates about such regulatory developments may trend towards the negative, focusing on AI as a challenge to public values and society, while questions about how to leverage and build systems that are optimized for advancing public values receive less attention (Helberger & Diakopoulos, 2022).
Politics with AI
With generative AI tools widely available, the opportunity to deploy them in political campaigns is also there. In 2023 we are witnessing a first round of AI-generated ads, ranging from the anti-Biden ad launched on the day he announced his re-election bid to audio and visually AI-manipulated content in the Slovakian and Polish elections. With 2024 being a ‘super election year’, with elections in places like Taiwan, Indonesia, Mexico, the US, and the EU, it will be interesting to monitor which campaigns and actors choose to (not) deploy AI techniques as part of their campaigns. The ability to create content and spread mis- and disinformation in political campaigns, possibly in combination with online (micro)targeting, is a daunting challenge for democracies.
In the coming period we will have to come to terms with pertinent issues around training data, biases, intellectual property, and the first AI regulations, to mention just a few. These all have a bearing on understanding, mapping, and combating misinformation. We know a lot about literacy interventions, fact-checking, inoculation, and corrective measures. But it is probably also safe to say that we are largely unprepared for this wave. Research must step up to the challenge, but this is only possible with the right conditions and resources. The new provisions of the Digital Services Act on data access for researchers take effect in 2024, and this should go hand in hand with support schemes for researchers to investigate these questions. The questions raised by generative AI around democracy, authenticity, and truthful information are too important not to be at the very centre of our attention.
Published under Creative Commons Attribution-NonCommercial 4.0 International Public License (CC BY-NC 4.0)