Meet the Future of AI 2024 – Generative AI and Democracy: A Summary of Event Findings
Following the success of the 2023 edition and the White Paper that resulted from it, researchers from EU-funded projects on AI and disinformation, working in dialogue with EDMO, have once again joined forces to hold the second edition of the ‘Meet the Future of AI’ event.
Generative AI technology continues to evolve at a rapid pace, with new Large Language Models (LLMs) appearing on a regular (monthly or even weekly) basis. Hyper-realistic AI-generated and manipulated media, including images, audio and video, are becoming widely accessible through a variety of commercial and open-source tools. Off-the-shelf software, from browsers to productivity applications, now integrates generative AI tools.
The second edition of ‘Meet the Future of AI: Generative AI and Democracy’, held in Brussels on 19 June 2024, addressed the use of AI for and against disinformation, its perception by media professionals and citizens, the relevant regulatory landscape, and the significant potential of generative AI as a basis for fighting disinformation. The first edition of the event mapped the domain and identified key challenges. Disinformation remains a pressing concern in 2024 and tops lists of short-term AI risks, such as the World Economic Forum’s Global Risks Report. These meetings are a collaboration of European projects, including AI4Media, Titan, veraAI, AI4Trust, AI4Debunk and AI-CODE.
Numerous developments over the last year have increased the risks of AI. This is one of the most election-dense years in recent history, and the fact that election periods offer fertile ground for disinformation campaigns raises questions about the potential role of generative AI as a tool for voter manipulation. Several national elections have taken place or are taking place in 2024, alongside the European Parliament and US Presidential elections. This has also been the year in which the Digital Services Act (DSA) entered into force, and in which the much-debated AI Act took its final shape and was approved by the EU. Several questions arise as to whether these regulations are appropriate and sufficient to mitigate the risks arising from the wide deployment of AI in an increasingly digitalised society.
Our event speakers and participants brought different perspectives and lessons learned to the table, all in an effort to make sense of this complex and fast-evolving landscape. A summary of the talks is available in this veraAI blog post. From these numerous and diverse talks and discussions, some general conclusions can be drawn; they are highlighted below:
- Is AI-generated disinformation already mainstream? While there was a marked increase in disinformation cases involving synthetic images, deepfake videos and voice cloning compared to previous elections, the volume and impact of AI-generated disinformation in recent elections were lower than anticipated (or feared). This may be attributed to a number of factors, including the continued and effective use of non-AI disinformation tactics (e.g. ‘cheapfakes’), fast debunking by fact-checking organizations of the cases that did emerge, and the efforts of digital platforms to detect AI-generated media and prevent their spread.
- How do citizens perceive generative AI and AI-generated disinformation? Disinformation researchers and experts often presuppose a digital society in which generative AI technologies are well understood and ubiquitously used. In reality, recent surveys indicate that only a relatively small share of citizens are aware of and appreciate the capabilities of modern generative AI tools, and that the large majority of this group is familiar only with ChatGPT. There are also important differences across countries in the perceived level of risk of AI-generated disinformation, and across sectors in citizens’ trust in positive uses of generative AI technologies.
- What is the current maturity of technical solutions against AI-generated disinformation? A wide range of technologies and tools is currently being developed to counter (generative AI) disinformation. These range from open-source tools such as the Fake News Debunker (aka the verification plugin developed by the EC co-funded projects InVID, WeVerify and veraAI) to proprietary tools and technologies from big tech companies, and include capabilities such as synthetic media detection, keyframe extraction, reverse image and video search, and content watermarking, among others (a minimal sketch of one such capability follows this list). Experts using these tools report that, even though the tools are an essential line of defense against disinformation, they still face several issues, including limited reliability and difficulty in trusting their outputs (e.g. a lack of explanations). It was recognized that there is no single solution that can act as a ‘silver bullet’ and that this is a constant battle between new generative AI risks and new defensive mechanisms.
- Is there sufficient regulation in place to address the challenge? European regulation is advancing quickly in an effort to keep pace with the rapid developments in the field. Provisions in the DSA and the AI Act appear well designed and could become a valuable tool for authorities to address several of the risks arising from generative AI technologies. However, applying these regulations in national contexts and enforcing them is a daunting task that requires comprehensive resources, processes and tools to succeed. Therefore, instead of new regulation, the focus should now be placed on assessing the existing regulation and ensuring its successful implementation.
- Can generative AI be leveraged in other creative ways to counter disinformation? Beyond using AI for disinformation detection, which is recognized as an essential need, there are opportunities to leverage AI and generative AI in other creative ways against disinformation. These include, for instance, automating data extraction and analysis pipelines to support auditing and transparency reporting (the second sketch after this list illustrates the idea), building new tools that authorities need to monitor platforms’ compliance with new regulation, building new support tools for media professionals, and exploring the use of AI to stimulate citizens’ critical thinking and awareness.
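To make one of the capabilities listed above concrete, here is a minimal keyframe-extraction sketch in Python using OpenCV. It is only an illustration, not the method used by the Fake News Debunker or any other tool named above: the histogram-comparison approach, the 0.4 threshold, and the input file `clip.mp4` are assumptions chosen for brevity.

```python
import cv2  # pip install opencv-python

def extract_keyframes(video_path, threshold=0.4):
    """Yield (frame_index, frame) pairs where the colour histogram
    changes sharply relative to the previously kept frame."""
    cap = cv2.VideoCapture(video_path)
    prev_hist = None
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Hue-saturation histogram is a cheap summary of the frame's colours.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        # Bhattacharyya distance: 0 = identical, 1 = maximally different.
        if prev_hist is None or cv2.compareHist(
            prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA
        ) > threshold:
            prev_hist = hist
            yield index, frame
        index += 1
    cap.release()

if __name__ == "__main__":
    # "clip.mp4" is a hypothetical input file.
    for i, frame in extract_keyframes("clip.mp4"):
        cv2.imwrite(f"keyframe_{i:05d}.jpg", frame)
```

Detecting abrupt colour-distribution changes is a simple proxy for shot boundaries; production verification tools typically combine several cues and tune thresholds per content type.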
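Similarly, the following sketch shows the general shape of an automated data-extraction pipeline for transparency reporting, of the kind mentioned in the last point above. The endpoint URL, pagination scheme, and field names (`created_at`, `ai_generated`) are hypothetical assumptions; no specific platform API was discussed at the event.

```python
import csv
from collections import Counter

import requests  # pip install requests

# Hypothetical transparency endpoint; real platforms expose different APIs.
TRANSPARENCY_URL = "https://platform.example/api/v1/labelled-media"

def fetch_labelled_media(url, page_limit=10):
    """Collect records from a (hypothetical) paginated transparency API."""
    records = []
    for page in range(1, page_limit + 1):
        resp = requests.get(url, params={"page": page}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("items", [])
        if not batch:
            break
        records.extend(batch)
    return records

def summarise_by_day(records):
    """Count AI-labelled items per day, assuming each record carries an ISO
    'created_at' timestamp and a boolean 'ai_generated' flag (assumptions)."""
    counts = Counter(
        r["created_at"][:10] for r in records if r.get("ai_generated")
    )
    return sorted(counts.items())

def write_report(rows, path="ai_label_report.csv"):
    """Write the per-day counts to a CSV file for auditors to inspect."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["date", "ai_labelled_items"])
        writer.writerows(rows)

if __name__ == "__main__":
    write_report(summarise_by_day(fetch_labelled_media(TRANSPARENCY_URL)))
```

The value of such pipelines lies less in any single script than in making collection, aggregation and reporting repeatable, so that compliance monitoring does not depend on one-off manual effort.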
Much research has been done in the field of countering disinformation, especially on how generative AI poses new and constantly evolving challenges. Tackling these challenges requires a collaborative approach involving a variety of sectors and stakeholders, from regulators to technology companies to civil society. The European projects that have joined forces in the ‘Meet the Future of AI’ context are ready to do their part in tackling these challenges. We invite others to join us in the work of defending democracies and the values of free and pluralistic societies.
Author: Symeon Papadopoulos (Centre for Research and Technology Hellas)
Editors: Jochen Spangenberg (Deutsche Welle), Gina Neff (University of Cambridge)