
EDMO Scientific Conference 2024

Date
26 February 2024 - 27 February 2024 - CET
Location
University of Amsterdam, The Netherlands

Navigating the Complex Landscape of Disinformation 

Join the EDMO Scientific Conference on Disinformation, an interdisciplinary gathering of minds interested in unraveling the intricacies of disinformation in today’s rapidly evolving information ecosystem. Hosted by the University of Amsterdam, #EDMOsc24 aims to facilitate a comprehensive dialogue on the challenges, impacts, and strategies for addressing disinformation across various fields.

The EDMO Scientific Conference is a key opportunity to foster coordination among EU researchers, encourage comparative research, identify potential research gaps, and ultimately, build a sense of community among EU academics working on disinformation.  

If you would like to watch a live-stream of the conference talks, you may sign up below. You will receive information about the online viewing of the conference via email.

Registration for this event is closed.

All tickets for in-person participation have been claimed.  

Programme

Day 1 – Monday, 26 February (12:00 – 21:15 CET)

12:00 – 12:45

Room: C10.20 

Lunch Buffet & Registration
12:45 – 13:00

Room: C10.20

Welcome and Introduction

Claes de Vreese | EDMO Director for Research, University of Amsterdam & EDMO

13:00 – 13:45

Room: C10.20

Opening Keynote

Chair: Claes de Vreese, University of Amsterdam & EDMO

Sander van der Linden | Professor of Social Psychology in Society, University of Cambridge

14:00 – 15:15

Room: C10.20

Parallel Sessions 1 & 2

Session 1: Truth (Mis)Perceptions in The Face of Disinformation: Cross-Cultural Perspectives

Chair: Patrick van Erkel, University of Amsterdam

Decades after the scientific debate about the anthropogenic causes of climate change was settled, climate disinformation still challenges the scientific evidence in public discourse. Here, we investigate how a conservative political ideology is associated with impaired discrimination of accurate information about climate change and with susceptibility to climate disinformation across twelve countries (USA, Canada, UK, Ireland, Australia, New Zealand, Singapore, Philippines, India, Pakistan, Nigeria, and South Africa). We analyzed a secondary dataset of participants (N=1721) from these twelve countries randomly allocated to a passive control condition or to consecutively receive twenty real climate disinformation statements. All participants then took part in a truth discernment task, where they had to discriminate between true and false climate-related statements that either supported or called for delaying climate action. We show that conservative political ideology selectively impairs truth discrimination of false statements delaying climate action (e.g., “Carbon Dioxide is Not a Pollutant, but a Benefit to the Environment.”), but does not affect true statements supporting climate action (e.g., “Rising seas could displace hundreds of millions of people by the end of the century.”). Instead, reading climate disinformation selectively impairs conservatives’ truth discrimination of true statements supporting climate action, but does not increase belief in false statements delaying climate action. These findings suggest that people do not engage in motivated reasoning, but rather seem to respond expressively according to their partisan position. We discuss the implications of these findings for designing evidence-based interventions to fight climate disinformation in a warming world.

Authors: Tobia Spampatti1, Ulf Hahnel2, Tobias Brosch1

1University of Geneva

2University of Basel

Information disorders present a formidable challenge to the fundamental principles of liberal democracy, as they distort the process of deliberation among citizens. In environments where individuals are exposed to divergent versions of “truth” and fail to reach a consensus on even basic facts, the consequence is a growing intolerance towards differing opinions. This pervasive divergence in beliefs fosters the creation of echo chambers within polarized societies, further exacerbating social and political divisions. From a comparative perspective, Turkey offers an intriguing case study for understanding the dynamics of polarized politics and its profound impact on the information ecosystem. This research article draws on a survey conducted in December 2020, featuring a sample size of 1,629 respondents representing the Turkish population. The primary objective of this study is to delve into the decision-making processes of ordinary citizens when it comes to sharing news.
Our investigation relies on a conjoint experiment incorporating key attributes: the source (pro-government, anti-government, and neutral), the topic (political, COVID-19, and neutral), and the veracity of the news (true or fake). The study findings reveal a compelling pattern: the propensity to share fake news is notably lower when compared to sharing true news. However, this distinction between true and fake news dissipates when the source aligns with one’s political orientation (pro or anti-government) and when the news pertains to political matters. A micro-level analysis of the data provides us with valuable insights into the mediating role of confidence in the government within the context of Turkey’s polarized political landscape. These findings shed light on the intricate interplay between information consumption, political beliefs, and societal polarization. Ultimately, our study contributes to a deeper understanding of the multifaceted challenges posed by information disorders and their impact on democratic discourse, particularly within contexts marked by political polarization.

Authors: Emre Erdogan1, Pinar Uyan-Semerci1

1Istanbul Bilgi University

A growing body of research on the reception of dis-/misinformation (Wagner & Boczkowski, 2019) demonstrates the importance of gaining a better understanding of how people make sense of the actors, content, processes and debates of dis-/misinformation. Building upon the concept of “folk theory” – understood as the articulation of experiences, beliefs, suppositions and/or simplifications through which lay people generalise a certain view of the world (Nielsen, 2016) – this paper focuses on how individuals make sense of the nexus between dis-/misinformation and democracy – i.e. “info-democratic disorders” – with a specific focus on perceived cultural specificities or differences. To do so, our analysis stems from 30 semi-structured interviews with social media users who engage actively, and in various ways, with dis-/misinformation on social media. The sample spans three communities – French-speaking Belgium, Flemish-speaking Belgium, and Luxembourg – and the political and ideological spectrum. First, a diversity of folk theories and sub-theories of info-democratic disorders are identified, ranging from (among others) conceptualisations of mainstream media doing their job and only making minor mistakes, to different critiques of “poor journalism” and the claim that “official”/legacy media are participating in conspiracies led by economic and political elites to keep people in fear. Then, the study provides insights into linguistic and cultural variations of these folk theories. In doing so, it sheds light on how culturally situated beliefs inform individuals’ theorisations of info-democratic disorders and therefore provides a nuanced picture of the complex and multifaceted landscape of dis-/misinformation in the three communities.

Authors: Victor Wiard1, Geoffroy Patriarche1, Daphné Chapellier1, Thomas Jacobs1

1UCLouvain

Communicating the scientific consensus that human-caused climate change is real increases climate change beliefs, worry, and support for public action in the US. In this preregistered experiment, we tested two scientific consensus messages, a classic message on the reality of climate change and an updated message additionally emphasizing scientific agreement that climate change is a crisis. Across online convenience samples from 27 countries (N = 10,527), the classic message substantially reduces misperceptions (d = 0.47) and slightly increases climate change beliefs (d = 0.05-0.09) and worry (d = 0.04), but not support for public action directly. The updated message is equally effective but provides no added value. Both messages are more effective for audiences with lower message familiarity and higher misperceptions, including those with lower trust in climate scientists and right-leaning ideologies. Overall, scientific consensus messaging is an effective, non-polarizing tool for changing misperceptions, beliefs, and worry across different audiences.

Authors: Bojana Veckalov1, Sandra J. Geiger2, Bastiaan T. Rutjens1, Mathew P. White2, Frenk van Harreveld1, Frantisek Bartos1, Kai Ruggeri3, Sander van der Linden4

1University of Amsterdam

2University of Vienna

3Columbia University

4University of Cambridge

How good are people at judging the veracity of news? We conducted a systematic literature review and pre-registered meta-analysis of 232 effect sizes from 53 experimental articles evaluating accuracy ratings of true and false news (N = 104,064 participants from 30 countries across 6 continents). We found that people rated true news as more accurate than false news (Cohen’s d = 1.26, [1.13, 1.39]) and were better at rating false news as false than at rating true news as true (Cohen’s d = 0.35, [0.25, 0.44]). In other words, participants were able to discern true from false news, and erred on the side of skepticism rather than credulity. The political concordance of the news had no effect on discernment, but participants were more skeptical of politically discordant news. These findings lend support to crowdsourced fact-checking initiatives, and suggest that, to improve discernment, there is more room to increase the acceptance of true news than to reduce the acceptance of false news.
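
To make the two quantities above concrete, the sketch below computes a discernment score and a skepticism-bias score from per-participant accuracy ratings, using a paired-samples effect size (one common variant, not necessarily the meta-analytic procedure used by the authors); the file name and column names are hypothetical.

```python
import numpy as np
import pandas as pd

def paired_cohens_d(x, y):
    """Paired-samples effect size (d_z): mean of the differences over their SD."""
    diff = np.asarray(x) - np.asarray(y)
    return diff.mean() / diff.std(ddof=1)

# Hypothetical long-format data: one accuracy rating in [0, 1] per participant and item,
# with a 'veracity' column coded as "true" or "false".
ratings = pd.read_csv("ratings.csv")  # columns: participant, veracity, rating
per_person = ratings.pivot_table(index="participant", columns="veracity", values="rating")

# Discernment: true news rated as more accurate than false news.
discernment = paired_cohens_d(per_person["true"], per_person["false"])

# Skepticism bias: correctly doubting false news more than correctly accepting true news.
skepticism = paired_cohens_d(1 - per_person["false"], per_person["true"])

print(f"discernment d = {discernment:.2f}, skepticism bias d = {skepticism:.2f}")
```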

Authors: Sacha Altay1, Jan Pfänder2

1University of Zurich

2Institut Jean Nicod

Room: E0.22 

Session 2: Fact-Checking Frontiers: Challenges and Insights from the Fight Against Disinformation 

Chair: Stephan Mündges, TU Dortmund University

One of the most important parts of fact-checking, if not the most important, is the use of good and reliable sources. Professional fact-checkers adhere to standards such as the International Fact-Checking Network’s (IFCN) Code of Principles, which mandates source transparency. In the past, researchers have emphasised that there is a tendency in journalism to rely too heavily on official sources (Carlson, 2017; Fishman, 1980). This could be particularly problematic in the context of fact-checking, as it stands to reason that someone who distrusts the quality of the media and government is unlikely to be convinced by a fact-check whose result is based on information from them. However, to the best of our knowledge, there is no research that focuses on the use of fact-checking sources in the context of different disinformation topics. This paper therefore examines the use of sources by four major German-language fact-checking organisations, AFP, APA, Correctiv and dpa, over a five-year period covering the whole of the Covid-19 pandemic and the start of the Russian-Ukrainian war. As all these organisations are verified signatories of the IFCN Code of Principles, they all link to the sources they use in their articles. Our methodology involves scraping the fact-checks of these organisations, filtering out links directed to the claim or internal pages, and categorising the remaining domains into distinct categories such as quality media, original sources, and government sources. In the next step, we use an LDA-based model that has identified twelve different disinformation topics to determine whether there are variations in the use of particular types of sources between topics. This allows, for the first time, a topic-specific evaluation of the type and range of sources used in journalistic fact-checks.
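
As an illustration of the link-filtering and domain-categorisation step described above, a minimal sketch might look like the following; the category mapping, file name and domains are placeholders, not the project’s actual taxonomy or data.

```python
import json
from collections import Counter
from urllib.parse import urlparse

# Illustrative (not the project's) mapping of source domains to categories.
DOMAIN_CATEGORIES = {
    "tagesschau.de": "quality media",
    "bundesregierung.de": "government",
    "who.int": "official organisation",
}
OWN_DOMAINS = {"correctiv.org", "dpa.com", "apa.at", "factcheck.afp.com"}

def categorise(url):
    """Return a source category for an external link, or None for internal links."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in OWN_DOMAINS:
        return None
    return DOMAIN_CATEGORIES.get(domain, "other")

# fact_checks.json (hypothetical): [{"organisation": ..., "topic": ..., "links": [...]}, ...]
with open("fact_checks.json") as f:
    fact_checks = json.load(f)

counts_per_topic = {}
for fc in fact_checks:
    counter = counts_per_topic.setdefault(fc["topic"], Counter())
    for link in fc["links"]:
        category = categorise(link)
        if category is not None:
            counter[category] += 1

for topic, counter in counts_per_topic.items():
    print(topic, counter.most_common(3))
```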

Authors: Nico Hornig1, Jonas Rieger1, Jonathan Flossdorf1, Henrik Müller1, Stephan Mündges1, Carsten Jentsch1, Jörg Rahnenführer1, Christina Elmer1

1TU Dortmund University

Fact-checking has emerged as a response to the vast amounts of unverified information that circulates within the information environment. This prominence is reflected in research studies as well, warranting an examination of which aspects to date have been studied in the context of fact-checking and what the main findings as well as research gaps are. We consulted Scopus, Web of Science, and Ebsco databases for bibliographical data on articles published between January 2010 and April 2023. The search keywords included “fact-checking” or “debunking,” “prebunking” and “disinformation,” “misinformation,” “news verification,” “tools,” “fake news.” The final sample consisted of 673 articles. The analyzed studies cover diverse geographic areas from North and Latin America to Southeast Asia and various disciplines (such as media, communication, medical, political, computer, and information science), illustrating the breadth of attention paid to this concept. Researchers have focused on the infrastructure, practices, genres, and perception of fact-checking, audience reception and testing tools. Fact-checking can refer both to the verification of information and to a journalism genre. The general consensus appears to be that journalistic fact-checking can be effective in reducing false beliefs among the audience. However, many obstacles exist that limit the performance of fact-checking for this purpose. Automation of various parts of the information verification process is a popular topic. However, there remains a gap in understanding who would use these automated tools and for what purposes. A minority of studies address fact-checking in the context of media literacy, typically as part of training or as a necessary survival skill in the technology-rich information environment. Fact-checking websites archive false narratives for researchers in a specific region or on a particular topic. Fact-check articles or toolboxes have been utilized in media literacy research to teach information verification.

Authors: Jānis Buholcs1, Maia Klaassen2, Krista Lepik2, Marju Himma-Kadakas2, Auksė Balčytienė3, Sten Torpan2

1Vidzeme University of Applied Sciences

2University of Tartu

3Vytautas Magnus University

War propaganda serves political interests by highlighting and omitting specific information, favouring particular sources and viewpoints, and utilising written and visual elements to forge narratives (Boyd-Barrett, 2016). The objective is to blur the distinction between fact and fiction, whether true or false (Arendt, 1951). For fact-checkers, the challenge is to disentangle the truth from lies by providing a critical and nuanced approach to assess public claims’ veracity or debunk stories spread on social media (e.g., Amazeen, 2015; Mena, 2019). The case of the Russian-Ukrainian war illustrates this purpose well (e.g., Khaldarova & Pantti, 2016; Mejias & Vokuev, 2017).

Our research explores the challenges faced by fact-checkers in the context of the Russian-Ukrainian war through two key research questions: (1) What difficulties do fact-checkers encounter? (2) Do they have sufficient resources and tools for practical work? It draws on a mixed-methods approach, including (1) seven preliminary semi-structured interviews with fact-checkers in Western and Northern Europe, (2) a quantitative online survey conducted during the Global Fact 9 Conference in Oslo in June 2022, gathering responses from 85 fact-checkers across 46 countries, and (3) twenty structured interviews primarily with European-based fact-checkers. Results show that fact-checkers distinguish between soft Ukrainian war propaganda and more aggressive Russian war propaganda. Nonetheless, their primary hurdles revolve around obtaining reliable sources from both sides. Language barriers and physical distance are other challenges, exacerbated by the difficulty of distinguishing non-manipulated content used in manipulated contexts. Geographical proximity to the conflict zone improves overall understanding of the situation. Because their fact-checks mainly concern images and videos, awareness of technical manipulations among fact-checkers has grown. Moreover, this regular exposure to violent audiovisual content has had psychological consequences. Fact-checkers also underlined their need for more time to learn and keep up with online fact-checking tools.

Authors: Laurence Dierickx1, Carl-Gustav Lindén1

1University of Bergen

Although fact-checking is often hailed as key to upholding democracy and countering deliberate, malicious information operations (Luengo & García-Marín, 2020) in both democratic and authoritarian regimes, such corrective measures can also be misappropriated and abused by malicious actors to hinder democracy (Kajimoto, 2023). Precisely, the discursive power of fact-checkers has put these professionals in a rather scabrous position – they not only ‘struggle over the meaning of truth’ (ibid, p. 406); the products of fact-checking are also manipulated to cast doubt over truth and falsehood by denying, attacking and labelling fact-checkers, as well as by spreading verified factual materials to cause harm to others (i.e., malinformation). While studies on disinformation and misinformation have gained academic attention since the 2016 US Presidential Election, there is still a paucity of empirical research on the topic of malinformation, particularly in relation to fact-checking practices situated in East Asia, a unique region containing a few of the most Internet- and social media-penetrated societies in the world. Against this backdrop, this study explores why fact-checking may fail in societies where polarization prevails, namely Hong Kong and Taiwan, both with rapidly growing fact-checking industries. Drawing on interviews with fact-checkers at various International Fact-Checking Network (IFCN) member organizations, we examine how fact-checking can be weaponized and manipulated as a means to obtain discursive power in the political arena in conjunction with crucial social movements. With our results, we present the novel theory of Weaponized Fact-Checking (WFC), outlining two types of weaponization: (1) Polarized Weaponization and (2) Authoritarian Weaponization. Our framework offers a unique typology of WFC, each type with different sources, targets, methods and goals. All in all, we offer a speculation on the future of the fact-checking industries in Hong Kong, Taiwan and beyond, given the prominence of WFC globally.

Authors: Wang Ngai Yeung1, LIM Kok Wai Benny2

1Oxford Internet Institute

2The Chinese University of Hong Kong

The Ukraine war was accompanied by an intensification of information warfare, which provides a new context for analyzing the effects of fact-checking in the digital media landscape during wars. Fact-checking organizations’ efforts to counter disinformation include addressing information spread on social media, which is increasingly used in times of political instability (Altay et al., 2022; Iosifidis and Nicoli, 2021; Siwakoti et al., 2021). Our study addresses two research questions: 1) Which fact-checked stories were published by European fact-checking organizations at the outset of the war? Here we focus on whether activities differ by country, type of fact-checking (ingroup vs. outgroup fact-checking) or topic as well as over time, and on whether fact-checking activities transgress borders. 2) How did the respective Twitter populations in Denmark, Finland, Germany, Italy, Norway, Poland and Sweden engage with and react emotionally to these activities? Sentiments are analyzed using VADER.

We base our study on theoretical arguments from the tradition of Social Identity Theory (SIT) (e.g. Brown, 2000) and have chosen a comparative approach due to differences in media systems, historical and cultural contexts (e.g. Fomina, 2016; Hallin and Mancini, 2004; Syvertsen et al., 2014) and in approaches to fact-checking (e.g. Ferracioli et al., 2022). Studies so far show different public reactions to the war, but whether this also applies to fact-checking remains a blind spot. First findings indicate that fact-checkers published more ingroup stories, with small differences by country and topic. We see country differences in topics and a limited extent of fact-checking transgressing borders. Furthermore, Twitter engagement is higher with ingroup fact-checking. Differences in engagement by country show, for example, limited engagement of Nordic Twitter communities. We also find differences in reactions towards ingroup and outgroup fact-checking in Germany and Italy, and across countries, with a tendency for outgroup tweets to be more negative. In contrast to our expectations, all reactions tend to be negative in valence. Implications of these findings will be discussed, also in light of studies showing that negative content spreads faster and further on social media (e.g. Vosoughi et al., 2018).
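
For readers unfamiliar with VADER, the sentiment step mentioned above boils down to something like the following sketch; the example tweets are invented, not data from the study.

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

tweets = [
    "Great to see this claim debunked so quickly.",
    "Another so-called fact-check pushing an agenda.",
]

for text in tweets:
    scores = analyzer.polarity_scores(text)
    # 'compound' ranges from -1 (most negative) to +1 (most positive);
    # a common convention treats |compound| < 0.05 as neutral.
    print(scores["compound"], text)
```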

Authors: Jessica Gabriele Walter1, Marina Charquero Ballester1, Ida Anthonj Nissen1, Anja Bechmann1

1Aarhus University

15:15 – 15:45

Room: C10.20

Coffee Break
15:45 – 17:00

Room: C10.20

Parallel Sessions 3 & 4

Session 3: Psychology of Truth: Unveiling Biases, Beliefs, and Misinformation Susceptibility 

Chair: Keith Peter Kiely, Sofia University

There has been rising concern about individuals who base their attitudes on factually false information (Kuklinski et al., 2000; Rojecki & Meraz, 2016). Susceptibility to disinformation is often measured by the number of correct answers in survey-based trials (e.g., Leyva & Beckett, 2020). These accuracy measures confound two very different explanations for answering a certain way. Good performance can stem from the participants’ ability to distinguish false from true items, e.g., due to knowledge on the subject. However, participants may also (not) believe specific pieces of information because they are ideologically biased. Signal detection theory (SDT) makes it possible to dissect these accounts by calculating two separate measures for cognitive skill (sensitivity) and biased judgment (response bias). Psychology scholars and social scientists have only recently begun to incorporate SDT in studies on disinformation (e.g., Batailler et al., 2021). We calculate SDT measures as described by Stanislaw and Todorov (1999) from original online survey data collected in 19 countries (N = 19,000) during the early stage of the Russian war in Ukraine in 2022. Sensitivity describes the ability to recognize true and false items, while bias measures the tendency to agree with either pro- or anti-Russia leaning statements, irrespective of their truthfulness. We find that different predictors come into play at the individual and country level. For example, the lower accuracy scores of Russia-favoring individuals can be explained by their significantly stronger response bias. The better performance of respondents with a higher need for cognition can be attributed to higher levels of sensitivity. At the country level, press freedom does not seem to matter for accuracy ratings; it is, however, associated with higher levels of sensitivity, whereas response bias is not affected. We believe applying SDT to a highly relevant case will deepen the understanding of individuals’ and countries’ susceptibility and resilience to mis- and disinformation.
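
A minimal sketch of the two signal detection measures referenced above, following the standard formulas reported in Stanislaw and Todorov (1999); the counts are invented for illustration and do not come from the survey.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' (sensitivity) and c (response bias) from a 2x2 outcome table.
    A log-linear correction (+0.5 / +1) avoids infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Illustrative respondent: 8 of 10 true items endorsed, 3 of 10 false items endorsed.
print(sdt_measures(hits=8, misses=2, false_alarms=3, correct_rejections=7))
```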

Authors: Luisa Gehle1, Christian Schemer1, Michael Hameleers2, Marina Tulin2, Claes de Vreese2

1University of Mainz

2University of Amsterdam

Past experimental research on how people process and validate new information has shown that people are truth-biased, i.e. prone to believing misinformation, even if it is explicitly tagged as false (Pantazi et al., 2018; 2020). If citizens have a similar default tendency to believe misinformation they happen to consume online, this would suggest very dire effects of misinformation on citizens and societies. In two studies we assessed whether people tend to believe fact-checked fake and true news. In Study 1 (N = 89 Belgian students) participants expressed their belief in fact-checked true and false news that came either from reliable sources (i.e. a reputable news channel) or from unreliable sources (e.g. an anonymous social media user). Unlike past research, participants were overall false-biased (i.e. more likely to disbelieve true news than to believe false news). This effect was moderated by source trustworthiness: participants were more likely to believe false and disbelieve true news when these came from reliable and unreliable sources, respectively. In Study 2 (N = 300 US Prolific users; 150 Republicans, 150 Democrats) participants expressed their belief in fact-checked true and fake news that were either congruent or incongruent with their ideology. Again, participants were overall false-biased, yet this effect was fully moderated by ideological congruence: participants were false-biased (i.e. tended to disbelieve) towards ideologically incongruent news and truth-biased (i.e. tended to believe) towards ideologically congruent news. This effect of ideological (in)congruence was further moderated by participants’ analytical thinking: high analytical thinkers were more false-biased for incongruent statements but were not truth-biased for congruent statements. This pattern suggests that the high accuracy of analytical thinkers in fake news detection (Pennycook & Rand, 2018) is selectively due to disbelief of fake news, not to increased belief in true news.

Authors: Myrto Pantazi1, Habiba Bouali2, Olivier Klein2, Régine Kolinsky2, Felicitas Flade3

1University of Amsterdam

2Université libre de Bruxelles

3University of Mainz

This study introduces the “Truth-Plausibility Model” to explore how individuals process political misinformation. Unlike most studies, we differentiate between “truth evaluations” and “plausibility evaluations,” arguing that these can be distinctly measured and conceptually separated. Our findings reveal that truth judgments rely on stringent evidence criteria and objective standards, whereas plausibility judgments are influenced by subjective beliefs and biases.

We show preliminary findings from three pre-registered studies providing evidence supporting our model. The first study confirms that individuals make separate assessments of truth and plausibility, recognizing that misinformation can be perceived as either plausible or implausible. The second study finds that people are more likely to share news they deem false yet plausible compared to news considered both false and implausible. The third study identifies political alignment as a key factor in plausibility assessments, overshadowing objective truth. Our “Truth-Plausibility Model” offers new insights into the complexities of engaging with fake news and highlights the importance of understanding these nuances for developing effective anti-disinformation measures. We conclude by discussing the implications of our findings in the context of belief in the post-truth era.

Authors: Andrea De Angelis1, Moreno Mancosu2, Federico Vegetti3

1University of Zurich

2Collegio Carlo Alberto

3University of Torino

Prior misinformation research often lacks comparisons with the processing of true information and specifically focuses on the dangers of right-wing misinformation. However, it is not yet clear whether rightists are generally more susceptible to misinformation (e.g., Arendt et al., 2019) or whether vulnerability increases with extremity on both sides of the political spectrum. Against this background, the goal of this research is to provide a more comprehensive view on the factors that promote or inhibit engagement with online (mis)information across the political spectrum. Based on signal detection theory, we differentiate between the ability to discern true from false (discrimination sensitivity) and the tendency to prefer belief-congruent messages (confirmation bias) and reject belief-incongruent messages (disconfirmation bias), regardless of veracity (Batailler et al., 2022).

As pre-registered hypotheses, we assumed that political extremity, right-wing orientation, dark triad personality traits and the use of social networking sites and instant messengers decrease discrimination sensitivity and increase partisan biases. As protection factors, we expected that cognitive abilities, media literacy, intellectual humility, and need for cognition increase discrimination sensitivity, with the latter two also reducing partisan bias. In an online experiment (N = 992), participants rated 16 news posts (true vs. false, supporting left-wing vs. right-wing views) with regard to perceived credibility, likelihood of further reading and sharing. On the basis of hits (true news regarded as true) and false alarms (false news regarded as true) among belief-congruent and incongruent articles, discrimination sensitivities and (dis)confirmation biases for perceived credibility were calculated. Results identified dark triad traits and social media usage as risk factors that were connected to lower discrimination sensitivity. With regard to political orientation, bias occurred on both sides of the political spectrum: Rightists exhibited a stronger confirmation bias, while leftists were more likely to reject belief-incongruent messages even if they were true (higher disconfirmation bias).
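
One possible way to separate the two components described above is to compute d' and the response criterion separately for belief-congruent and belief-incongruent posts; the sketch below uses invented rates and is not necessarily the authors’ exact scoring.

```python
from scipy.stats import norm

def d_prime_and_criterion(hit_rate, false_alarm_rate):
    """d' = z(H) - z(F); criterion c = -0.5 * (z(H) + z(F)). Lower c = more willing to accept."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    return z_h - z_f, -0.5 * (z_h + z_f)

# Invented rates of judging posts as credible (hits = true posts accepted,
# false alarms = false posts accepted), split by belief congruence.
d_con, c_con = d_prime_and_criterion(hit_rate=0.85, false_alarm_rate=0.40)
d_inc, c_inc = d_prime_and_criterion(hit_rate=0.60, false_alarm_rate=0.15)

discrimination_sensitivity = (d_con + d_inc) / 2
confirmation_bias = c_inc - c_con  # positive: laxer acceptance criterion for congruent posts
print(discrimination_sensitivity, confirmation_bias)
```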

Authors: Stephan Winter1, Sebastián Valenzuela2, Marcelo Santos3, Tobias Schreyer1, Lena Iwertowski1, Tobias Rothmund4

1University of Kaiserslautern-Landau (RPTU)

2Pontificia Universidad Catolica de Chile

3Universidad Diego Portales

4Friedrich-Schiller-University Jena

This contribution tackles the cognitive biases at play in news-making and fact-checking. It is well known that the advent of the Networked Society has radically changed the way we access (mis)information, opening doors for cognitive shortcuts to cope with the fast-paced proliferation of information. As a result, journalists might unintentionally craft misleading news or publish an inaccurate fact-check, undermining audiences’ trust in their gatekeeping process. But which cognitive biases affect news-making and fact-checking? To what extent are they dependent on platforms’ affordances or intercultural contexts? We answer these questions through a three-tiered approach. First, we carry out a large-scale corpus analysis of scholarly articles about ‘bias’ in news-making/fact-checking through the Scopus API (3,407 retrieved abstracts). We combine the emerging biases with those traditionally identified for ISR (Information Seeking/Retrieval), to account for the challenges imposed by the digital, obtaining a cheat-sheet of 11 biases. We then evaluate its relevance through 3 rounds of face-to-face focus groups (6 participants each, 2 hrs), involving practitioners from different nations and organizations. The focus groups consist of a simulation of a fact-checking process and a post-hoc discussion. The fact-checking simulation is divided into 3 phases: news selection among 8 fictitious news items, evidence retrieval, and fact-check report writing. The post-hoc discussion focuses on rating the importance of the biases in the cheat-sheet. The preliminary analysis of the results shows that (i) the type of journalism, rather than cultural affiliation, is a determining factor, and (ii) confirmation bias and availability bias are perceived as the most frequent. The results for (ii) are being corroborated through a quantitative survey. Within the framework of the EMIF LATIF project (https://latifproject.eu/), the results will be used to inform the design of a digital tool to help journalists debias their decision-making processes.

Authors: Elena Musi1, Lorenzo Federico2, Mariavittoria Masotina1, Simeon Yates1

1University of Liverpool

2Luiss Guido Carli University

Room: E0.22 

Session 4: Innovative Methodologies: Pioneering Approaches in Disinformation Research

Chair: Katjana Gattermann, University of Amsterdam

A commonly understood counter to mis- and disinformation spread on digital media is the availability of reliable information from high-quality journalistic news sources. These journalistic news sources are said to play a role in prebunking and debunking false or misleading information. However, in August 2023, Meta began to limit the visibility of news content for Canadian users on two of the platforms most commonly used for political information gathering in Canada: Facebook and Instagram. In this paper, we evaluate two possible consequences of the removal of journalist-produced content on the overall Canadian information ecosystem. We ask: 1) does the overall information quality of political discussions on Meta platforms decrease? And 2) is this shift in information quality associated with a decreased volume of activity on Meta platforms and (a corresponding) increase in volume on other social media? To respond to these questions, we collected a large-scale multi-platform dataset of Canadian political content from Facebook, Instagram, YouTube, and TikTok. Our initial evaluation indicates a significant drop-off in external linking on Meta platforms, resulting in more insular and less informed conversation. We also observe a small rise in linking to known disinformation-disseminating websites that were unaffected by the ban. We do not witness any increase in political activity on platforms that continue to allow linking, suggesting that citizens are simply accepting a lower volume of news exposure. We will continue to collect data until the end of the calendar year as we observe behaviours still changing in response to this major platform decision. The reduced availability of journalism in social media spaces is likely to contribute to a less informed citizenry and a less responsive democracy.

Author: Taylor Owen1

1McGill University

This study focused on exploring how individuals discern and negotiate contending truths in online narratives, specifically in the context of the Russia-Ukraine war, through Hungarian comments (N = 1,203) on Facebook about the Bucha massacre. Using Discourse Historical Analysis (DHA; Wodak, 2015), the study examined social-epistemic rhetoric online, uncovering four distinct epistemic patterns: polarization, historical, epistemic authority, and agency. The polarization pattern reflected binary viewpoints, the historical pattern involved collective memories influencing perceptions, the epistemic authority pattern showed reliance on external authorities or personal experiences, and the agency pattern revealed the tension between confidence and doubt in interpreting narratives. These patterns provide insights into how commenters, who represent a third party in a war, navigate complex geopolitical narratives. The study advances our understanding of online epistemologies and emphasizes the importance of discursive research in deciphering misinformation dynamics during international conflicts. These patterns, while offering an intricate mapping of Hungarian online interaction strategies amidst genuine and misleading narratives, also lay the groundwork for further computational social science methods. During the conference presentation, the potential of integrating qualitative insights from DHA with computational methodologies for large-scale analyses will be explored. This interdisciplinary approach aims to deepen our understanding of online narratives and the complex dynamics they foster. Given Hungary’s distinctive stance within the EU, marked by a proliferation of disinformation and a notable pro-Russian sentiment more pronounced than in most EU countries, this research is particularly timely. Such a context underscores the importance of examining how digital discourse is shaped and navigated within environments rich in disinformation and marked by sharp political polarization. Through the prospective fusion of qualitative and computational analyses, this research delivers pivotal insights for platforms, researchers, and policymakers, enhancing our grasp of digital disinformation’s multifaceted landscape.

Author: Zea Szebeni1

1University of Helsinki

While fact-checkers measure the degree of a claim’s veracity, data scientists leverage APIs and web scraping techniques to measure the level of attention paid to that claim as it appears in various places online. Our presentation will discuss this latter aspect in the study of disinformation, specifically within the context of De Facto, the French EDMO hub. In addition to a demonstration of our automated claim-enrichment workflow, we will showcase several analyses made possible thanks to the data collection and data modeling tools developed for De Facto. During the course of the project, we leveraged, enhanced, and developed a suite of open-source tools that both collect metadata about claims’ content and propagation on social media (Plique et al., 2019; Christensen, 2023b) as well as restructure that information in the familiar ClaimReview schema widely used by the fact-checker community (Christensen, 2023c). It is this combination of data-mining and data-modeling that bridges the gap between the copyrighted work of fact-checkers and the open-source work of data scientists, lending greater transparency to the study of disinformation. We will demonstrate an open-source, automated workflow that takes in an RSS stream of fact-checked claims and outputs a version enriched with metadata from a variety of social media platforms. Additionally, the workflow produces easy-to-read spreadsheets that facilitate further analysis of this information, examples of which will also be presented. Finally, we will present some of the challenges to structuring social-media data in the more public-facing, accessible ClaimReview schema. Some of our solutions to these challenges, which impact many databases of fact-checks, include the invention of new schema properties and types, on which we welcome input from the research and fact-checking community (Christensen, 2023a; Goupil et al., 2023).
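
To give a sense of what such an enrichment step produces, here is a simplified sketch that reads a fact-check RSS feed and wraps each item in the schema.org ClaimReview vocabulary; the feed URL, the rating value and the engagement field are placeholders, and the actual De Facto pipeline is considerably richer.

```python
# pip install feedparser
import json
import feedparser

FEED_URL = "https://example.org/factchecks/rss"  # placeholder feed

def to_claim_review(entry, engagement=None):
    """Map one RSS item onto a minimal schema.org ClaimReview record."""
    return {
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "url": entry.link,
        "datePublished": entry.get("published", ""),
        "claimReviewed": entry.title,
        "reviewRating": {"@type": "Rating", "alternateName": "False"},
        # Social-media engagement metadata collected elsewhere would be attached here,
        # e.g. as appearances of the reviewed claim.
        "itemReviewed": {"@type": "Claim", "appearance": engagement or []},
    }

feed = feedparser.parse(FEED_URL)
records = [to_claim_review(e) for e in feed.entries]
print(json.dumps(records[:1], indent=2, ensure_ascii=False))
```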

Author: Kelly Christensen1

1Sciences Po

The problem of misinformation in Southeastern Europe is significant (Cruz, 2021; Blanuša et al., 2022; Nelson, 2022) and it threatens trust in public institutions and the stability of democratic processes in the region. Nevertheless, research on misinformation in Southeastern Europe is limited and conducted mainly by fact-checking organizations and NGOs (Atlantic Council of Montenegro, 2020; DFC, 2021; Murić et al., 2022), thus lacking a deeper understanding of the dynamics of misinformation in this context. To address this research gap, a social network analysis (SNA) was conducted focusing on the misinformation networks surrounding the Russian-Ukrainian conflict from February 22, 2022 to September 30, 2023. The original sample used for this analysis was compiled from fact checks (N=373) published by the SEE Check network, which includes fact-checking organizations from Slovenia, Croatia, Bosnia and Herzegovina, Montenegro, and Serbia.
The analysis revealed clusters of groups formed around leading misinformation sources, characterized by a broadcast or star-shaped network structure, indicating a lack of closely connected communities. Within this network, misinformation was disseminated mainly by sources such as state media in the Bosnian Serb Republic (rtrs.tv), mainstream media in Serbia (mondo.rs, b92.rs), pro-Serb channels in Montenegro (Borba.me and in4s.net), Russian sources (vostok.rs), and pseudo-media platforms (paraf.hr, logicno.com, epoha.com.hr). A significant number of actors involved in spreading misinformation exhibited characteristics commonly associated with troll activity via fake accounts.

The study contributes to the understanding of the SEE misinformation networks and the main actors related to the Russian-Ukrainian war, and enables the development of methods for mapping these networks as a strategy to combat the spread of misinformation.
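
For illustration, a broadcast or star-shaped diffusion structure of the kind described above can be checked with standard network tooling; the edge list below is invented, not the SEE Check data.

```python
# pip install networkx
import networkx as nx

# Invented edges: (account that shared the claim, misinformation source it amplified)
edges = [
    ("account_a", "rtrs.tv"), ("account_b", "rtrs.tv"), ("account_c", "rtrs.tv"),
    ("account_d", "mondo.rs"), ("account_e", "mondo.rs"), ("account_f", "in4s.net"),
]
g = nx.DiGraph(edges)

# Hubs with high in-degree combined with low clustering are typical of
# broadcast/star-shaped diffusion rather than tightly knit communities.
hubs = sorted(g.in_degree, key=lambda x: x[1], reverse=True)[:3]
print("top sources:", hubs)
print("average clustering:", nx.average_clustering(g.to_undirected()))
```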

Authors: Mato Brautovic1, Romana John1, Sandra Buratović Maštrapa1

1University of Dubrovnik

Defining disinformation is essential to operationalize concrete measures to identify and remove harmful online content. In this article, we analyze the ways in which disinformation is defined at the European level, focusing on three levels of analysis: (1) a macro-level where the EU’s Digital Services Act and national legislations play a fundamental role in setting the legal conditions under which disinformation can be tackled; (2) a meso-level where co-regulatory frameworks, most importantly the Code of Practice on Disinformation, mediate between platforms and public institutions and between hard and soft law; (3) a micro-level where Very Large Online Platforms employ privacy policies that ultimately operationalize in practice the governance of disinformation online. In doing this, we analyze the definitions used by VLOPs such as Facebook, Instagram, Twitter/X, TikTok and YouTube. We show how mainstream platforms embrace a broad conceptualization of disinformation and operationalize it mainly through reducing content visibility (i.e., demotions), thereby avoiding removing disinformation and taking full responsibility for it. By analyzing these different yet intertwined levels, we also discuss the strategies that VLOPs deploy to tackle disinformation as well as the challenges and opportunities of defining disinformation and thus operationalizing its governance at the European level.

Authors: Urbano Reviglio1, Konrad Bleyer-Simon1

1European University Institute

17:15 – 18:30

Room: C10.20 

Parallel Sessions 5 & 6 & Match-Making Session 1

Session 5: Disinformation Dynamics: Dissecting Factors Influential to Spread and Reception Amongst Diverse Populations 

Chair: Elske van den Hoogen, University of Amsterdam

Disinformation has become one of the most relevant problems in European democracies. The open nature of digital platforms means that anyone can spread malicious content, launching apocalyptic messages to sow fear and division among citizens. In this sense, constant exposure to disinformation can have harmful effects, such as feeling confused about daily life issues. This research is exploratory in nature and aims to discover how false information has been received through mobile instant messaging services (MIMS), Facebook and Twitter, and which sociodemographic factors most strongly influence the effects that disinformation generates among citizens of three countries. To do this, an online survey (n = 3,019) was conducted with citizens of Spain, Germany and the UK. The sample is stratified according to the gender, age, income, and ideology of the respondents. The results show that the reception of false information is high in all three countries, especially on Facebook. Additionally, we found that country of origin, gender, age and ideology influence the reception of disinformation on MIMS, but not on the other platforms. Considering disinformation’s effects on citizens, we observe that, in general terms, those surveyed perceive disinformation effects with a medium-low intensity. In this way, citizens do not believe that false information causes substantial changes in their thinking. An increase in mistrust has been detected towards social media and mainstream media, which are not considered reliable sources of information. At this point, the country of origin, income, and ideology of the respondents are conditioning factors. Despite being a descriptive study, this research provides some relevant trends that help to better understand the effects of disinformation in three countries with different political and social traditions. This work is part of the R&D project with reference AICO/2021/063 financed by the Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital of the Generalitat Valenciana.

Authors: Laura Alonso-Muñoz1, Alejandra Tirado-García1, Andreu Casero-Ripollés1

1Universitat Jaume I de Castelló

Disinformation and hate speech are two of the main problems contemporary societies face. Their study has frequently proceeded in parallel, but the particular problems that arise from the joint presence of both phenomena have received less attention. Despite the general consensus that not all people are equally likely to believe or share these messages, experimental approaches that can verify the moderating role of different factors are more limited. Under these premises, an experiment was conducted with 404 Spanish adult participants, who were shown messages about migration in a 2x2 design (falsehood vs. truthfulness and hate vs. non-hate). The aim was to verify how ideology, gender, age, educational level, the number of inhabitants of the place of residence, income level, and previous attitudes towards immigration affect the mechanism that leads to sharing these messages. It is observed that the decrease in the intention to share false and/or hateful content only occurs among people with left-wing ideologies and among those who did not hold previous negative attitudes towards immigration. Among these groups, the presence of hate and/or falsehood lessens believability, which in turn decreases the intention to share. The remaining factors do not play a significant role, which is surprising especially in the case of educational level, which does not seem to help people better identify false or hateful messages. Furthermore, all observed effects occur to a greater extent with content that includes hate than with messages that only include falsehood. Thus, complementary tests were carried out in which falsehood acted as a moderator of the effect of the presence of hate on believability and sharing intention, showing that hateful content is more likely to be believed and shared when it is combined with falsehood.

Authors: David Blanco-Herrero1, Damian Trilling1, Carlos Arcila-Calderón2

1University of Amsterdam

2University of Salamanca

In recent times, vaccine hesitancy has become associated with social media platforms and the anti-vaccination movement. Specifically, the dissemination of misinformation on social networks has been linked to a lack of adherence to COVID-19 public health guidelines. Increased exposure to unreliable news articles regarding COVID-19 vaccines has been found to contribute to higher levels of vaccine hesitancy and reduced vaccination rates at both state and county levels in the United States. Laboratory experiments have also demonstrated that exposure to online misinformation amplifies vaccine hesitancy. This poses a significant challenge during vaccination campaigns, as the formation of clusters holding anti-vaccination beliefs can impede the achievement of herd immunity within a population. Effectively managing epidemic crises in the present era necessitates a comprehensive understanding of the intricate interplay between the dissemination of (mis)information on online social networks and the transmission of diseases via physical contact networks. Although prior theoretical agent-based simulations have demonstrated how misinformation can impede epidemic control in diverse manners, there is an increasing demand to incorporate real-world data to enhance the alignment between simulation outcomes and real-world consequences. We address this challenge through a novel Susceptible-Infected-Recovered (SIR) model that accounts for a subpopulation of “misinformed” individuals, who do not heed expert public health guidance. We explore how this group can affect the larger, ordinary population using both a mean-field approximation, which assumes all individuals have an equal chance of interacting, and a multi-level agent-based simulation based on a large, data-informed contact network of 20 million nodes constructed by leveraging large-scale Twitter data, county-level voting records, and cellphone mobility data. We incorporate theoretically extreme parameters to evaluate best- and worst-case scenarios about the impact of misinformed individuals on the spread of disease and obtain quantitative bounds on the harm caused by misinformation.
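
A heavily simplified, mean-field version of such a model can be sketched in a few lines; the parameters below are arbitrary placeholders, and the actual study relies on a far richer, data-informed agent-based contact network.

```python
import numpy as np

def misinformed_sir(beta_ord=0.25, beta_mis=0.5, gamma=0.1, frac_mis=0.2,
                    i0=1e-4, days=300, dt=0.1):
    """Mean-field SIR with two groups: ordinary and 'misinformed' individuals.
    Ignoring public health guidance is modeled, crudely, as a higher transmission rate."""
    s_o, i_o, r_o = (1 - frac_mis) * (1 - i0), (1 - frac_mis) * i0, 0.0
    s_m, i_m, r_m = frac_mis * (1 - i0), frac_mis * i0, 0.0
    for _ in np.arange(0, days, dt):
        force = i_o + i_m  # well-mixed population: everyone can meet everyone
        new_o, new_m = beta_ord * s_o * force, beta_mis * s_m * force
        s_o, i_o, r_o = s_o - new_o * dt, i_o + (new_o - gamma * i_o) * dt, r_o + gamma * i_o * dt
        s_m, i_m, r_m = s_m - new_m * dt, i_m + (new_m - gamma * i_m) * dt, r_m + gamma * i_m * dt
    return r_o + r_m  # final epidemic size (attack rate)

# Comparing scenarios with and without a misinformed subpopulation:
print(f"with misinformed group:    {misinformed_sir(frac_mis=0.2):.2%}")
print(f"without misinformed group: {misinformed_sir(frac_mis=0.0):.2%}")
```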

Authors: Francesco Pierri1, Matthew R. DeVerna2, YY Ahn2, Santo Fortunato2, Alessandro Flammini2, Filippo Menczer2

1Politecnico di Milano

2Indiana University

What socio-psychological traits distinguish individuals who share fake news on social media from other users? While experimental research has shown that fake news sharers are less inclined toward analytical thinking (Pennycook and Rand, 2019; 2020; 2021), computational analyses have suggested that they tend to lean more conservative (Grinberg et al., 2019; Guess et al., 2019), and are primarily motivated by political agendas (Osmundsen et al., 2020). As debates between these two approaches continue to animate the scientific community, it is important to note that the majority of research has focused predominantly on the United States, where the media landscape is highly polarized (Benkler et al., 2018). Consequently, there is limited understanding of the propagation of fake news in other national contexts characterized by non-binary party systems. To address this gap, we conducted an analysis using classic methods of ideological inference (Barberá et al., 2015) on the French Twittersphere, employing a matching procedure (Rubin, 1973). Our study encompassed a dataset of 4 million tweets and compared the characteristics of 1,908 fake news sharers with those of 957 users who had not shared fake news but shared the same political stance within the French Twittersphere. Our findings indicate that fake news sharers are significantly more likely than other users to: (1) emphasize their political affiliation in their Twitter bios, (2) use pseudonyms, (3) share media content, (4) engage in hyperactive (re)tweeting, (5) express negative emotions; (6) still, their language does not denote less analytic thinking. Overall, our study supports the notion that the sharing of fake news is primarily driven by anger and partisanship rather than sheer ignorance. It encourages future research to delve into more intricate aspects of fake news sharers’ political identities, such as party memberships and activist practices.
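
The matching step mentioned above can be illustrated as a simple nearest-neighbour pairing on an estimated ideology score; the file and column names are hypothetical, and the study’s actual procedure (following Rubin, 1973) is more involved.

```python
import pandas as pd

# Hypothetical columns: user_id, ideology (estimated ideal point), shared_fake (bool)
users = pd.read_csv("users.csv")
sharers = users[users["shared_fake"]]
pool = users[~users["shared_fake"]].copy()

matches = []
for _, sharer in sharers.iterrows():
    if pool.empty:
        break
    # Nearest available non-sharer on the ideology dimension (matching without replacement).
    idx = (pool["ideology"] - sharer["ideology"]).abs().idxmin()
    matches.append((sharer["user_id"], pool.loc[idx, "user_id"]))
    pool = pool.drop(idx)

print(f"{len(matches)} matched pairs")
```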

Authors: Manon Berriche1, Jean-Philippe Cointet1, Anne Sophie Hacquin2, Sacha Altay3, Béatrice Mazoyer1, Benjamin Ooghe-Tabanou1, Hugo Mercier2, Dominique Cardon1

1Sciences Po

2Institut Jean Nicod

3University of Zurich

Room: E0.22 

Session 6: Methods of Misleading: Uncovering Disinformation Tactics and Actors 

Chair: Michael Hameleers, University of Amsterdam

Much is made of the prevalence of information warfare amid a rapidly evolving media landscape, alongside concerns about the wider phenomenon of disinformation. Yet the interactions between domestic and foreign actors in the weaponization of disinformation still suffer from the absence of comprehensive scholarship. This paper delves into these interactions, attempting to understand the role of diverse actors in elevating disinformation into weaponised narratives. Specifically, we examine the exploitation of disinformation narratives emanating from Russia regarding the Ukraine war by domestic actors in Western democratic societies, taking the QAnon movement as an example. Through a comparative case study of the role of QAnon adherents in the United States and Germany, we seek to render visible the underlying motivations, strategies, and consequences of such disinformation weaponisation by domestic actors on both sides of the Atlantic. To this end, we analyse social media political discourse. We first identify X accounts linked to QAnon in Germany and the United States through open-source databases and keyword searches. We then undertake a mass content analysis of their output using the R programming language to examine the extent to which, and how, Russian-born disinformation is weaponised by these actors. We further draw on X’s API to investigate the consequences of this weaponization, illuminating the connections between QAnon accounts and other actors. Preliminary findings indicate that domestic actors do exploit Russian disinformation about the war for their own ends, including the intensification of political polarisation and influencing of electoral outcomes. Further, these actors are capable of moving disinformation narratives from the periphery closer to the centre of public discourse in both countries. Overall, we underscore the intricate interplay of factors contributing to the weaponization of disinformation, such as ideological affiliations, the influence of emerging technologies including artificial intelligence, and the vulnerabilities inherent to late capitalist democracies.

Authors: Mahmoud Javadi1, Thu Nguyen Hoang Anh2, Raphael Cannell3, Clarke Sumbule Wafula3

1Erasmus University Rotterdam

2National University of Singapore

3European University Institute

Despite disinformation becoming a salient topic in recent years, fake advertising as a form of commercial disinformation has received limited attention in social scientific research. Fake advertising attempts to trick consumers into buying fraudulent products or sharing sensitive personal information, often by impersonating a genuine brand or influential person. To help consumers identify trustworthy advertising, disclosures can accompany real ads to proclaim their authenticity. However, these can be easily fabricated. Blockchain-based timestamps can address this issue by ensuring data security and providing audit trails showing when, where, and by whom the ads were created.

Drawing on signalling theory, heuristics, and content labelling literature, we investigate how blockchain-based disclosures should be designed to effectively communicate advertising authenticity. In three online experiments, we compare the effectiveness of disclosures varying in blockchain disclosure type (none, informative, evaluative) and brand mention (no mention, brand mention) from a cross-platform perspective. Preliminary results show that on Instagram, brand-only disclosures (without blockchain mention) are perceived as more understandable and lead to more positive ad and brand attitudes and higher brand credibility perceptions than no disclosure. On Google, however, ads with evaluative blockchain disclosures perform better on ad credibility, brand credibility, and brand attitudes. Additionally, blockchain mention, brand mention, and platform type have no effect on disclosure understandability and trust perceptions. These findings suggest that while blockchain-based disclosures can positively affect ad- and brand-related variables, exactly which communication strategies are most effective in persuading consumers of an ad’s authenticity still requires further investigation. This paper contributes to research on disinformation and content labelling by being one of the first to bring attention to fake advertising as a form of disinformation, recommend blockchain technology as a tool to help consumers distinguish between real and fake advertising, and investigate how blockchain-based disclosures should be designed to effectively communicate an ad’s authenticity.

Authors: Dasha Antsipava1, Eva van Reijmersdal1, Joanna Strycharz1, Guda van Noort1

1University of Amsterdam

Political debate increasingly occurs on online platforms. While this provides opportunities to increase the popularity of certain ideas, reach new audiences and engage more directly with political actors, it also poses threats to democratic processes (Persily & Tucker, 2020). Undesirable practices, such as political disinformation and deceptive advertising, may raise populistic moods, increase polarisation and interfere with fair elections. Salient examples, such as the Cambridge Analytica scandal and the riots on the US Capitol, have raised political and regulatory scrutiny of how online platforms shape digital public spheres, from the design of their algorithms and (ab)use of personal data, to speech restrictions and advertising rules (Gillespie, 2019). The European Union (EU) has been particularly active in the past five years, revising limited liability rules for internet intermediaries, and passing new legislation on market access, content moderation, data governance, and more recently also, artificial intelligence and political advertising (Richter et al, 2021). In these regulatory debates, transparency is high on the agenda. Frustrated by the lack of access to data, the academic community and civil society have pushed for meaningful transparency (Vermeulen, 2019; Leerssen, 2020) in order to monitor the governance of online platforms and understand the nature and extent of platforms’ influence on democratic challenges. The active participation of online platforms in self-regulatory processes, such as the revision of and commitment to the Code of Practice on Disinformation (Multistakeholder Group, 2022), demonstrates that transparency is an objective to which everyone – in principle – agrees. Pertinent questions then arise: what does transparency mean for these stakeholders, transparency for whom and to what purpose, and with what consequences? The aim of the paper is to investigate (RQ1) how ‘transparency’ in political advertising is understood by the EU and online platforms, and (RQ2) what the projected responsibility of platforms is, in a context of ongoing political and policy debates on regulating online platforms. We compare two ongoing EU policy initiatives (the revised Code of Practice on Disinformation (Multistakeholder Group, 2022) and the proposed Regulation on Transparency and Targeting of Political Advertising (European Commission, 2021)) with platform policies (community standards) and practices (platform design) undertaken to moderate political actors and advertising. After briefly reviewing literature on digital public spheres, platform accountability and transparency in the context of online political advertising, the paper traces the concepts of ’political advertising’, ‘political actor’, ‘transparency’ and the projected platform responsibility in the aforementioned EU policy initiatives, compared against the policies and practices of several platforms (Google, Mastodon, Meta, Microsoft, Telegram, TikTok, Twitter/X). We argue that the concept of transparency is used as an ‘empty signifier’: meaningful at the political and declarative level but, when translated into practice, leading to diverse results. The paper seeks to contribute to academic and policy debates on platform and political advertising transparency in light of the upcoming European elections.

Authors: Trisha Meyer1, Agnieszka Vetulani-Çegiel2

1Vrije Universiteit Brussel & EDMO BELUX

2Adam Mickiewicz University

Amidst rampant concerns about disinformation, social media influencers (SMIs) can capitalize on their often enormous reach to spread false claims among their followers. However, despite this sizable potential, the extent to which SMIs sow discord and endorse false narratives remains uncharted territory. In this paper, we explore the scale at which SMIs engage with misinformation. We begin by gathering posts from English-speaking influencers with over 500,000 followers on Instagram using CrowdTangle. We then identify instances of disputed content by cross-referencing posts with verified false claims from Politifact. This research is pioneering in providing empirical evidence on SMIs’ participation in spreading falsehoods. Yet we find that the concerns are exaggerated, as the involvement of SMIs in propagating false claims is minimal, with only 0.003% of the more than 1.3 million posts analyzed actually supporting false statements.

Authors: Emma Hoes1, K. Jonathan Klüser1

1University of Zurich

In March 2022, in the wake of Russia’s direct military invasion of Ukraine, YouTube took a significant step by blocking channels associated with Russian state-funded media on a global scale. This decision was rooted in YouTube’s policy against content that downplays or trivializes events marked by violence. Beyond this, YouTube leveled accusations against several Russian channels, alleging they disseminated misleading narratives about the actions of Ukraine’s leadership and the civilian casualties of the ongoing conflict. A closer look at Belarusian state-controlled YouTube channels reveals a strategic shift in their content, seemingly tailored to cater to the Russian audience. This raises suspicions: could the sophisticated Russian propaganda machinery, equipped with tech-savvy experts, have devised methods to circumvent YouTube’s recommendation algorithms? This concern is echoed in Sander van der Linden’s book “Foolproof”, which points out that algorithms can push users towards extreme views and that echo chambers make fact-checks hard to spread. Drawing on these observations, it is evident that Belarusian YouTube channels, under the aegis of their government, are consistently violating international sanctions. They are actively broadcasting content that not only spreads misinformation about Russia’s aggressive actions in Ukraine but also glorifies acts of violence against the Ukrainian civilian population. Data from the first quarter of 2023 adds another dimension to this narrative. While the Instagram accounts of Belarusian state media appear to remain neutral in the ongoing information warfare spearheaded by Russia against Ukraine, their YouTube and Facebook counterparts are deeply entrenched in it, amplifying disinformation and echoing Russian state narratives. Interestingly, despite their active presence on Facebook, these channels see strikingly little engagement, likely attributable to Russia’s 2022 decision to restrict Facebook access within its borders. As 2023 unfolds, there are growing indications that Russia might extend similar restrictions to YouTube.

Author: Mikhail Doroshevich1

1Digital Skills Coalition Belarus

Room: De Brug

Match-Making Session 1

Chair: Jessica Gabriele Walter, Aarhus University & EDMO

18:30 – 21:15

Room: De Brug 

Pre-Dinner Borrel and Dinner

Day 2 – Tuesday, 27 February (08:45 – 14:30 CEST)

08:45 – 09:00

Room: C10.20 

Coffee and Registration
09:00 – 10:15

Room: C10.20

Parallel Sessions 7 & 8

Session 7: Characterising Disinformation: Trends, Topics and Narratives 

Chair: Jessica Gabriele Walter, Aarhus University & EDMO

The construction of an illiberal media system in Hungary has been studied by many researchers. However, fewer publications explore how disinformation is altering public discourse in Hungary. A noteworthy aspect of disinformation in Hungary is that it frequently originates from and is distributed by pro-government media, which is unusual compared to other EU countries, where disinformation is typically disseminated by fringe media. The Russian war against Ukraine provides a clear example of how the disinformation ecosystem functions in Hungary and underscores its harmful effects on democratic public discourse. With our case study of Ukrainian-Russian war disinformation in Hungary, we map the narratives of the Hungarian pro-government press. The study primarily relies on a content analysis of relevant programs on the M1 public television channel and of three major rural print dailies, complemented by insights from public opinion polls. Public opinion research clearly illustrates how these narratives have shaped the views of the Hungarian public over time. Our research shows that public service media play a significant role in spreading disinformation; we identified several false narratives in the first year of the war. Pro-government media consistently conveyed to their audience that economic challenges and rising energy prices resulted from EU sanctions, not from Russia’s aggression. We complement this with an analysis of local print newspapers, which play a significant role in informing the rural population, to get a more accurate picture of how pro-Kremlin narratives are disseminated in the Hungarian pro-government press. As researchers of the HDMO hub, it is essential for us to report on disinformation trends in Hungary, especially when they are disseminated by the Hungarian government and its affiliates.

Authors: Ágnes Urbán1, Kata Horváth1, Gábor Polyák1

1Mertek Media Monitor

In post-truth times, we are experiencing unprecedented challenges arising from a disturbed information ecosystem. These challenges have direct societal and political implications. Though the impact of disinformation on public opinion, democracy, and social cohesion may seem obvious per se, disinformation and its impact must be analysed from multiple perspectives. The focus of this paper is Kremlin disinformation targeting the Baltic region in particular. In this paper, I analyse Kremlin disinformation spread through various media outlets. My research corpus consists of texts available in the EUvsDisinfo database and texts collected from other compromised media outlets targeting audiences in the Baltic region. I apply an interdisciplinary approach to study both the content plane and the process plane of disinformation within the framework of Appraisal theory and models of Kremlin propaganda and disinformation analysis (Ben Nimmo’s 4D model, the Firehose of Falsehood model (Paul and Matthews, 2016), and Jowett and O’Donnell’s (2014) model). The findings indicate that emotive and subjective language plays a tremendous role in meaning construction and the dissemination of disinformation in contemporary, densely mediated and technologised communication.

Author: Viktorija Mažeikienė1

1Mykolas Romeris University

This conference paper’s primary aim is to examine the portrayals of women in articles and narratives flagged by fact-checkers, journalists, and researchers as containing disinformation targeting European countries. Our focus is on representations that depict women in two roles that can be seen as contrasting. Firstly, as the vulnerable victims of a global movement that challenges their roles as mothers, caregivers, and defenders of traditional family and Christian values, and secondly, as the leaders of this movement, often advocating for gender equality and women’s rights. We argue that deceitful allusions to a powerful aggressive feminist wave that seeks to indoctrinate women to forget their ‘real nature’ and emasculate and oppress men cause serious harm to efforts to uphold human rights, and women’s rights in particular. Among the disinformation and misinformation narratives we study are those attacking Western culture, EU institutions, as well as specific international documents such as the Council of Europe Convention on Preventing and Combating Violence Against Women and Domestic Violence (the Istanbul Convention) and the Council of Europe Convention on the Protection of Children Against Sexual Exploitation and Sexual Abuse (Lanzarote Convention). However, we also concentrate on cases of disinformation related to significant recent events such as the COVID-19 pandemic and the war in Ukraine. Covering a five-year period from 2018 to 2023 and employing desk research and discourse analysis as research methods, the paper also explores how misinformation and disinformation invoking outdated stereotypes of women can have a negative influence on democratic processes, public opinion, the political representation of women, and policies and legislation aimed at promoting women’s rights and increasing gender equality. We also suggest potential directions for future research and ways to integrate gender and intersectional perspectives into studying the impact of misinformation and disinformation on women.

Author: Gergana Tzvetkova1

1Ca’ Foscari University of Venice

The subject of Bulgaria and Romania’s accession to the Schengen area has become a prominent topic in the media of the two countries over the last year, owing to Austria’s persistent opposition despite both countries’ compliance with the criteria for joining the free-movement area. The topic has given rise to a series of misleading narratives (from mild disinformation narratives to full-fledged conspiracy theories), especially in the discourses of extreme right-wing parties and alternative media, but also in the public discourse of various political actors. In this context, as part of the BROD EDMO hub’s research agenda, we plan to analyse the most prominent disinformation narratives about NATO in the two countries covered by BROD and their potential to foster Eurosceptic attitudes. In a first step, we will conduct an automated content analysis of public Facebook posts (using CrowdTangle) and online media listening in both countries (using Sensika in Bulgaria and Zelist in Romania). We will perform network and cluster analysis to map the most viral disinformation stories about the Schengen accession topic, and a qualitative analysis of the most prominent disinformation narratives in both countries, with a focus on the cognitive components of disinformation. In a second step, we plan to test the effects of such narratives on people’s further engagement with the topic and on Eurosceptic attitudes by means of a comparative 3×2 between-subjects experimental design, manipulating the source (social media vs. mainstream media vs. alternative media) and the facticity of the content (accurate facts vs. disinformation). We will use the most prominent stories identified in the automated content analysis to construct the stimuli, conceived as online newspaper posts and social media posts. We expect to find increased Eurosceptic attitudes and increased engagement with the posts after exposure to disinformation content.

Authors: Nicoleta Corbu1, Dan Sultănescu1, Todor Galev2, Mădălina Boțan1, Keith Kiely3, Andy Stoycheff4, Patrik Szicherle5

1National University of Political Studies and Public Administration (SNSPA)

2Center for the Study of Democracy

3Sofia University

4NTCenter

5GLOBSEC

As a response to the proliferation of disinformation and ‘disrupted’ (Bennett & Pfetsch, 2018) public spheres, fact-checking units have emerged worldwide. Currently, 417 organizations are active in 100 countries (Duke Reporters’ Lab). The primary work of these organizations consists of a) verifying (political) statements made by public figures (fact-checking) or b) exposing online falsehoods spread by anonymous sources on the internet (debunking). Several studies have identified a trend toward debunking (Cazzamatta & Santos, 2023; Graves et al., 2023). This project proposes a comparison of verification articles across eight countries and 23 organizations in Europe and Latin America, focusing on their selection choices regarding: a) verification style (fact-checking vs. debunking); b) types of falsehoods, i.e. deception strategies; c) targets of falsehoods and sources when identifiable; d) scrutinized platforms; and e) countries involved in the mis- and disinformation. To analyze disinformation environments and fact-checkers’ editorial choices, we manually coded 3,253 verification articles from 23 organizations (independent, media-linked, and global news agency units) in Spain, Portugal, Germany, the UK, Brazil, Argentina, Chile and Venezuela. Countries in Europe were selected according to distinct types of media systems – liberal, Mediterranean and corporatist (Hallin & Mancini, 2004). Different levels of democracy and disinformation resilience drove the selection in Latin America (Humprecht et al., 2020). I drew a 25% stratified sample of all articles published in 2022 by each organization (every fourth article). The links were collected using the Feeder extension. Krippendorff’s coefficients were employed to measure coding reliability within the four language groups, and coders reached acceptable agreement levels after several hours of coding training. Despite global trends in verification strategies and procedures, driven by the professionalization and institutionalization promoted by organizations such as the International Fact-Checking Network (IFCN) and by collaborations with tech platforms such as Meta, we observe national specificities related to media and political traits.

Author: Regina Cazzamatta1

1Universität Erfurt
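As a rough illustration of the sampling and reliability steps described in the abstract above, the following Python sketch shows how an every-fourth-article stratified sample and a Krippendorff’s alpha check might be implemented; the file name, column names and example codes are placeholders, not the project’s actual data or code.

```python
# Hypothetical sketch: 25% stratified sample (every 4th article per organization)
# and an intercoder reliability check with Krippendorff's alpha.
import numpy as np
import pandas as pd
import krippendorff  # pip install krippendorff

articles = pd.read_csv("verification_articles_2022.csv")  # placeholder export

sample = (
    articles.sort_values("published_at")
            .groupby("organization", group_keys=False)
            .apply(lambda g: g.iloc[::4])   # keep every 4th article per stratum
)

# Reliability data: one row per coder, one column per double-coded article,
# np.nan where a coder did not rate that unit (the codes here are invented).
reliability = np.array([
    [0, 1, 2, 1, np.nan, 0],  # coder A
    [0, 1, 2, 0, 1,      0],  # coder B
])
alpha = krippendorff.alpha(reliability_data=reliability,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```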

Room: B3.09 

Session 8: Dealing with Disinformation: Educational Approaches and Counterstrategies 

Chair: Marina Tulin, University of Amsterdam

Resilience is a popularly used but still underdeveloped concept in the emergent field of disinformation. Current definitions and studies mainly focus on intrapersonal factors and cognition to explain and explore resilience, leaving important questions regarding the role of daily practices and meso-social, economic, or cultural environments unanswered. Our cross-national, comparative study addresses this gap and asks: 1. Which tactics do German and Dutch young adults (18 – 32) apply for navigating disinformation in daily media practices and conversations? 2. How do economic, cultural, social, and personal resources shape this process and contribute to resilience to disinformation?

Applying a qualitative study design, we conducted 29 semi-structured interviews with young adults (18–32) in Germany (n=15) and the Netherlands (n=14) from November 2022 until April 2023. All interviews were transcribed, and the data were analyzed using a grounded theory-based approach of line-by-line coding followed by axial coding. Based on our results, we developed a taxonomy of seven tactics for navigating disinformation. Our comparative analysis shows that the same tactics are applied in both countries, pointing to transnational mechanisms of how citizens deal with disinformation. The results indicate that young adults’ personal backgrounds and resources profoundly shape their approach to disinformation and the (extent of) tactics they employ. We also find patterns connecting socio-economic and cultural backgrounds to trust in political and journalistic institutions. Most concerningly, our data show how structural inequalities that result in fewer resources negatively impact the development of resilience to disinformation. Our study brings a structural and contextual perspective to the over-individualized debate on resilience to disinformation, which mainly focuses on literacies and places responsibility on the individual. By connecting daily practices with resources and lived experiences, we provide more nuanced answers to common questions in the field, such as why (media literacy) tactics do not necessarily lead to resilience.

Authors: Jülide Kont1, Çigdem Bozdag1, Wim Elving2, Marcel Broersma1

1University of Groningen

2Hanze University of Applied Sciences

This intervention-based approach tests how individuals may use critical thinking tools as safeguards against online misinformation and how to triage malign from benign information amidst the coronavirus infodemic. An international online experiment (N = 398) in 35 countries on 6 continents, based on a TED Talk delivered by the US billionaire and Microsoft co-founder Bill Gates, revealed that several IF-THEN rules can predict a) veracity assessments and b) intentions to share news online. The results show that people exposed to specific IF-THEN rules rated fake news as substantially less veridical, while real news was rated as substantially more veridical. Under the umbrella term of the Fake News Model (FNM), the current endeavour is a challenge to academia to design interventions that limit the pervasiveness of disinformation in the information environment. The theoretical concept of the FNM was first developed at the University of Amsterdam during the Human(e) Artificial Intelligence course. The potential implications can spill over into social media activity, media systems, democracy, and capital markets, as some IF-THEN rules can also be applied to investing and other important decision-making. This study represents only a tentative, not a bulletproof, solution to a societal and scientific conundrum.

Author: Florin Cepraga1

1University of Amsterdam

There is a growing ‘infodemically vulnerable’ population in Europe without the media and information literacy (M&IL) skills needed to distinguish reliable, scientifically grounded information from unreliable and fake information. Against this background, many educators lack the digital and pedagogical competences needed to improve the M&IL skills of learners. Whilst fact-checking initiatives and internet regulation make an important contribution to reducing the spread of disinformation, we propose building information resilience through a cascade approach: supporting educators who work with vulnerable groups to acquire the M&IL competences needed to work more effectively in teaching and learning situations, which will in turn improve the M&IL competences of infodemically vulnerable people. We conducted a ‘state of the art’ review to identify examples of good practice in M&IL training and to better understand what works. Additionally, we carried out a lifeworld analysis to document and understand the ‘lived experience’ of educators and learners across Europe (Italy, Portugal, Spain, Sweden, UK) and to gain insight into their perspectives on the key issues and challenges around disinformation. Building on these findings, we produced a competence framework and pedagogic approach for an innovative online training programme, combining micro-training and interactive gaming, to intervene in disinformation at the educator level.

In this paper, we present preliminary findings from the pilot training programme with educators across Europe. The information war cannot be won with facts alone; it requires a holistic approach that empowers educators and learners to build the skills necessary to navigate an increasingly complex media landscape. We emphasise a shift away from the information deficit model towards an adaptive behavioural change model based on the social contextualisation of knowledge. Our key objective is to develop an over-arching conceptual framework for increasing information resilience that is adaptable to local, cultural and organisational contexts.

Authors: Joe Cullen1, Maria Ana Carneiro1, Francesca Di Concetto2, María José Hernández Serrano3, Diana Stark Ekman4, Clare Cullen1

1AGID

2Smart Bananas

3Universidad de Salamanca

4Hogskolan I Skövde (Skövde)

This research probes the current status of Media and Information Literacy (MIL) in Irish education, emphasising its critical role in equipping teachers to guide younger generations through challenges posed by misinformation and disinformation in the digital age. The study employed a mixed-methods approach, analysing course catalogues within pre-service teacher education and reviewing in-service professional development opportunities supplemented by insightful stakeholder interviews. The findings reveal a significant gap in the dedicated focus on MIL in teacher education programmes. Out of 70 modules reviewed, a scant eight explicitly centred on MIL, and only two institutes mandated MIL courses. Current in-service training initiatives, although expanding, engage a mere 25% of educators annually, suffering from a lack of standardisation and consistency in content delivery. Interview feedback converged on four pivotal themes: the fragmented state of MIL training, a prevailing theory-practice chasm, insufficient incentivisation for MIL integration, and a unanimous call for its formal, standardised inclusion in educational curricula. This disjointed approach results in an inconsistent application of MIL, underscoring the urgent need for a cohesive national strategy. The study underscores the necessity of a comprehensive overhaul in MIL education, which is vital in preparing teachers to effectively navigate and counteract the pervasive threat of misinformation and disinformation. This entails the mandatory embedding of robust MIL curricula in teacher education and consistent, up-to-date professional development. The recommendations advocate for a unified national framework, fostering collaborative educational partnerships and employing innovative models to ensure teachers are adept in nurturing media-literate future generations capable of critical thinking in a digitally saturated world. This overhaul is strategic and essential in safeguarding informed citizenship and democratic integrity amidst an infodemic of misinformation.

Authors: Lucia Mesquita1, Ricardo Castellini da Silva1

1Dublin City University

In this study, we conduct the first online field experiment testing traditional and novel counter-misinformation strategies among fringe communities. While traditional strategies have been found to counter misinformation effectively, they have yet to be tested among fringe communities that regularly consume misinformation online, and they do not address the infrastructure of misinformation sources supporting this consumption. We therefore test whether both traditional debunking and a novel counter-misinformation strategy, source exposure, can lower the consumption of misinformation media among fringe communities. Based on snowball sampling of German fringe communities on Facebook, we identified public Facebook groups that regularly consumed the two most popular misinformation sources in Germany. In collaboration with the fact-checking organization VoxCheck, we conducted an online field experiment to test the effect of debunking and source exposure on consumption levels. We find that debunking misinformation claims does not reduce fringe communities’ consumption of misinformation sources, while exposing sources’ poor track record of spreading misinformation and biased reporting does. Furthermore, we find that exposing gatekeepers among fringe communities lowers their acceptance of consuming misinformation sources. Our findings support a more active approach to countering misinformation by reaching out to fringe groups, and indicate that source-focused counter-misinformation strategies are effective in addressing the growing network of misinformation sources. Lastly, by showing the feasibility of independent online field experiments, our study opens the way for more realistic testing of counter-misinformation strategies in a field dominated by clinical experiments. This experiment was given ethics approval by the EUI Ethics Board.

Authors: Christiern Santos Okholm1, Marijn ten Thij2, Amir Fard2

1European University Institute

2Maastricht University

10:15 – 10:30

Room: C10.20

Coffee Break [short]
10:30 – 11:45

Room: C10.20

Parallel Sessions 9 & 10

Session 9: Disinformation in an AI Age: AI-Powered Problems and Solutions 

Chair: Claes de Vreese, University of Amsterdam & EDMO

The increasing volume of fake news and the speed at which it spreads are a growing challenge in the fight against disinformation. Experience shows that the amount and impact of fake news are amplified during elections, pandemics, armed conflicts and after terrorist attacks, and the disinformation spread through fake news is particularly dangerous in such crises. Understanding the patterns of fake news is one possible way to fight it, and advanced AI-based approaches can help to achieve this. We expect that once features of fake news can be identified using artificial intelligence, our research method can also be applied to other languages. Our research question focuses not on what percentage of the content of “misleading news sites” can be classified as fake news using some fact-checking mechanism, but on the content structure of online portals deemed unreliable by fact-checking sites, regardless of whether the texts they contain constitute disinformation or misinformation. The aim of our work is to explore and quantify the thematic patterns and shifts of focus in fake news by analysing the content of several major Hungarian fake news portals between 2019 and 2023. We investigate changes in thematic patterns using LDA topic modelling and apply state-of-the-art large language models to analyse the emotions evoked by this fake news. Our results clearly show how these portals change their focus in response to crises and how fear and anger dominate the emotions they express. A common feature of fake news content seems to be that it pairs value-neutral content with a shocking title and lead, often with an explicitly negative conclusion and negative additional content.

Authors: Orsolya Ring1, László Kiss1

1HUN-REN Centre for Social Sciences
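By way of illustration, a minimal version of the LDA topic-modelling step described in the abstract above could look like the sketch below (Python with gensim); the token lists, parameter values and variable names are assumptions, not the authors’ pipeline.

```python
# Hypothetical sketch: LDA topic modelling over articles scraped from fake news
# portals. In practice `documents` would hold thousands of tokenised Hungarian
# articles, not the two toy examples shown here.
from gensim import corpora
from gensim.models import LdaModel

documents = [
    ["energia", "ár", "szankció", "brüsszel"],
    ["háború", "ukrajna", "béke", "tárgyalás"],
    # ... one token list per scraped article
]

dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(doc) for doc in documents]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=42)

# Inspect the top words per topic to label thematic patterns
for topic_id, words in lda.show_topics(num_words=5, formatted=False):
    print(topic_id, [word for word, _ in words])
```

Tracking how the share of documents assigned to each topic changes per quarter would then be one way to quantify the focus shifts the abstract describes.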

The rise of disinformation on social media platforms has raised concerns about electoral integrity. Conducting free and fair elections now requires a new set of policies and cutting-edge monitoring technologies. Until recently, monitoring activities were primarily conducted by international and domestic organisations in traditional ways. However, these methods have now become insufficient in detecting computational propaganda and foreign influence via social media, as seen in the 2016 US elections and the 2016 UK Brexit referendum. Consequently, governments have turned to AI tools for monitoring social media to fight against disinformation. Several EU countries, including Germany, Sweden, France, Denmark, and the Czech Republic, have used AI tools for social media monitoring during elections (Schmuziger Goldzweig et al., 2019). Social media monitoring has been included in the EU Election Observation Missions’ duties, and the East StratCom Task Force has been focusing on activities undertaken by Russia to disrupt elections (European External Action Service, 2021). While previous studies explore how state and non-state actors disseminate disinformation via social media, research has yet to systematically investigate how governments have responded. This study addresses this unexplored area by analysing how governments monitor elections to tackle disinformation and examining the challenges and opportunities of using AI-powered tools for social media monitoring. A comprehensive analysis of election news, laws and social media regulations was conducted, and cross-national electoral integrity data from the V-Dem Dataset, focusing on European countries from 2016 to 2021, was analysed. The initial findings suggest that incorporating AI into election observation presents both opportunities and challenges. Transparent, fair and accountable AI tools can foster public trust and engagement in the electoral process. However, the ability of governments to access a large amount of social media data raises concerns about misuse for surveillance purposes, potentially infringing upon freedom of speech rather than safeguarding electoral integrity.

Author: Basak Bozkurt1

1University of Oxford

In recent years, AI-generated deepfakes have been described as a particularly perilous form of visual disinformation. However, recent research suggests that most visual disinformation currently available in the public domain is produced in much simpler ways. The comparative deceptiveness of these less sophisticated formats remains understudied. To fill this gap, we conducted a pre-registered online experiment (N = 802) investigating the effects of different forms of visual disinformation portraying the same politician and conveying the same message. We exposed participants to a social media post including either (1) an authentic video of Ursula von der Leyen, (2) a decontextualization of it – including a misleading caption, thus misusing the video as evidence for a false claim, (3) a cheapfake – edited with simple techniques to strengthen the false claim, or (4) a deepfake – edited with the help of a deepfake app, thus explicitly stating a false claim. We test to what extent these forms affect participants’ credibility perceptions and social media engagement intentions, considering issue-congruent attitudes as a moderating variable. In addition, we test whether the videos lead to misperceptions and how perceptions of von der Leyen’s integrity as a politician are affected. Surprisingly, our results reveal that although the deepfake was viewed as less credible than any other form, it still produced misperceptions and fostered negative perceptions of von der Leyen. Likewise, the cheapfake was deemed lacking in credibility but still contributed to misperceptions. In contrast, the decontextualized video scarcely differed from the authentic video in these effects. Based on our findings, we conclude that visual disinformation (1) operates in distinct ways depending on the level of sophistication applied to its creation, (2) can contribute to misperceptions regardless of credibility perceptions, and (3) does not increase the willingness to spread it online.

Authors: Teresa Elena Weikmann1, Jana Laura Egelhofer2, Sophie Lecheler1

1University of Vienna

2Ludwig-Maximilians-Universität München

The digital landscape has witnessed the instrumentalization of digital platforms for the dissemination of political advertising and, at times, disinformation. In response to this, regulatory bodies, notably the EU Commission, have undertaken efforts to strengthen the oversight of this field. Some digital platforms have chosen to implement bans on political and issue advertising, TikTok being one of them. This study investigates how thoroughly this ban is enforced by TikTok. Using publicly accessible data from TikTok’s Commercial Content Library, we conducted searches for political party names and the names of prominent politicians. Our analysis encompassed eight EU Member States (Austria, Belgium, Czech Republic, Germany, Hungary, Ireland, Netherlands, and Poland), enabling us to explore potential country-specific variations in enforcement.

Our data collection spanned a ten-month period, from October 2022 to August 2023, resulting in approximately 18,000 observations. In an initial phase, we employed OpenAI’s ChatGPT to automatically filter out false positives, reducing the dataset by approximately 25%. Subsequently, the remaining data underwent double-coding by national experts. As the coding is ongoing, no results can be reported yet. The findings of this study have implications for the ongoing digital policy discourse. Specifically, they shed light on the efficacy of enforcing the EU’s Code of Practice on Disinformation and the potential applicability of the forthcoming Regulation on Political Advertising currently under negotiation by EU institutions.

Authors: Stephan Mündges1, Tina Bettels-Schwabbauer1, Václav Moravec2, Michal Šenk2, Nico Hornig1, Kirsty Park3, Guy de Pauw4, Ferre Wouters5, Susanne Wegner1

1TU Dortmund University

2Charles University

3Dublin City University

4TEXTGAIN

5KU Leuven
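As a rough sketch of the kind of LLM-assisted false-positive filtering described in the abstract above (the prompt, model name and data format are assumptions, not the authors’ actual setup; the call uses the openai Python package, version 1.x):

```python
# Hypothetical sketch: ask a chat model whether a keyword hit in the TikTok
# Commercial Content Library really refers to the queried politician or party.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_political_hit(ad_text: str, query_name: str) -> bool:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study itself used ChatGPT
        messages=[
            {"role": "system", "content": "Answer only YES or NO."},
            {"role": "user",
             "content": f"Does this ad text refer to the politician or party "
                        f"'{query_name}'?\n\n{ad_text}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# Example: drop false positives such as unrelated namesakes
ads = [("Try Merkel's bakery, best pretzels in town!", "Merkel")]
kept = [item for item in ads if is_political_hit(*item)]
```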

A vibrant news media ecosystem is key to combating information influence and building trust in news journalism. Current disinformation research highlights the versatile and ‘liquid’ character of constantly evolving disinformation tactics, which is why unmasking them is extremely difficult. This complexity is exacerbated by the deceptive use of machine learning and generative AI for the creation and amplification of information influence. This poses challenges to information resilience and verification practices within the news media, especially in times of crisis and with today’s shrinking newsroom resources. In our work, we investigate information influence and resilience in the context of the Finnish news media. While the field of disinformation has been extensively studied internationally, our understanding of the disinformation tactics used within the Finnish context is still highly fragmented. We therefore examine how Finnish news media actors perceive the risk of AI-infused disinformation and their preparedness to counter it effectively. Furthermore, we consider which critical technologies and practices news media actors need to mitigate liquid forms of disinformation. We will present emerging insights from a range of interviews and workshops conducted among Finnish news media actors. The interviews reveal that perceptions of disinformation are twofold: while professionals with special expertise in dis- and misinformation or fact-checking see the need for improved tools and practices for identifying and countering disinformation, many seasoned journalists and editors believe they already have the capacity to critically recognize information influence and see little need to adopt new approaches to fact-checking or the detection of disinformation. By analyzing the roles, practices, and attitudes towards disinformation among news media actors, we aim to pinpoint the key socio-technical practices and tools that could be further leveraged to improve information resilience in the news media ecosystem.

Authors: Minttu Tikka1, Henna Paakki1, Nitin Sawhney1, Sanni Lares1

1Aalto University

Room: B3.09 

Session 10: The Case for Conspiracy: Unraveling the Role of Conspiratorial Beliefs in Disinformation 

Chair: David Blanco Herrero, University of Amsterdam

Greece has witnessed the proliferation of conspiracy theories across a broad spectrum of subjects, and many fact-checking articles are published every month by organizations such as Ellinika Hoaxes, AFP and Fact Review. In recent years, the devastating forest fires the country has experienced have given rise to various conspiracy theories concerning the causes of their outbreak. This paper analyzes the emergence and propagation of conspiracy theories surrounding forest fires in Greece, shedding light on the complex interplay between environmental crises, social media, and misinformation. It aims to understand the underlying causes and the potential social, political, and economic consequences of this phenomenon, including its impact on public trust, decision-making processes, and social cohesion. We use a mixed-methods approach that combines qualitative content analysis and sentiment analysis to investigate the online discourse surrounding forest fires in Greece. Specifically, we examine the social media posts made during the devastating fires in the Evros region in August 2023, analysing posts on Facebook, TikTok, YouTube, X (formerly Twitter), MeWe and Telegram. We also examine the role of the media, political discourse, and online communities in perpetuating and amplifying such narratives. By examining the case of Greece, this research aims to contribute to a broader understanding of the impact of conspiracy theories in environmental crises and their implications for society and governance at the European and global levels, and to underscore the urgent need for proactive measures to address misinformation and conspiracy theories during forest fire events.

Authors: Spyridoula Markou1, Nikolaos Panagiotou1

1Aristotle University of Thessaloniki
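As a hedged illustration of the sentiment-analysis component mentioned in the abstract above, the sketch below uses a multilingual transformer model; the model name is an assumption (its coverage of Greek-language posts would need to be verified), and the example posts are invented.

```python
# Hypothetical sketch: scoring the sentiment of collected social media posts.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",  # assumed model
)

posts = [
    "Οι φωτιές στον Έβρο δεν ήταν τυχαίες...",            # invented example post
    "Ευχαριστούμε τους πυροσβέστες για τον αγώνα τους.",  # invented example post
]

for post, result in zip(posts, sentiment(posts)):
    print(result["label"], round(result["score"], 2), post[:40])
```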

Conspiracy theories permeate today’s (dis)information landscape, shaping and polarizing public perceptions of the COVID-19 pandemic, migration, the Russo-Ukrainian war, and more. Existing analyses often focus on the characteristics of conspiracy theories or on how to debunk them, and tend to overlook how conspiracy theories are embedded within wider contexts of (geo)political warfare, technological infrastructures, and media ecosystems. Our research delves into the multifaceted role of conspiracy theories as drivers of polarization by exploring the ways in which these socio-political and technological factors interlock and reinforce each other. This interplay includes moments of deliberate polarization strategies by influential actors and the strategic use of cybertroopers and automated bots, but also the emergence of alternative media platforms that cater to specific audiences and enable a participatory role in the co-creation and dissemination of conspiratorial disinformation. Our case study focuses on the Netherlands and uses the Russia-Ukraine war as an example to illustrate the transnational, cross-medial, and participatory nature of polarizing conspiracy theories. The case study allows us to make a two-pronged contribution to the existing scholarship. First, we present a conceptual framework that outlines the technologically mediated interaction between top-down and bottom-up engagement with conspiracy theories. We delineate the multitude of actors contributing to the dissemination of conspiracies through deliberate, negligent, or unwitting means, describe the diverse motivations driving them, and illustrate how their involvement with conspiracies is facilitated by the technological affordances of social media. Second, we provide a comprehensive mapping and digital visualization of the most prominent war-related themes and actors within the Dutch alternative media landscape, illustrating their interconnectedness. We demonstrate how the dissemination of foreign war-related conspiracy theories requires proactive involvement from actors within the Dutch alternative media realm to make them accessible, meaningful, and convincing to diverse audiences across various platforms with distinct characteristics.

Authors: Jaron Harambam1, Kris Ruijgrok1, Boris Noordenbos1

1University of Amsterdam

Conspiratorial discourses have been shown to refract established scientific knowledge, subsuming selected elements into over-arching, antagonistic narratives [4]. Empirical work furthermore suggests that this narrative convergence between science and conspiracy theories is facilitated by the interlinking affordances of social media [5, 6]. The present paper aims to contribute to our understanding of this relationship between social media, (scientific) knowledge, and narrative. To this end, it offers an empirical investigation of how public Telegram channels associated with right-wing extremist discourse, through their link-sharing features, mediate between established knowledge and conspiracy narratives. The paper thereby operationalizes the concept of “heterogeneous couplings” [2] and combines transferable approaches from information science with computational methods for text analysis in order to analyze 317M messages crawled from 28K Telegram channels included in the Pushshift Telegram dataset [1]. This analysis starts from a mapping of the latent intellectual structure of the channels as a network, with edges representing the bibliographic coupling frequency (BCF) between these channels [7], that is, the number of links to scientific sources they share, identified on the basis of the OpenAlex knowledge graph [3]. Using community detection algorithms, we then proceed to identify clusters of channels that refract similar kinds of knowledge sources. Finally, we correlate these intellectual communities with their narrative orientation, as inferred from a textual analysis of channel names, descriptions and message contents. Our preliminary analysis reveals a rich intellectual structure in the dataset, with communities either corresponding to specific linguistic communities or bridging linguistic boundaries to refract shared knowledge in light of heterogeneous (narrative) themes pertaining to, among other things, science and technology, (right-wing) politics, literature and philosophy, and conspiracy theories (e.g. QAnon).

Authors: Tom Willaert1, Trisha Meyer1

1Vrije Universiteit Brussel
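A minimal sketch of the bibliographic-coupling and community-detection steps described in the abstract above might look as follows (Python with networkx); the channel names and shared-source sets are invented placeholders, not data from the study.

```python
# Hypothetical sketch: bibliographic coupling network of Telegram channels,
# where edge weights count the scientific sources (e.g. OpenAlex work IDs)
# two channels both link to, followed by community detection.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

channel_refs = {
    "channel_a": {"W1", "W2", "W3"},
    "channel_b": {"W2", "W3", "W9"},
    "channel_c": {"W7", "W9"},
}

G = nx.Graph()
G.add_nodes_from(channel_refs)

for (c1, refs1), (c2, refs2) in combinations(channel_refs.items(), 2):
    bcf = len(refs1 & refs2)        # bibliographic coupling frequency
    if bcf > 0:
        G.add_edge(c1, c2, weight=bcf)

# Cluster channels that refract similar kinds of knowledge sources
communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")
```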

The war in Ukraine generated a swirl of political communication effects, among them fake news, disinformation, and populist appeals, at a time of mounting public pressure over how state institutions and the European Union should react. Perceptions of the war were influenced by a growing number of alternative lines of knowledge circulation, which often contradicted official statements and added to the sense of insecurity. The unfolding of the conflict in Ukraine thus came together with hybrid threats such as disinformation campaigns based on new conspiracy theories, which have been weaponised as never before and employed to justify Russia’s invasion. There is still little scientific evidence on how conspiracy theories were received in countries neighbouring the conflict, which have been affected by the hybrid war. Based on a unique dataset (N = 8,743) collected in Romania, Hungary and Poland since the start of the war using a survey in the form of an online political compass, this paper studies the degree of vulnerability to conspiracy theories in these three countries neighbouring Ukraine. We thus develop a nuanced understanding of what drives people towards the new conspiracy theories in these three countries, all of which are relevant case studies for a comparative approach given their differences in political discourse and party competition, as well as their different public attitudes towards the European Union and NATO. The ambition underpinning the paper is to contribute to identifying the profile of people highly receptive to conspiracy theories and the political effects of such attitudes during times that are unprecedented for the European Union and the world since World War II.

Authors: Mihnea Simion Stoica1, Susana Dragomir1, Ioan Hosu1

1Babeș-Bolyai University

11:45 – 12:00

Room: C10.20

Coffee Break [short]
12:00 – 12:45

Room: C10.20

Closing Keynote

Chair: Claes de Vreese, University of Amsterdam & EDMO

Annika Sehl | Professor at Catholic University of Eichstätt-Ingolstadt 

12:45 – 13:30

Room: C10.20

Lunch Buffet & Conference Close
13:30 – 14:30

Room: B3.09 

Match-Making Session 2

Chair: Jessica Gabriele Walter, Aarhus University & EDMO 

Keynote Speakers

We are honoured to announce that the conference will feature distinguished keynote speakers who are renowned experts in the field of disinformation.

Annika Sehl
Prof. Annika Sehl
Sander van der Linden
Prof. Sander van der Linden

Professor Annika Sehl holds the Chair of Journalism with a Focus on Media Structures and Society at the Catholic University of Eichstätt-Ingolstadt and is a Research Associate at the Reuters Institute for the Study of Journalism (RISJ) at the University of Oxford.

Her research focuses on how digitalisation affects media organisations (especially public service media), journalistic production, the use of journalistic content, and society. She often takes an internationally comparative perspective. Her research is published in books, book chapters and academic journals such as Digital Journalism, European Journal of Communication, International Communication Gazette, Journalism, Journalism Studies, Mass Communication and Society, and Media and Communication.

In addition to her academic experience, Annika Sehl also has practical knowledge and skills in the field of journalism. She trained with the news broadcaster N24 in Berlin, Hamburg and Munich, and worked as a freelance journalist for newspapers in Germany.

For more information about Professor Sehl’s background and accolades, you can read more here, and learn more about her research from her website at her current university.

Professor Sander van der Linden is Professor of Social Psychology in Society in the Department of Psychology at the University of Cambridge and Director of the Cambridge Social Decision-Making Lab. Before coming to Cambridge, he held posts at Princeton and Yale University.

His research interests center around the psychology of human judgment and decision-making. In particular, he is interested in the social influence and persuasion process and how people are influenced by (mis)information and gain resistance to persuasion through psychological inoculation. He is also interested in the study of fake news, media effects, social networks, and belief systems (e.g., conspiracy theories), as well as the emergence of social norms, polarization, reasoning about evidence, and public understanding of risk and uncertainty.

He has won numerous awards for his research on human judgment, communication, and decision-making, including the Rising Star Award from the Association for Psychological Science (APS), the Sage Early Career Award from the Society for Personality and Social Psychology (SPSP), the Frank Prize in Public Interest Research from the University of Florida, and the Sir James Cameron Medal for the Public Understanding of Risk from the Royal College of Physicians. His research papers have received awards from organizations such as the American Psychological Association (APA), the International Association of Applied Psychology (IAAP) and the Society for the Psychological Study of Social Issues (SPSSI).

For more information about Professor van der Linden’s background and accolades, as well as his publications, you can read more here. 

Match-Making Sessions

Unlock Innovation Together: Join our Match-Making Sessions for Collaborative Research across EDMO!

On both conference days, participants in Amsterdam will have the unique opportunity to further explore the potential for collaboration across EDMO by attending match-making sessions. The match-making sessions will be tailored to various research interests of the participants – for example generative AI or elections/politics. 

Together with you, we aim to highlight common research interests, define common ground on next steps in research on disinformation and other information disorders, and discuss the potential of new research approaches. The aim is to get research collaborations started that will benefit not only your research projects but also the fight against disinformation at EU level.

We strongly encourage you to take the chance to explore the diverse and interdisciplinary EDMO research community, and we look forward to discussions and exchanges beyond the usual conference routine.

Contact

For any inquiries regarding the EDMO 2024 Scientific Conference, please do not hesitate to get in touch with the organizers:  

Aqsa Farooq
a.farooq[@]uva.nl 

EDMO Team 

Edmo[@]eui.eu