10 Recommendations by the Taskforce on Disinformation and the War in Ukraine

Introduction

Overview

Recommendations

  1. Establish a permanent body, independent of governments and platforms, with an EU-wide network of centers with a focus on preparing for and responding to ongoing and emergency information challenges
  2. Build a networked infrastructure for educating people about disinformation and media literacy
  3. Require platforms to share data around different types of content, by signing on to the Code of Conduct around Platform-to-Researcher data access
  4. Ensure that technology companies enforce their policies in terms of prohibiting ad funded disinformation, utilizing the expertise of independent and neutral third parties and the appointment of an independent auditor
  5. Construct media monitoring systems sophisticated enough to capture the flows of disinformation across the whole of the information environment in multiple languages
  6. Enforce policies to protect journalists and strengthen media freedom
  7. Implement policies for transparent decision making around content removal to ensure there are agreed-upon standards and processes for archiving content and data so it can be used and understood by prosecutors, policymakers, journalists, fact-checkers, researchers and historians
  8. Put in place funding and support for the mental health and well-being of researchers, fact-checkers and journalists working on war reporting and disinformation investigations
  9. Develop new ethical frameworks for civil society and government initiatives working to fight disinformation
  10. Build an EU-wide pipeline of researchers, university centers, journalists, fact-checkers and other civil society groups with the necessary technical, linguistic and subject-matter knowledge to respond quickly to future information challenges

Conclusions

Reading List

 

Introduction

On March 3, the European Digital Media Observatory (EDMO) established a Taskforce on Disinformation and the War in Ukraine. Chaired by Dr. Claire Wardle, the taskforce includes 18 members representing academia, journalism, media and civil society, from 10 countries across the EU, acting in their personal capacity. The taskforce has met weekly to discuss developments and trends in relation to disinformation in the context of Ukraine and to design and steer different projects.

The focus was on disinformation, defined as false claims and content designed to cause harm or created for financial gain. The group included within this definition content hosted by technology companies; online, print and broadcast reporting; comments by spokespeople; and false information circulating in offline spaces. In addition, the taskforce considered misleading advertising, reporting errors, satire and parody, and clearly identifiable partisan news and commentary when such content was causing harm.

Considering the mission of EDMO, the work of the taskforce did not focus on the security or foreign interference aspects of disinformation related to the war, but rather on understanding the phenomenon by focusing on the content circulating, examining the role of public interest journalism, and researching efforts to build resilience across societies.

This short report lists 10 recommendations for policymakers, technology companies, newsrooms and civil society, based on observations, research activities and discussions over the past three months. Where available, the recommendations are supported by evidence; however, given the short time frame since the invasion, there is unfortunately only limited research to draw on. Therefore, much of what follows is based on the expertise of almost twenty people who have worked on the front lines of analyzing and countering disinformation for many years.

Four members of the taskforce were also members of the EU Commission’s High Level Expert Group on Fake News and Disinformation in the spring of 2018 and are able to compare discussions from that Group as well as subsequent action and inaction.

Overview

Russia’s war in Ukraine has exposed a variety of shifting and evolving disinformation tactics. At the same time, it has shown that the goals of disinformation remain stable, irrespective of tactic or subject matter: to sow division, create confusion, alter the terms of public conversation in liberal democracies and ultimately hijack those conversations altogether.

This war has underscored what we already know from numerous elections and the COVID-19 pandemic: disinformation flows are global and cross-platform, but responses are national or regional and focus predominantly on dominant languages and the largest, most established technology companies. Disinformation is not only a platform problem. Rumors and falsehoods are amplified by the mainstream media, often because of a tendency to cover high-profile figures, such as politicians and spokespeople, irrespective of their credibility. Disinformation spreads across various media: social, mainstream news, as well as cultural and entertainment. The general public often lacks the media literacy skills to detect disinformation, and at times of crisis it appears that many are not reached by the efforts of media literacy educators. Social media algorithms that amplify and accelerate the spread of content are a significant part of the problem; however, while that content is sometimes created by online disinformation agents, it can also be created by news media outlets, too often featuring domestic political actors. This phenomenon is particularly, though not exclusively, visible in countries bordering Russia.

There has certainly been a great deal of discussion in the news media about disinformation and the war, with many stories focused on the false or misleading content that has been circulating. Technology companies have also taken action, by flagging, removing or algorithmically throttling content that breaks their policies (some of which have been created to respond to disinformation related to the war). However, there has been a disproportionate focus on specific examples, such as the resurfacing of ‘old’ out-of-context material that can be understood as demonstrably false, rather than a multi-faceted response that considers the variety of tools and methods available to those seeking to cause harm. Additionally, the focus on dominant languages (by platforms, policymakers and even civil society itself) in select member states leaves large swathes of the EU populace unsupported by technology platforms in their efforts to combat disinformation domestically and regionally; policies and enforcement that consider the whole information ecosystem in all member states and languages are critical if malign actors are to be effectively deterred. Overall, there have been very few attempts by those working on disinformation related to the war in Ukraine to identify and respond to its defining narratives; the approach taken looks more like playing ‘whack-a-mole’ with individual items of content.

We would like to stress that while the regional response to disinformation created by the EU frameworks is better than individual nation-state responses, the global, cross-platform and offline flows of disinformation mean the problem does not stop at the borders of the European Union, nor on the most established platforms. It is tempting for policymakers in western Europe and the US to talk about ‘winning’ the disinformation war in relation to Ukraine. This framing fails to acknowledge that citizens in their countries are not the primary target, and that without a truly global, cross-platform response, disinformation narratives will continue to circulate and make their way back to the EU.

Fundamentally, this war has shown what many of us have known for a long time. We will never be able to ‘solve’ the problem of disinformation. Rather than seeking quick fix ‘technical’ options, there must be serious investment placed on reforming and strengthening media and journalism, media literacy (including digital, critical, news and information literacies), research activities and building resilience globally.

While the war has underscored many of the known characteristics of disinformation and information operations, the platforms, as well as governing bodies and civil society, were simply not prepared. The response has been haphazard and full of duplication, with disproportionate emphasis on monitoring content spreading in western European countries. There have also been notable absences, such as synergies between social and mainstream media monitoring, and systematic media literacy initiatives. And perhaps most troubling, at a time when so much transparency is demanded of the platforms, government responses have failed to provide the same level of transparency, for example around the decisions to take down RT and Sputnik, and around different counter-disinformation initiatives and their funding.

Based on the work and discussions over the past months, and to further support the fight against disinformation and counter respective challenges, the Taskforce makes the following recommendations.

Recommendations

1. Establish a permanent body, independent of governments and platforms, with an EU-wide network of centers with a focus on preparing for and responding to ongoing and emergency information challenges

The challenges exposed by the war in Ukraine have emphasized what many experts and reports have been calling for over the past few years: the need for a new, well-funded, resourced and networked institution that can start to tackle more holistically the problems associated with information that is causing harm. As in the period after World War II, this is a moment in history when there is an urgent need for an institution designed to respond adequately to the contemporary information environment. This body needs to be global (not necessarily led by the US), with regional, national and local centers which can undertake the data analysis necessary to provide the independent scrutiny required by new policy frameworks such as the Digital Services Act, as well as the Code of Practice on Disinformation. It is critical that the body is independent of governments and platforms, so financial support would need to come from a centralized fund, managed independently, drawing funding from different sources, including philanthropy as well as potentially national governments, EU institutions and levies on technology company profits.

We have identified some major issues with the disinformation response, all of which highlight the need for a body that can supercharge efforts and effectively connect the dots. These issues include:

  • Serious duplication of effort with a number of different initiatives appearing over the past few months. Many of them are focused on monitoring disinformation, but with no shared databases, consistent typologies or archiving policies, the disconnected nature of these initiatives is frustrating.
  • Similarly, many of these initiatives are not clear about their mandates, funding or what they intend to do with the data that is being gathered. The lack of transparency is not acceptable when the same is demanded of technology companies.
  • Little to no translation capabilities for tracking cross-border disinformation flows or pretrained open-source computational models for small languages.
  • Few initiatives that included expertise from non-English or non-western European entities and experts, meaning the very different experience of those living in bordering countries has been overlooked too frequently.
  • Almost no consistent monitoring of disinformation circulating on broadcast or print media, as well as rumors being repeated in ‘offline’ settings. Without that, it is very difficult to track when and how narratives are breaking through and gaining ground.
  • No ability across the EU to independently oversee the actions taken by the technology companies, for example, the impact of the additional labels on state-sponsored accounts.
  • No ability to encourage the platforms to harmonise their terms of service and moderation policies and enforcement measures to incorporate public policy considerations.
  • While EDMO is working on many aspects covered above, work remains to build a coordinated and sustainable effort of sufficient scale and reach to share curricula and other initiatives related to media literacy regarding disinformation.

We propose the establishment of a permanent body, independent of governments and platforms, with significant multi-year funding commensurate with the size of the threats. It would provide or oversee third-party scrutiny mechanisms for any platform interventions and actions, focusing on the information environment as a whole, including the role and impact of the (social) media, politicians, official spokespeople and community leaders spreading false and misleading information. This body would prioritize the need for ongoing, comprehensive resiliency-building programs.

This body should also create a crisis management unit, composed of, among others, human rights lawyers, ethicists, academics, fact-checkers, media literacy experts, media practitioners, media councils and other self-regulatory bodies, and, depending on the crisis, appointed sector-specific experts.

It would not replace EDMO, but would supercharge efforts and play a critical role in creating the infrastructure to ensure this work has the most impact.

2. Build a networked infrastructure for educating people about disinformation and media literacy

The Taskforce undertook a mapping of media literacy responses from a wide range of organizations across Europe. The results suggest a highly uneven and partial response from media literacy (including digital, critical, news and information literacies) initiatives. Arguably, media literacy organizations were simply not ready, or not resourced, to mount a scalable and robust response to the war in Ukraine. Furthermore, it is our impression, although further research would be needed to evidence it, that the initiatives that were mounted reached neither the majority of the general public (most initiatives sit on an organizational website rather than reaching into people’s everyday lives) nor the more susceptible or at-risk groups likely to be disproportionately affected by the spread of disinformation.

With valuable exceptions, we conclude that the European disinformation and media literacy community has been insufficiently prepared for a fast-moving crisis such as the war in Ukraine, even though the sudden explosion of disinformation over COVID-19 vaccines that preceded it should already have alerted us. We recommend that key media literacy actors have a response ready for any future crisis, as well as sustaining their activities to improve the European population’s baseline media literacy.

Priorities for a networked infrastructure, learning from the experience of media literacy initiatives relating to the war in Ukraine, include the need:

  • to target distinct audiences – opinion leaders, politically active users, the general public, children and young people, older people, disadvantaged and marginalized groups
  • to engage valuable key actors including but going beyond educators – libraries, public service media, universities, journalists, press and media councils, specialist media literacy organizations
  • to recognize and respond to the concerns of those most directly affected by mis/disinformation (e.g. in bordering countries, refugees)
  • to ensure that each organization is supported by the activities of others, can learn from the good practice (and mistakes) of others and that outcomes are independently evaluated

Respective network(s) must:

  • be adequately funded to ensure outreach, otherwise resources are created but not widely shared, which risks adding to existing information inequalities
  • work both to anticipate future crises and to respond quickly as new forms of disinformation arise; the optimal approach will likely be to sustain a corpus of key messages and effective mechanisms for awareness raising and education, and then adapt these as needed to specific circumstances

3. Require platforms to share data around different types of content, by signing on to the Code of Conduct around Platform-to-Researcher data access

Currently, there is some access to some data from some of the larger platforms, but most data is impossible to access. There is an urgent need for extended access to data for independent research, in order to monitor and understand the spread of disinformation. On March 16, 2022, this Taskforce sent a request for data related to the war in Ukraine to Alphabet, Meta, Pinterest, Snapchat, Telegram, TikTok and Twitter. None of the companies contacted provided the data.

As outlined in the Report of the European Digital Media Observatory’s Working Group on Platform-to-Researcher Data Access, published on May 31, 2022, a draft code of conduct demonstrates how platform data can be shared safely and ethically with independent researchers. Without this type of access, the scope and scale of the problem remain impossible to assess, and we have no benchmarks by which to measure the effectiveness of any interventions.

In addition, the transparency reports provided by the main technology companies currently lack sufficient detail about moderated content, mis- and disinformation removal and demotion actions, and detection of and actions taken against problematic users and behavior. A key shortcoming is the uneven level of transparency reporting across EU countries and languages, with counter-disinformation measures in “smaller” markets and countries often lagging behind. Another key issue is the under-funding and under-staffing of the independent fact-checking organizations upon which the companies primarily rely, especially in smaller countries.

Thus, the volume and breadth of their debunks is severely limited, which in turn gives the false impression in transparency reports that platforms are limiting exposure to disinformation far more effectively than they actually are. Lastly, transparency reports currently fail to address the cross-platform nature and spread of problematic content.

To help overcome these shortcomings, we advocate for the creation of a multi-platform, real-time transparency dashboard. In order to facilitate cross-platform transparency, each technology company needs to provide daily data on a per-EU-country and per-language basis, in a standardized, structured format (e.g. defined by the independent body outlined in recommendation 1), about the most engaging and most propagated content, under the following separate categories: moderated problematic content, mainstream media articles, other unmoderated over-performing content from websites, and other unmoderated over-performing content from other social media sites. In addition, independent academic researchers need more detailed graph data in order to detect the flow of mis- and disinformation and to create a deeper understanding of coordinated influence operations.

The independent body would host this common, cross-platform database in a secure, privacy preserving manner. It should also host a real-time dashboard that provides researchers, fact-checkers and policymakers not only with aggregated statistics such as the number of posts removed by a given platform on a given date in a particular EU country, but also enable them to drill deeper to investigate specific countries, topics, and URLs, as well as to compare the reach and engagement of problematic vs trustworthy content.
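As a sketch of what such a standardized, structured format might look like, the following models a single daily, per-country, per-language transparency record; all field and category names are hypothetical placeholders to be defined by the independent body, not an existing schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

# Hypothetical content categories, following the four named above.
CATEGORIES = (
    "moderated_problematic",
    "mainstream_media",
    "unmoderated_web",
    "unmoderated_social",
)

@dataclass
class TransparencyRecord:
    """One platform's daily report for a single EU country and language."""
    platform: str
    country: str        # ISO 3166-1 alpha-2 code, e.g. "SK"
    language: str       # ISO 639-1 code, e.g. "sk"
    report_date: date
    # Engagement counts for the most propagated content, per category.
    engagement: dict = field(default_factory=lambda: {c: 0 for c in CATEGORIES})
    posts_removed: int = 0

record = TransparencyRecord(
    platform="example-platform",
    country="SK",
    language="sk",
    report_date=date(2022, 6, 1),
)
record.engagement["moderated_problematic"] = 1234
record.posts_removed = 56

# Serialize to a plain dict for the shared cross-platform database.
payload = asdict(record)
```

Because every platform would emit the same fields, records like this could be aggregated by country, language or date in the proposed dashboard, and drilled into for specific topics or URLs.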

4. Ensure that technology companies enforce their policies in terms of prohibiting ad funded disinformation, utilizing the expertise of independent and neutral third parties and the appointment of an independent auditor

Ad tech publisher and advertising policies on disinformation are often neither comprehensive nor appropriately enforced. Most ad tech policies that attempt to define harmful content as a set of categories fail to capture the full scope of disinformation and the manner in which it can evolve. Where publisher policies do exist, they are ineffective and/or inconsistently enforced.

This has been exemplified by the ongoing disinformation related to the war in Ukraine. Where ad tech companies have publisher policies on the conflict, monetization of harmful disinformation has frequently been observed. While some ad tech companies have taken action in relation to the war in Ukraine, the scope of their responses has been exceedingly narrow. Ad tech companies that defund by incident (at the article level) rather than systematically (at the site level) continue to profit from disinformation that causes harm.

A stronger regulatory regime that targets the monetization of disinformation and commits to disrupting the financial incentives for creating harmful content is needed to protect citizens and brands. Engaging content is amplified by algorithms as it enables the maximum number of eyeballs to be monetized by ads. Disinformation is often the most engaging content by design.

Additionally, ad tech companies must use quality assessments of news sites that rate disinformation risk. Such assessments could be used for a) quality signals in ranking and recommender algorithms, b) monetization decisions, c) media pluralism assessments that capture the full scope of disinformation and the manner in which it evolves, and d) reducing the incentive to create disinformation by breaking the automatic link between engaging disinformation content and the ads placed next to it.
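One way to picture how such risk ratings could feed into ranking and monetization decisions is sketched below; the sites, scores and the simple linear discount are all invented for illustration and do not describe any existing platform mechanism.

```python
# Toy ranking comparison: raw engagement vs. engagement discounted by a
# third-party disinformation-risk score in [0, 1] (1 = highest risk).
# All sites and scores are fabricated for illustration.

items = [
    {"url": "site-a/article", "engagement": 9000, "risk": 0.9},
    {"url": "site-b/article", "engagement": 5000, "risk": 0.1},
    {"url": "site-c/article", "engagement": 3000, "risk": 0.2},
]

def quality_adjusted(item):
    # Discount raw engagement by the assessed disinformation risk,
    # breaking the automatic link between engagement and reach.
    return item["engagement"] * (1.0 - item["risk"])

by_engagement = sorted(items, key=lambda i: i["engagement"], reverse=True)
by_quality = sorted(items, key=quality_adjusted, reverse=True)
```

Under raw engagement, the riskiest site tops the list; with the discount applied, lower-risk sites rank first, which is the incentive shift the recommendation describes.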

As part of the implementation of the Digital Services Act, online platforms will be required to conduct risk assessments of intentional manipulation of their service, which will rate the risk of disinformation through new data access and scrutiny mechanisms. Very Large Online Platforms (VLOPs) will have to mitigate these risks appropriately and successfully, or face sanctions. The DSA could consider instructing platforms to use third-party assessments of the disinformation risk of their activities and customers (i.e. sites). This would limit the spread of disinformation and ensure that the determination of what is and is not disinformation is not solely in the hands of tech companies whose business models rely on engaging content. Such risk assessments could be used to downrank and limit the influence of disinformation, preserving the right to freedom of online speech while curtailing the reach and monetization of harmful content. The DSA must ensure that the Code of Practice on Disinformation guarantees that advertising and ad publishing policies are aligned and strictly enforced. The future delegated acts (non-legislative acts that supplement the legislation) will be crucial in determining how the DSA is implemented.

5. Construct media monitoring systems sophisticated enough to capture the flows of disinformation across the whole of the information environment in multiple languages

While conversations about disinformation tend to center themselves on technology companies, as we have seen for many decades, the media play a critical role in the disinformation ecosystem. There are a number of reasons for this, including ‘media capture’ (when vested interests use media outlets to push certain positions or viewpoints), disinformation agents targeting newsrooms with source hacking techniques (specific techniques designed to ‘dupe’ journalists and spokespeople) or when newsrooms amplify misleading content, particularly from elected officials or spokespeople.

While there are a number of groups monitoring online disinformation (using ad hoc approaches and dashboards often built on scraped data; see recommendation 3), there has been very little work done to monitor broadcast (radio and TV) and print media output in real time. While there are plenty of ethics codes, and general oversight provided by press and media councils, there is no real-time understanding of the ways in which disinformation narratives are being promoted.

This failure to capture the spread of disinformation through ‘traditional media’ is particularly concerning for countries located on Russia’s borders. As taskforce members from these countries stressed, pro-Putin narratives about the war have been ‘mainstreamed’ by many broadcast and print media in these bordering countries, and the problem is far more serious than is assumed by those who believe disinformation lives only on the Russian state-controlled channels RT and Sputnik.

Taskforce members also highlighted the ways in which Russian-funded outlets in their countries frequently promoted Western fringe thinkers and conspiracy theorists, often using clips from western news outlets. They argued that this tactic has been particularly successful in EU countries with less robust media systems and a shorter track record of EU membership, where the “Western” appeal, including the appeal to the credibility of “Western” news outlets, is a very persuasive device.

There is no ‘Crowdtangle’ for the news media. There are some attempts to rectify this; for example, the San Francisco-based Internet Archive recently released a ‘Belarusian, Russian and Ukrainian TV News Archive’ that uses Optical Character Recognition (OCR) technology to allow researchers to query keywords in closed captions. While powerful, this is designed to capture output for researchers and historians, rather than as a real-time tool to help disinformation analysts monitor emerging trends.
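The kind of query such an archive supports can be illustrated with a toy keyword search over caption text; the captions below are fabricated, and a real analysis would go through the archive's own search interface rather than code like this.

```python
# Toy keyword search over OCR-extracted TV captions.
# Each caption is (channel, timestamp, text); all data here is fabricated.

captions = [
    ("channel-1", "2022-03-01T19:02:10", "Officials deny the reports"),
    ("channel-2", "2022-03-01T19:05:44", "Claims of staged footage circulate"),
    ("channel-1", "2022-03-01T20:11:03", "Staged footage claim repeated by guest"),
]

def search(captions, keyword):
    """Return (channel, timestamp) pairs whose caption text mentions keyword."""
    kw = keyword.lower()
    return [(ch, ts) for ch, ts, text in captions if kw in text.lower()]

hits = search(captions, "staged footage")
```

A real-time monitoring tool would run queries like this continuously over a live caption feed and alert analysts when a tracked narrative starts appearing across channels.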

The lack of technology and monitoring infrastructure for the news media is causing a serious gap in knowledge and has impacted the ways in which disinformation is discussed in the context of the war. Understanding how false claims and narratives flow between and across social media, traditional media and offline spaces is critical. Without this level of understanding, we will continue to fail in our responses to disinformation.

We therefore recommend serious investment in building a news monitoring infrastructure, both the technology to capture and search broadcast and print sources, as well as the training of analysts. This monitoring unit should be housed in the body outlined in the first recommendation listed in this report, so that the analysis of media output can be integrated with the capture of disinformation narratives emerging online.

6. Enforce policies to protect journalists and strengthen media freedom

According to the Council of Europe Platform for the Protection of Journalism and Safety of Journalists, as of 1 June, 12 journalists and media workers have been killed in the field in Ukraine. It is critical that sufficient funding is made available to ensure that all journalists and fixers working in war zones have the necessary equipment and utmost protection. Media organizations have a duty of care when sending their staff, including freelancers, into war zones.

Journalists covering conflicts are afforded protection under international humanitarian law. The Geneva Conventions of 1949 and their Additional Protocols set out rules to protect people who are not taking part in the fighting and those who can no longer fight. Additional Protocol I specifies that journalists who are engaged in professional missions in areas of armed conflict must be considered civilians and must be protected as such, so long as they take no action adversely affecting their status as civilians. This means that all parties to a conflict must protect journalists, avoid deliberate attacks against them and uphold their rights in case they are captured. In addition, the Rome Statute of the International Criminal Court establishes that intentionally directing attacks against civilians, and therefore also against journalists who are not engaged in the hostilities, constitutes a war crime. These protections must be upheld.

7. Implement policies for transparent decision making around content removal to ensure there are agreed-upon standards and processes for archiving content and data so it can be used and understood by prosecutors, policymakers, journalists, fact-checkers, researchers and historians

There are obvious tensions involved in responding to disinformation. Removing content that is causing harm often feels instinctively like the right choice, but any decisions need to be balanced with a full analysis of any unintended consequences, whether that is preventing journalists or fact-checkers from reporting on disinformation (as we’re seeing in the Netherlands where journalists can’t access or report on RT because of bans), slowing down lawyers investigating potential war crimes, or stopping historians from fully understanding contemporary events.

Archiving and preserving digital content around war and conflict is critical. Despite many calls by human rights organizations for systematic archiving policies and technologies as content was lost around the Syrian conflict, little was done, and the impact of that failure to act has been demonstrated in the context of the war in Ukraine. Certainly, civil society organisations and individuals are using varying strategies and approaches to collect material when they see it and find it worthy of storing. The primary tools are archiving services and browser extensions such as perma.cc; however, many individuals cannot afford the licenses, do not know that archiving is best practice, or work online in an environment where content is being removed by uploaders or platforms before it can be preserved.

From a technology perspective, there should be a coordinated framework, involving both policies as well as technological infrastructure, for archiving content, which should be facilitated and supported by as many technology companies as possible. Platforms need to make sure all digital content dealing with war and conflict uploaded to their platforms is preserved and made available to respective stakeholders following clearly defined access criteria. This also applies to content that is removed because it has violated a platform’s terms and conditions. Furthermore, standards for (machine-readable) annotation and archiving should be agreed upon and followed.
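One possible shape for such machine-readable annotation is sketched below: an archival record pairing a content hash with provenance and moderation status. Every field name here is an assumption for illustration, not an agreed standard.

```python
import hashlib
import json
from typing import Optional

def make_archive_record(content: bytes, source_url: str, platform: str,
                        removed: bool, removal_reason: Optional[str]) -> dict:
    """Build a machine-readable archival record for one piece of content.

    The SHA-256 digest lets later investigators verify that the preserved
    bytes match what was originally captured.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "source_url": source_url,
        "platform": platform,
        "removed_by_platform": removed,
        "removal_reason": removal_reason,  # e.g. a terms-of-service clause
        # Removed content stays preserved but behind defined access criteria.
        "access_tier": "restricted" if removed else "public",
    }

record = make_archive_record(
    content=b"<html>example content</html>",
    source_url="https://example.org/post/1",
    platform="example-platform",
    removed=True,
    removal_reason="graphic-violence-policy",
)
serialized = json.dumps(record, indent=2)
```

A record like this keeps removed content available to prosecutors, researchers and historians under access criteria, without republishing it, which is the balance the recommendation calls for.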

In terms of decisions made by state actors, the Taskforce recommends that all decisions that impact free expression and media freedom be held to the standards set out in international human rights law, including being prescribed by law, necessary and proportionate. It underlines that consistency, transparency and due process are needed even more urgently for state action than for private action, including in decisions to censor specific outlets or voices.

8. Put in place funding and support for the mental health and well-being of researchers, fact-checkers and journalists working on war reporting and disinformation investigations

Journalists covering war or conflict in person “from the field” should clearly be prepared before they are sent on location. An increasingly serious issue, but one that receives less attention, is the mental impact of dealing with potentially traumatizing material online and far away from war zones, such as graphic digital imagery or other disturbing digital material. In addition, journalists, fact-checkers and disinformation researchers, particularly women and people of color, receive unacceptable levels of online harassment and abuse.

Journalists and investigators are often not adequately prepared for these kinds of activities: they are not fully aware of the potential consequences, not always equipped with appropriate handling tactics, and not aware of the relevant coping mechanisms.

In order to protect journalists and investigators who cover war, conflict and disinformation, we recommend:

  • providing adequate backup and support for all journalists who cover war and conflict, regardless of employment status, providing them with a basic safety net they can use should they suffer serious consequences as a result of reporting on location or remotely;
  • providing adequate training and preparation for journalists and investigators carrying out investigations online;
  • raising awareness about secondary / vicarious trauma and how to avoid it;
  • making avoidance techniques and coping mechanisms part of any teaching and training activities (e.g. in journalism schools etc.);
  • raising awareness among the journalism community as well as management to take PTSD (post-traumatic stress disorder) and other psychological matters seriously;
  • continuously working on ending stigmatization when it comes to admitting to / suffering from psychological illness;
  • fostering mental well-being of journalists and investigators whenever possible, by whatever means;
  • ensuring technology companies invest seriously in interventions to respond to online harassment and abuse;
  • making available adequate resources to support the above goals by supporting relevant initiatives and activities.

9. Develop new ethical frameworks for civil society and government initiatives working to fight disinformation

Ethical codes and guidelines are in place for journalists and newsrooms, but in the field of disinformation studies, the only ethical codes are fact-checking codes, notably the International Fact-Checking Network Code of Principles and the soon-to-be-launched European Fact-Checking Code. The absence of more holistic ethics codes has, we believe, resulted in some troubling aspects of the response to disinformation related to the war in Ukraine, for example:

  • No transparency around the different initiatives working on these issues, with coalitions and collaborations being built without disclosing their membership.
  • No transparency around funders.
  • No transparency about the AI models and algorithms being used to detect disinformation.
  • No transparency about who has access to data and how it is going to be shared and archived.

When we call on the platforms for increased transparency, we must recognize that the same principles should apply to all those working in the disinformation space.

10. Build an EU-wide pipeline of researchers, university centers, journalists, fact-checkers and other civil society groups with the necessary technical, linguistic and subject-matter knowledge to respond quickly to future information challenges

Even though there is more awareness of disinformation, the skill level in Europe is low and differs between member states. For example, few universities have the capability to ingest the amount of data that is necessary to analyze platform data at scale. Similarly, work is being published by relatively small non-profits reliant on philanthropic funding. While EDMO and the network have been doing good work, it is very clear that significantly more resources need to be invested into universities and research centers across the EU in order to do the real-time research and archiving necessary during crises such as this one.

This is particularly the case when considering data storage and analysis capacities. We have identified a strong need for special data access APIs, enabling independent research studies to validate platform content and user moderation efforts and provide independent evaluations of the impact of platform measures. Twitter should be commended for providing a Ukraine data endpoint for researchers, but we would argue that even Twitter needs to go further: researchers need raised API quotas that match the large scale and high velocity of war-related content, as well as access to country-labeled tweets, in order to carry out EU country-specific and comparative analysis at scale more easily and reliably.
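To make concrete why country labels matter for comparative analysis, the toy sketch below aggregates tweet volume per member state and day. The record format is invented for illustration: it shows what analysis would become possible if platforms attached a country label, and is not the current Twitter API response schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical records, illustrating what tweets might look like once
# platforms attach a country label; NOT an actual Twitter API schema.
tweets = [
    {"id": "1", "created_at": "2022-03-04T10:00:00Z", "country": "PL"},
    {"id": "2", "created_at": "2022-03-04T11:30:00Z", "country": "PL"},
    {"id": "3", "created_at": "2022-03-05T09:15:00Z", "country": "DE"},
]

def volume_by_country_and_day(records):
    """Count tweets per (country, day) pair: the basic unit of an
    EU-wide, cross-country comparative analysis."""
    counts = Counter()
    for t in records:
        # Python's fromisoformat does not accept a trailing "Z", so
        # normalize it to an explicit UTC offset first.
        day = datetime.fromisoformat(
            t["created_at"].replace("Z", "+00:00")).date()
        counts[(t["country"], day.isoformat())] += 1
    return counts

print(volume_by_country_and_day(tweets))
```

Without a reliable country label on each record, this kind of per-member-state time series has to be approximated from profile metadata or language, which is exactly the fragility the recommendation aims to remove.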

However, there are currently very few universities or research centers across the EU with the capability to make effective use of these types of APIs. Currently, platforms ‘cherry-pick’ which universities they want to work with, leading to disparities and opaque relationships between platforms and researchers. To prevent this, there need to be more universities and research institutes with the ability to store and work with very large datasets for computational research and analysis. Addressing this will require a significant commitment of funding.

We recommend that the body outlined in recommendation 1 should:

  1. act as a broker between platforms, fact-checkers and researchers to enable expedited access to data;
  2. provide vetted researchers from the respective countries with the necessary data storage and processing infrastructure, as well as provide EU-wide training and capacity building for researchers and fact-checkers, so they can acquire the necessary data analysis expertise;
  3. provide EU-wide native language NLP models;
  4. foster cross-country collaboration between researchers, so cross-lingual and cross-cultural studies become the norm, rather than the exception.
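As a minimal illustration of point 3 above, the sketch below routes text to a per-language model by ISO code, falling back to a multilingual model for languages without dedicated coverage. The registry and model names are placeholders, not real published models:

```python
# Hypothetical registry mapping ISO 639-1 codes to native-language
# models; the names are placeholders for illustration only.
MODEL_REGISTRY = {
    "pl": "claim-detector-pl",
    "de": "claim-detector-de",
    "uk": "claim-detector-uk",
}
FALLBACK_MODEL = "claim-detector-multilingual"

def pick_model(language_code):
    """Prefer a dedicated native-language model; fall back to a
    multilingual model for languages without dedicated coverage."""
    return MODEL_REGISTRY.get(language_code, FALLBACK_MODEL)

print(pick_model("pl"))
print(pick_model("fi"))
```

The design choice matters for equity across member states: a multilingual fallback keeps the system working everywhere, while the registry makes visible which languages still lack dedicated, higher-quality models.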

Conclusions

Finally, the Taskforce acknowledges the key role of EDMO as a platform with important convening power, guaranteeing a multi-stakeholder approach. As these recommendations demonstrate, these issues require multidisciplinary expertise, spanning different academic disciplines along with different industry experience, such as journalism, fact-checking, education, technology, law, policy and fundamental rights.

One of the most important lessons from the past months has been that the work needs to be done by a range of institutions that are also critical pillars of the information ecosystem, beyond simply requiring the platforms to take more action.

Reading List

1) Establish a permanent body, independent of governments and platforms, with an EU-wide network of centers with a focus on preparing for and responding to ongoing and emergency information challenges

Chlebna, Camilla, and James Simmie. 2018. ‘New Technological Path Creation and the Role of Institutions in Different Geo-Political Spaces’. European Planning Studies 26(5): 969–87.

Khanna, Tarun. ‘When Technology Gets Ahead of Society’. Harvard Business Review July-August 2018: 88–95.

2) Build a networked infrastructure for educating people about disinformation and media literacy

Cortesi, Sandra et al. 2020. ‘Youth and Digital Citizenship+ (Plus): Understanding Skills for a Digital World’. Berkman Klein Center for Internet & Society. https://dash.harvard.edu/handle/1/42638976 (June 15, 2022).

Edwards, Lee et al. 2021. Rapid Evidence Assessment on Online Misinformation and Media Literacy: Final Report for Ofcom. Ofcom. http://eprints.lse.ac.uk/110866/2/rapid_assessment_on_online_misinformation_and_media_literacy_puvblished.pdf (June 15, 2022).

Howard, Philip et al. 2021. Digital Misinformation / Disinformation and Children. UNICEF’s Office of Global Insight and Policy. Rapid Analysis. https://www.unicef.org/globalinsight/media/2096/file/UNICEF-Global-Insight-Digital-Mis-Disinformation-and-Children-2021.pdf (June 15, 2022).

UNESCO. 2022. ‘Why Mother Language-Based Education Is Essential’. https://www.unesco.org/en/articles/why-mother-language-based-education-essential (June 15, 2022).

3) Require platforms to share data around different types of content, by signing on to the Code of Conduct around Platform-to-Researcher data access

Allen, Jeff. 2022. The Integrity Institute’s Analysis of Facebook’s Widely Viewed Content Report. https://integrityinstitute.org/widely-viewed-content-analysis-tracking-dashboard (June 15, 2022).

Bontcheva, Kalina et al. 2020. Balancing Act: Countering Digital Disinformation While Respecting Freedom of Expression. Broadband Commission Research Report on ‘Freedom of Expression and Addressing Disinformation on the Internet’. International Telecommunication Union (ITU). https://www.broadbandcommission.org/Documents/working-groups/FoE_Disinfo_Report.pdf (June 15, 2022).

Deloitte. 2020. ‘Develop Real-Time Sensing / Red Flag Reporting Dashboard’. https://www2.deloitte.com/global/en/pages/about-deloitte/articles/develop-real-time-sensing-red-flag-reporting-dashboard.html (June 15, 2022).

European Parliament. Directorate General for Parliamentary Research Services. 2019. Automated Tackling of Disinformation: Major Challenges Ahead. LU: Publications Office. https://data.europa.eu/doi/10.2861/368879 (June 20, 2022).

Pasquetto, Irene, Briony Swire-Thompson, and Michelle A. Amazeen. 2020. ‘Tackling Misinformation: What Researchers Could Do with Social Media Data’. Harvard Kennedy School Misinformation Review. https://misinforeview.hks.harvard.edu/article/tackling-misinformation-what-researchers-could-do-with-social-media-data/ (June 17, 2022).

4) Ensure that technology companies enforce their policies in terms of prohibiting ad funded disinformation, utilizing the expertise of independent and neutral third parties and the appointment of an independent auditor

Bayer, Judit et al. 2021. Disinformation and Propaganda: Impact on the Functioning of the Rule of Law and Democratic Processes in the EU and Its Member States: – 2021 Update –. European Parliament. https://www.europarl.europa.eu/RegData/etudes/STUD/2021/653633/EXPO_STU(2021)653633_EN.pdf (June 15, 2022).

Fagan, Craig, and Lucas Wright. 2020. Research Brief: Ad Tech Fuels Disinformation Sites in Europe – The Numbers and Players. Global Disinformation Index. https://aej.org/wp-content/uploads/2021/04/GDI_Adtech_EU.pdf (June 15, 2022).

Global Disinformation Index staff. 2019. The Quarter Billion Dollar Question: How Is Disinformation Gaming Ad Tech? Global Disinformation Index. https://www.disinformationindex.org/research/2019-9-1-the-quarter-billion-dollar-question-how-is-disinformation-gaming-ad-tech/ (June 15, 2022).

Melford, Clare, and Craig Fagan. 2019. Cutting the Funding of Disinformation: The Ad-Tech Solution. Global Disinformation Index. https://www.disinformationindex.org/research/2019-5-1-cutting-the-funding-of-disinformation-the-ad-tech-solution/ (June 15, 2022).

5) Construct media monitoring systems sophisticated enough to capture the flows of disinformation across the whole of the information environment in multiple languages

Allen, Jennifer, Baird Howland, Markus Mobius, David Rothschild, and Duncan J. Watts. 2020. ‘Evaluating the Fake News Problem at the Scale of the Information Ecosystem’. Science Advances 6(14).

Benaissa Pedriza, Samia. 2021. ‘Sources, Channels and Strategies of Disinformation in the 2020 US Election: Social Networks, Traditional Media and Political Candidates’. Journalism and Media 2(4): 605–24.

Evanega, Sarah, Mark Lynas, Jordan Adams, and Karinne Smolenyak. ‘Coronavirus Misinformation: Quantifying Sources and Themes in the COVID-19 “Infodemic”’. https://allianceforscience.cornell.edu/wp-content/uploads/2020/10/Evanega-et-al-Coronavirus-misinformation-submitted_07_23_20-1.pdf (June 15, 2022).

Tsfati, Yariv, H. G. Boomgaarden, J. Strömbäck, R. Vliegenthart, A. Damstra, and E. Lindgren. 2020. “Causes and Consequences of Mainstream Media Dissemination of Fake News: Literature Review and Synthesis.” Annals of the International Communication Association 44 (2): 157–73.

UNESCO. 2018. Journalism, Fake News & Disinformation: Handbook for Journalism Education and Training. https://unesdoc.unesco.org/ark:/48223/pf0000265552

6) Enforce policies to protect journalists and strengthen media freedom

Council of Europe Committee of Ministers. 1996. ‘Recommendation of the Committee of Ministers to Member States on the Protection of Journalists in Situations of Conflict and Tension’. https://rm.coe.int/16804ff5a1 (June 15, 2022).

Council of Europe. 2015. ‘Recommendations and Declarations of the Committee of Ministers of the Council of Europe in the Field of Media and Information Society’. https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=0900001680645b44 (June 15, 2022).

Council of Europe. 2022. ‘Not a Target – the Need to Reinforce the Safety of Journalists Covering Conflicts’. https://www.coe.int/en/web/kyiv/-/not-a-target-the-need-to-reinforce-the-safety-of-journalists-covering-conflicts (June 15, 2022).

European Commission. 2021. ‘Recommendation on the Protection, Safety and Empowerment of Journalists’. https://digital-strategy.ec.europa.eu/en/library/recommendation-protection-safety-and-empowerment-journalists

7) Implement policies for transparent decision making around content removal to ensure there are agreed-upon standards and processes for archiving content and data so it can be used and understood by prosecutors, policymakers, journalists, fact-checkers, researchers and historians

Abrahams, Fred. ‘When War Crimes Evidence Disappears: Social Media Companies Can Preserve Proof of Abuses’. Human Rights Watch. https://www.hrw.org/news/2022/05/25/when-war-crimes-evidence-disappears (June 15, 2022).

Al Khatib, Hadi, and Dia Kayyali. 2019. ‘YouTube Is Erasing History’. The New York Times. https://www.nytimes.com/2019/10/23/opinion/syria-youtube-content-moderation.html?referringSource=articleShare (June 15, 2022).

Dubberley, Sam, Alexa Koenig, and Daragh Murray, eds. 2020a. ‘How to Preserve Open Source Information Effectively’. In Digital Witness: Using Open Source Information for Human Rights Investigation, Documentation, and Accountability, Oxford New York, NY: Oxford University Press.

Human Rights Center at UC Berkeley School of Law. 2021. DIGITAL LOCKERS: Archiving Social Media Evidence of Atrocity Crimes. Human Rights Center at UC Berkeley School of Law. https://humanrights.berkeley.edu/sites/default/files/digital_lockers_report5.pdf (June 15, 2022).

Human Rights Watch. 2020. “Video Unavailable” Social Media Platforms Remove Evidence of War Crimes. Human Rights Watch. https://www.hrw.org/report/2020/09/10/video-unavailable/social-media-platforms-remove-evidence-war-crimes (June 15, 2022).

8) Put in place funding and support for the mental health and well-being of researchers, fact-checkers and journalists working on war reporting and disinformation investigations

Dubberley, Sam, and Michele Grant. 2017. Journalism and Vicarious Trauma. A Guide for Journalists, Editors and News Organisations. First Draft. https://firstdraftnews.org/wp-content/uploads/2017/04/vicarioustrauma.pdf (June 15, 2022).

Dubberley, Sam, Elizabeth Griffin, and Haluk Mert Bal. 2015. Making Secondary Trauma a Primary Issue. A Study of Eyewitness Media and Vicarious Trauma on the Digital Frontline. Eyewitness Media Hub. http://eyewitnessmediahub.com/uploads/browser/files/Trauma%20Report.pdf (June 15, 2022).

Ellis, Hannah. 2018. ‘How to Prevent, Identify and Address Vicarious Trauma — While Conducting Open Source Investigations in the Middle East’. Bellingcat. https://www.bellingcat.com/resources/how-tos/2018/10/18/prevent-identify-address-vicarious-trauma-conducting-open-source-investigations-middle-east/.

9) Develop new ethical frameworks for civil society and government initiatives working to fight disinformation

Association for Computing Machinery. 2018. ‘ACM Code of Ethics and Professional Conduct’. https://www.acm.org/code-of-ethics (June 15, 2022).

The Media Councils Debates. 2022. Facing the Challenges of the Digital Age. January 2022. https://www.lecdj.be/de/projekte/media-councils-in-digital-age/

Pancake, Cherri M. 2018. ‘Programmers Need Ethics When Designing the Technologies That Influence People’s Lives’. The Conversation. https://theconversation.com/programmers-need-ethics-when-designing-the-technologies-that-influence-peoples-lives-100802 (June 15, 2022).

Stanford University. 2015. ‘Computer and Information Ethics’. Stanford Encyclopedia of Philosophy Archive. https://plato.stanford.edu/archives/sum2020/entries/ethics-computer/#InfRicInfPoo (July 15, 2022).

10) Build an EU-wide pipeline of researchers, university centers, journalists, fact-checkers and other civil society groups with the necessary technical, linguistic and subject-matter knowledge to respond quickly to future information challenges

Bisson, Robin. 2022. ‘Academia Urged to Join Fight against Online Misinformation’. *Research. https://www.researchprofessionalnews.com/rr-news-uk-charities-and-societies-2022-1-academia-urged-to-join-fight-against-online-misinformation/ (June 15, 2022).

CBInsights. 2018. Who Will Lead The Fight Against Online Disinformation & Propaganda? https://www.cbinsights.com/research/fighting-online-disinformation-propaganda-conference/ (June 15, 2022).

Iaione, Christian. 2021. ‘Enabling Co-Governance CO4CITIES THIRD TRANSNATIONAL MEETING 25-26 November 2021’. http://www.comune.torino.it/benicomuni/bm~doc/iaione_co4cities-budapest.pdf (June 15, 2022).

Jukić, Tina et al. 2019. ‘Collaborative Innovation in Public Administration: Theoretical Background and Research Trends of Co-Production and Co-Creation’. Administrative Sciences 9(4): 90.