Countering disinformation: a whole-of-society approach beyond traditional frameworks

Paula Gori
Secretary-General and Coordinator of EDMO

Abstract:

The response to disinformation is a whole-of-society one, with the main actors being those most involved in the debate, such as media literacy experts, fact-checkers, academics, journalists, policy-makers, etc. But what if we enlarge the group and also look at other sectors, such as finance?

Key words: regulation, fact-checking, research, media literacy, journalism, finance


Disinformation is a complex phenomenon, and the action to counter it should reflect that complexity. Indeed, as is often repeated, there is no single silver bullet, and the response needs to be a whole-of-society one. Any action must respect fundamental rights, and there is no space for a ministry of truth. As disinformation undermines our democratic right to make informed decisions, countering it is key. Quoting Hannah Arendt: “And a people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge.”

Disinformation is not a new phenomenon, but what has changed with the use of online platforms is its speed, reach and techniques. Technological developments and the rise of disinformation have also gone hand in hand with the rise of populism, with each of the three elements being, to some extent, both cause and consequence of the others. Indeed, disinformation can be seen as the symptom of a sociological, political and economic reality which, combined with the use of social media, may create fertile ground for disinformation to be produced and shared.

The current whole-of-society approach comprises several main areas of intervention, each with its respective actors.

Independent fact-checkers verify whether a given piece of content is based on facts, for example by checking the claims made in a political speech or whether an image is attributed to the right context. Verifying whether content is grounded in facts is at the heart of their work; opinions, which are interpretations of facts, are not included. In independent fact-checking articles, readers are guided (with hyperlinks and source history) through the search for evidence. The work of fact-checkers is clearly an ex-post intervention, but not only that: fact-checkers have acquired the data and the ability to identify possible upcoming disinformation narratives. As such, they can contribute to creating pre-bunking and inoculation campaigns.

Civil society organisations are important players as well. Their work is key in shaping policy in the field, advocating for the interests of citizens and for human rights, and holding online platforms accountable. They also carry out fundamental work on disinformation narratives and actors, with open-source intelligence (OSINT) investigations being of particular value and importance.

Another piece of the comprehensive response to disinformation is media literacy. Regularly providing citizens of all ages with up-to-date tools to critically digest information, and with an awareness of how the offline and online media ecosystems work, helps build societal resilience to disinformation and to misinformation (misleading content that is shared further without the intention to cause harm).

Independent quality journalism also plays a role. The traditional media sector has been strongly impacted over the last decade and has had to look for alternative business models. As a consequence, issues such as clickbait and the pressure to publish news quickly have in some cases contributed to the spread of disinformation (e.g. misleading headlines and unverified information). However, quality independent media can pay off and significantly contribute to rebuilding trust and countering disinformation.

Research is crucial both to understand the disinformation phenomenon and for accountability purposes. Access to the data held by the very large online platforms and search engines is key for researchers, who can currently access some public data and who should soon also be able to access non-public data (with all due data protection safeguards). In this regard, the work done by EDMO and led by Prof. Rebekah Tromble is pioneering. Thanks to this access, researchers will be able to zoom in on the characteristics of the spreaders, the techniques and the content (which will, among other things, inform policy) and to verify the information provided by the very large online platforms and search engines under regulatory tools such as the Code of Practice, soon to become a Code of Conduct under the DSA (starting with the application of the structural indicators).

Regulatory responses are also important. In the EU we started with self-regulation (the Code of Practice on Disinformation) and we are currently moving towards co-regulation, with the Code of Practice becoming a Code of Conduct under the Digital Services Act (which, it is worth remembering, is a Regulation). The focus is not on content per se, but on the infrastructure: as noted above, fundamental rights are to be respected and a ministry of truth is to be avoided. Other policy instruments contribute to the response as well, such as the Audiovisual Media Services Directive, the Digital Markets Act, the European Media Freedom Act and the AI Act.

Up-to-date communication strategies are also relevant. We know that images have a stronger emotional and longer-lasting impact than text, and this is something spreaders of disinformation exploit heavily: not by chance, much disinformation is shared using images taken out of context. It is therefore important to adapt communication strategies accordingly, especially in the context of the current attention economy.

The whole-of-society approach also means that all these experts do not work in silos, but rather in dialogue. Their work is complementary. We can of course always do better: for example, it is advisable to have engineers, lawyers, human rights experts and psychologists in regular dialogue from the very beginning of the development of new tools.

Beyond the actions mentioned above, which are traditionally seen as the elements of the disinformation response ecosystem, it is worth looking at sectors outside that traditional focus, such as finance.

Online platforms are listed companies with commitments towards their shareholders. Shareholders can have a strong say in the policies and actions of such companies, including by voting against specific activities at the annual general meeting, and may thus influence the business and policy decisions of online platforms and search engines. For example, Cometa, an Italian pension fund and Microsoft shareholder, asked for more transparency on the disinformation risks arising from Artificial Intelligence at the last general meeting.

Another example where the finance sector could have an impact relates to disinformation on climate change. Recent years have seen a rise in financial players' interest in sustainability policies, with ESG (Environmental, Social and Corporate Governance) criteria being key features. Investors want to put money into businesses that are environmentally and socially sustainable, and many companies are doing their best to meet this demand. Among other things, they commit to reducing greenhouse gas (GHG) emissions and to reporting transparently on this. Such reporting traditionally covers three scopes: Scope 1, direct emissions from sources owned or controlled by the company; Scope 2, indirect GHG emissions from the generation of purchased electricity, steam, heat or cooling; and Scope 3, all other indirect GHG emissions (not included in Scope 2) that occur in the value chain of the reporting company, both upstream and downstream.

As disinformation on climate change impacts climate action, it could be considered an indirect emission. What if online platforms included the impact of climate change disinformation within their Scope 3 indirect GHG emissions reporting? In other words, would they score better if they provided evidence that they reduce climate change disinformation on their services? Would that be an incentive to mitigate the risks arising from the design or functioning of their services?

Finance is an example of a sector that is rarely considered among the measures that form the counter-disinformation ecosystem, and there may be others (e.g. the right not to be manipulated and the right to mental health are increasingly present in the public debate).

The right to make informed decisions and the commitment to save the planet and make it a fairer place are key for many, and it may be worth joining forces.