Let’s stop using emotions as a weak spot: op-ed by Paula Gori published by Tagesspiegel Background – Digitalization & AI
Spreaders of disinformation still benefit greatly from the way algorithms are trained, turning the platforms’ thirst for engagement against our societies. Urgent action along a whole-of-society approach is needed, writes Paula Gori, Secretary-General of the European Digital Media Observatory (EDMO), in an op-ed published by Tagesspiegel Background. The text was originally published in German on 26 September, ahead of Ms. Gori’s participation in DisinfoCon 2024. The English version can be read below.
When Facebook was built in 2004, a few years after the creation of Google Search, it was aimed at Harvard students. Today, it has around 2.9 billion monthly active users worldwide, and many other online platforms are accessed by users every second. The concept of “community” moved online and expanded considerably.
If a community is relatively small, interactions are fairly easy to manage, especially if, as was the case at the very beginning, the community also engages offline (say, on campus). The moment it reaches the scale of today’s online platforms, not only does it become harder to manage, but behaviors also change, with restraints fading.
The core business model of these platforms is based on advertising. This is not new, especially to the media sector. Algorithmic architectures are built in a way that favors online engagement: the more engagement a piece of content generates, the more appetite it creates among advertisers, and the more the platforms, competing in the attention economy, gain. Machine learning is built to win this race.
User engagement: the gold standard for algorithms – and for disinformation actors
AI algorithms are trained on behavioral patterns and user data with the aim of creating engagement. They have learned that emotional (especially negative), sensationalist and divisive content creates engagement. Because the algorithm is rewarded when it creates engagement, it applies what it has learned.
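To make this reward logic concrete, here is a minimal, hypothetical sketch of an engagement-driven feed ranker. The Post fields, weights and example scores are invented for illustration; real recommender systems are far more complex and are not public.

```python
# Hypothetical sketch of engagement-driven ranking (illustrative only).

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float      # learned from past user behavior
    predicted_shares: float      # shares spread content further, so they weigh more
    predicted_dwell_time: float  # seconds a user is expected to spend on the post

def engagement_score(post: Post) -> float:
    # The model is rewarded for engagement, not for accuracy or civility,
    # so whatever correlates with clicks and shares gets amplified.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 0.1 * post.predicted_dwell_time)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Sort the candidate pool purely by predicted engagement.
    return sorted(candidates, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("Calm, factual explainer", 0.02, 0.01, 40.0),
        Post("Outrage-inducing rumour", 0.09, 0.07, 25.0),
    ])
    for post in feed:
        print(f"{engagement_score(post):.3f}  {post.text}")
```

Nothing in such a score rewards accuracy or civility; whatever correlates with clicks and shares, which is often emotional or divisive content, rises to the top.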
As the EDMO analysis of disinformation across the EU shows, malign actors employ precisely the same knowledge to disseminate harmful content. In turn, harmful content has become an increasingly prominent part of the online world.
The bandwagon effect (or, if you prefer, the echo chamber) created by such content produces domino effects in the offline world. An EDMO analysis revealed widespread disinformation during thirteen recent European election campaigns, particularly about the electoral process, with false narratives often delegitimizing elections through unfounded claims of voter fraud and unfair practices. Such content directly impacts our ability to make informed decisions. It similarly impacts climate action, decisions about our health, the safety of migrants and minorities, the protection of minors, and more.
Potential and limitations of current content moderation mechanisms
The architecture of online platforms also includes algorithms aimed at verifying that content does not violate the law or the platforms’ terms and conditions. Content is thus either moderated entirely by an algorithm, or first filtered and scored by one before a final decision is made by humans. This raises labor concerns (exploitation and mental health issues) on the human side, and a risk of bias on the machine side.
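As a purely illustrative sketch of that tiered setup, the following routes content based on a classifier score; the thresholds, function names and toy classifier are assumptions, not any platform’s actual system.

```python
# Hypothetical tiered moderation pipeline: an automated classifier scores
# content, and only uncertain cases reach a human reviewer.

from typing import Callable

def route_content(text: str,
                  harm_score: Callable[[str], float],
                  auto_remove_threshold: float = 0.95,
                  human_review_threshold: float = 0.60) -> str:
    """Return 'removed', 'human_review' or 'published' for a piece of content."""
    score = harm_score(text)  # probability-like score from a trained classifier
    if score >= auto_remove_threshold:
        return "removed"       # machine decides alone: fast, but risks over-blocking
    if score >= human_review_threshold:
        return "human_review"  # borderline cases are queued for human moderators
    return "published"

# Toy stand-in classifier for demonstration; a real one would be a trained model
# and would struggle with context, irony and low-resource languages.
def toy_harm_score(text: str) -> float:
    return 0.99 if "incitement" in text.lower() else 0.1

print(route_content("Holiday photos from the coast", toy_harm_score))    # published
print(route_content("Explicit incitement to violence", toy_harm_score))  # removed
```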
Algorithms are thus both part of the problem and part of the alleged solution: there are algorithms that amplify and propose content, and there are algorithms that detect harmful content. In both cases, the risk of violating fundamental rights (among others, media pluralism in the first case and freedom of expression in the second) is high.
Language and context are key when it comes to content moderation. While platforms can count on large datasets to train algorithms for widely spoken languages like English, this does not apply to less widely spoken languages, creating discrimination between users.
Context and cultural awareness are also hard for algorithms to grasp. Civil society organisations use content to denounce atrocities and human rights violations, but without knowing the context, algorithmic moderation may prevent that content from being posted. Without a grasp of the nuances of language, culture and context, the margin for error is high.
Two further challenges remain. The first is the lack of a common definition of harmful content: each platform has its own, policy makers struggle to agree on one, and different pieces of legislation have adopted different definitions, which also confuses platforms. While the boundaries are somewhat clearer for illegal harmful content, the situation becomes more difficult for legal but harmful content such as disinformation. In 2023, the World Economic Forum provided a list of content that, according to its Global Coalition for Digital Safety, is to be considered harmful. It includes disinformation, scams, algorithmic discrimination, child sexual abuse and exploitation material, content inciting and promoting violence, and hate speech, among others. Legal certainty comes with clear definitions.
The second is the secrecy surrounding algorithms, and thus the lack of awareness about how they are trained, with which data and based on which instructions. This secrecy applies even to vetted researchers and auditors.
Urgent action along a whole-of-society approach is needed
How to solve all this is widely debated and requires a concrete approach, given that human rights are at stake. What is needed is not one single solution, but a puzzle of different solutions working together. This multistakeholder and multidisciplinary approach is also at the core of EDMO, which focuses on tackling disinformation and relies on national and regional hubs covering all EU member states to ensure that local languages, cultures and specificities are reflected. The many pieces composing this puzzle include:
- Reaching common definitions of harmful content and its subcategories, to be adopted by both policy makers and platforms. If different platforms apply different rules, not only will users be confused, but their perception of the value of fundamental rights will also be weakened.
- Transparency and accountability on how algorithms work and are trained, and platforms’ agreement on a level of decisional transparency to be guaranteed to users. Vetted researchers and auditors must be allowed to enforce accountability, and users have the right to know how and why their content was moderated and, where relevant, to appeal.
- When content is not illegal, removal should be limited; such content should instead be labelled and not algorithmically amplified (see the sketch after this list). Algorithms should be trained differently.
- Media and information literacy must be heavily promoted at all levels of society and across all ages, and seen as a lifelong learning process. Psychological support and community building offline are also fundamental.
- Collaboration with independent fact-checkers, and cooperation with relevant authorities when it comes to illegal content, are important.
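Picking up the labelling point above, here is a minimal, hypothetical sketch of a “label and demote, don’t remove” policy for legal but harmful content; the categories and multipliers are invented for illustration, not any platform’s actual rules.

```python
# Hypothetical "label and demote, don't remove" policy for legal but harmful
# content such as suspected disinformation (illustrative assumptions only).

from dataclasses import dataclass

@dataclass
class Decision:
    action: str                 # "remove", "label_and_demote" or "no_action"
    ranking_multiplier: float   # factor applied to the engagement score at ranking time

def moderate(is_illegal: bool, flagged_as_disinformation: bool) -> Decision:
    if is_illegal:
        # Illegal content (e.g. incitement to violence) is removed and reported.
        return Decision("remove", 0.0)
    if flagged_as_disinformation:
        # Legal but harmful: keep it online with a fact-checking label,
        # but stop the recommender from amplifying it.
        return Decision("label_and_demote", 0.1)
    return Decision("no_action", 1.0)

print(moderate(is_illegal=False, flagged_as_disinformation=True))
```

The design point is that the recommender, not only the removal decision, is part of the policy: demotion changes what the amplification algorithm is allowed to push.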
In the EU, the Digital Services Act is a milestone piece of legislation that addresses a number of these issues. With a view to respecting fundamental rights, it is based on assessments of systemic risks stemming from the design or functioning of the platforms. The effectiveness of this Regulation will, of course, only become clear in a few years.
Finally, let me conclude with a provocative question to all of us: how much do we value our right to be informed? Would we be willing to pay (as we did for decades) to access information?