(De)monetisation of Disinformation: Can the actions of large online platforms be measured?
Konrad Bleyer-Simon
Centre for Media Pluralism and Media Freedom (CMPF) and European Digital Media Observatory (EDMO)
Abstract:
Assessing how the spread of disinformation provides monetary rewards to the actors involved in the publishing and amplification of harmful messages is one of the key challenges in evaluating the effectiveness of the Code of Practice on Disinformation. Providing a full picture is nearly impossible, given the multitude of channels through which the actors involved can collect revenues. In this text, we describe some of the considerations that went into preparing indicators to assess measures to demonetise disinformation, as well as the limitations of the currently existing approaches.
Key words: demonetisation, disinformation, Code of Practice, online platforms
Disinformation can be a business. We have known this at least since the "Macedonian teens" (the country is now called the Republic of North Macedonia, and the teens are now twentysomethings) started earning money through the ads placed next to the fabricated stories they published during the 2016 Trump campaign. Sure, no one expected the employees of troll factories to cause mischief for free, but at that point it became clear that even a small entrepreneur could expect ample revenues from disinformation, as large online platforms seemed to reward this behaviour in their monetisation programmes.
But why is disinformation so easy to monetise? According to the definition in the European Commission's 2022 Strengthened Code of Practice on Disinformation (the Code), "disinformation is false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm". This definition helps us understand why the Code requires action against this kind of content. But if we want to understand why it is easier to monetise than almost any other kind of content, we need to shift perspective: we need to look at disinformation as cheaply produced (made-up) content that is not yet illegal (and as such, there is no legal requirement to remove it, although, as the Code of Practice points out, it has immense potential to cause harm to society).
This means that (intentional) falsity alone is not the reason why disinformation sells. It is the ease of production and the misuse of protections afforded to free expression that allow the authors and sources of disinformation to get their messages out. To reach large audiences, the skilled ones use tactics known from low-quality but high-reach journalism (the buzzwords being sensationalism, controversy, appeals to emotion, clickbait, churnalism, search engine optimisation, and so on).
As in the case of low-quality journalism (but also in the case of many serious news outlets), the main and most visible source of revenue is advertising. In 2021, the Global Disinformation Index (GDI) estimated that some well-known purveyors of disinformation made at least USD 76 million through online advertising services (apart from Google, the report lists Criteo, Taboola, OpenX and Xandr in the top 5). Some outlets known for frequently publishing disinformation, such as RT (formerly Russia Today), Sputnik and Breitbart, earned more than USD 700,000 per month (a presentation to the European Parliament, based on the report, can be found here). The ads that ran on these sites were often those of well-known brands, such as travel agencies, banks or car manufacturers (whether these companies knew what kind of content their advertisements were shown next to is not known[1]). There are indications that GDI only captured the tip of the iceberg: according to the Statista Research Department, publishers of "misinformation" (in this context, the term refers to almost the same phenomenon, but the formulation allows one to disregard the intention behind publishing fabricated content) earned USD 2.6 billion in programmatic advertising worldwide in 2021, which is close to 1.7 percent of overall programmatic advertising spending. The numbers get even more troubling if we compare the ad revenues of disinformation actors to those of online news media. According to NewsGuard's estimate, "for every $2.16 in digital ad revenue sent to legitimate newspapers, U.S. advertisers are sending $1 to misinformation websites" (there is no such comparison for the EU, but one can expect similar ratios).
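For readers who want to see how these estimates fit together, a back-of-envelope calculation (using only the figures cited above, with no additional data) looks like this:

```python
# Back-of-envelope check of how the figures cited above relate to each other.
# All inputs are the publicly reported estimates; the arithmetic is only
# illustrative and adds no new data.
misinfo_programmatic_revenue_usd = 2.6e9   # Statista estimate for 2021
share_of_programmatic_spend = 0.017        # "close to 1.7 percent"

implied_total_programmatic = misinfo_programmatic_revenue_usd / share_of_programmatic_spend
print(f"Implied global programmatic ad spend, 2021: ~USD {implied_total_programmatic / 1e9:.0f} billion")

# NewsGuard's ratio: USD 1 to misinformation sites per USD 2.16 to legitimate newspapers.
misinfo_share_of_combined_spend = 1 / (1 + 2.16)
print(f"Misinformation sites' share of that combined spend: ~{misinfo_share_of_combined_spend:.0%}")
```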
The issue gets more complicated if we consider that purveyors of disinformation are not the only ones making money from disinformation. In fact, there are indications that the business models of online platforms are optimised towards reaping benefits from harmful content. In a recent paper, Carlos Diaz Ruiz argues that the algorithms of online platforms prioritise the content with the most potential to engage audiences; this usually includes posts that are controversial and aim at triggering and exploiting emotions, just like most of the disinformation content we have encountered in recent years around campaign topics, vaccines or the Russian invasion of Ukraine. The same conclusion can be drawn from a 2019 internal report from Facebook, in which the author warned the company's management that engagement-based metrics, which determine which pieces of content a user sees, enabled accounts that misrepresent their identity to reach a mass audience made up mostly of people who never chose to see that content (the leaked document can be read here).[2] Meta's CEO Mark Zuckerberg also (inadvertently) admitted that disinformation is a relevant driver of engagement: in a 2018 note titled "Blueprint for Content Governance and Enforcement" he included a graph implying that the closer a piece of content gets to illegality, the higher its potential for engagement.[3] This benefits online platforms because more engagement means more attention and more time spent on their services. Not to mention that platforms get a cut of every advertising dollar or euro spent on their services[4], no matter whether a purveyor of disinformation pays to increase its reach or hosts advertising as part of a monetisation programme.
One of the important pillars of the Code of Practice on Disinformation is the Scrutiny of Ad Placements. Its main commitments ask signatory online platforms to deal with cases where purveyors of disinformation make money with, or spend money on, advertising; namely: a) "defund the dissemination of Disinformation, and improve the policies and systems which determine the eligibility of content to be monetised" as well as b) "prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages".[5]
Platforms that signed up to the Code (and its relevant commitments) report on the actions they have taken to defund disinformation actors (their reports can be found here). In addition, the Code of Practice asks for the development of so-called Structural Indicators, "in order to assess the effectiveness of the Code in reducing the spread of online disinformation for each of the relevant Signatories, and for the entire online ecosystem in the EU and at Member State level." In EDMO's proposed methodology for these indicators (the second, strengthened proposal can be accessed here), we included an initial set of questions that help assess the extent to which advertising services are used to either monetise or amplify disinformation, as well as track changes over time.
In the proposal, our starting point was the Code's focus on advertising and the consideration that the structural indicators can, at this point, only look at the services of signatories. For this reason, we need to highlight that the current proposal is limited. Monetisation and demonetisation raise many measurement concerns, among other things because only a subset of disinformation is monetised in ways that can be measured as part of this exercise.[6] Moreover, monetisation is a two-way street: while purveyors of disinformation can generate revenues for themselves through popular and widely shared disinformation content, platforms themselves can profit from the involvement of purveyors of disinformation in their monetisation programmes, as well as from the traffic generated by these accounts and contents. Not including these aspects can thus be misleading, as disinformation can continue generating profits for many of the actors involved even without access to advertising services.
Thus, in the current context, the only feasible approach we see, albeit a very limited and therefore not recommended one, is to assess this indicator on the basis of a random sample of monetised content, a random sample of public content, and a random sample of paid (political and general) advertising messages, weighted by views and adapted to the population size of the given member state. Such an approach would also fit the methodology used by TrustLab to conduct its beta assessment of the structural indicators.
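To illustrate what a views-weighted, population-adjusted draw could look like in practice, here is a minimal sketch; it is not the EDMO or TrustLab implementation, and the field names, sample sizes and scaling rule are hypothetical assumptions:

```python
# Illustrative sketch only: drawing a views-weighted random sample of items
# for one member state. Field names, sizes and the population scaling rule
# are hypothetical assumptions, not part of the EDMO/TrustLab methodology.
import random

# Hypothetical inventory of ads served in a member state, with view counts.
ads = [
    {"id": "ad-001", "advertiser": "brand_a", "views": 120_000},
    {"id": "ad-002", "advertiser": "brand_b", "views": 4_000},
    {"id": "ad-003", "advertiser": "page_x", "views": 55_000},
    {"id": "ad-004", "advertiser": "page_y", "views": 900},
]

POPULATION_MILLIONS = 5.4   # e.g. a smaller member state; adapts sample size to country size
ITEMS_PER_MILLION = 20      # assumed scaling factor

def draw_views_weighted_sample(items, population_millions):
    """Sample items with probability proportional to their view counts."""
    k = min(len(items), round(population_millions * ITEMS_PER_MILLION))
    weights = [item["views"] for item in items]
    # random.choices samples with replacement; widely seen items can appear
    # more than once, which mirrors weighting the assessment by exposure.
    return random.choices(items, weights=weights, k=k)

sample = draw_views_weighted_sample(ads, POPULATION_MILLIONS)
print(f"Drew {len(sample)} items; the most viewed ads dominate the sample.")
```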
Based on the sample of monetised content and public content, the assessment could look at:
- Share of disinformation[7] in the sample of monetised content
- Share of known purveyors of disinformation[8] in the sample of monetised content
(The assessment should also look into whether disinformation content identified during the assessment of the prevalence of disinformation relied on paid amplification to extend its reach.)
Based on the sample of advertisements, the assessment could look at:
- Share of disinformation in sample of ads
- Share of known purveyors of disinformation in sample of ads
(A separate assessment can be made for boosted content, i.e. content that was part of paid amplification schemes, in case it is not considered advertising.)
Based on the metrics highlighted, we could not only track the number and share of monetised disinformation/misinformation content; the revenues generated with the content in the sample could also be estimated. As such, the indicator could capture (and allow comparisons, from one assessment period to the next, of) the EUR value of the revenues generated by purveyors of disinformation, how these revenues were shared between purveyors of disinformation and platforms (where applicable), as well as the share of revenues generated through the monetisation of disinformation in the overall revenue generated in the sample. The assessment would be most accurate if platforms provided researchers with the amounts spent/received,[9] but estimates are also possible based on average costs per impression on the platforms.
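As a rough illustration of that fallback estimate, the sketch below derives per-item revenue figures from impression counts; the CPM value and the platform revenue share are assumptions chosen for illustration, not figures reported by platforms or required by the Code:

```python
# Minimal sketch of estimating revenues from impression counts when exact
# amounts are not disclosed. The CPM and the platform's revenue share below
# are assumptions for illustration, not reported figures.
AVERAGE_CPM_EUR = 2.50          # assumed average ad price per 1,000 impressions
PLATFORM_REVENUE_SHARE = 0.32   # assumed platform cut in a monetisation programme

def estimate_item_revenue(impressions: int) -> dict:
    gross = impressions / 1000 * AVERAGE_CPM_EUR
    return {
        "gross_eur": round(gross, 2),
        "platform_eur": round(gross * PLATFORM_REVENUE_SHARE, 2),
        "publisher_eur": round(gross * (1 - PLATFORM_REVENUE_SHARE), 2),
    }

# Example: a monetised disinformation item with 1.2 million impressions.
print(estimate_item_revenue(1_200_000))
# Summing such estimates over the flagged items, and dividing by the estimate
# for all monetised items in the sample, gives the share of sample revenue
# attributable to disinformation for a given assessment period.
```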
While this method would make it possible to quantitatively assess some aspects of (de)monetisation, and thus test the Code of Practice's effectiveness (or signatories' compliance), it needs to be emphasised that ads placed next to disinformation content or paid for by purveyors of disinformation are just one part of the business. If platforms' business models are optimised to benefit from increased engagement, many of the rewards for big tech companies manifest indirectly: as revenues generated through increased activity and time spent on platforms, as well as through the additional data collected from these users.
In addition, as mentioned before, the monetisation services of platforms are not the only options purveyors of disinformation have to access funding. While Google/Alphabet is the largest ad tech provider, GDI showed that there are plenty of other companies that can serve the needs of purveyors of disinformation, and that might be happy to step in (with off-platform services) if the Code's signatories demonetise some of them. Developing proxies to estimate the extent of disinformation on their services is possible in the same way as described above, but it would involve assessments of non-signatories (ad tech companies or advertisers), which is currently outside the scope of the structural indicators. Moreover, as the Code also acknowledges, there are many other sources of revenue besides advertising (many of which have already been documented by the disinformation research community). Some of these can be tracked and quantified (such as crypto-donations or crowdfunding on major platforms, though this would again require looking beyond signatories), while others will most likely stay hidden (such as sponsorships). What they have in common is that all of these revenue sources benefit from engagement-based ranking, meaning that removing bad actors from platforms' advertising services will hardly solve the problem as long as publishers of harmful content can reach their audiences the same way as before. An indicator based simply on advertising services would therefore contribute to a false sense of security.
To really understand what is being done to tackle the business of disinformation, we need to complement the proposed indicators with additional metrics that can assess platforms' algorithms and their underlying business models. Our EDMO partners at the Integrity Institute have already published extensively about the harms of engagement-based ranking (most recently on Instagram) and looked into the prospects of alternative forms of content ranking. This exercise becomes even more important if we take into consideration that, under the Digital Services Act, platforms (Very Large Online Platforms and Very Large Online Search Engines) will have to conduct so-called "risk assessments" to understand whether their services can be exploited by bad actors, including purveyors of disinformation. Thus, if it turns out that a platform's recommender systems indeed contribute to societal harm, the platform provider needs to take action to mitigate the risk; having structural indicators look into this question could contribute to better risk assessments and guide effective mitigation measures. In the coming months, EDMO will aim to understand, with the partners' help, how progress in this domain can be tracked, to make sure that the online environment won't remain a place to make easy money with harmful content.
(This post is based on EDMO Task 5's work on structural indicators, especially the paper authored by Iva Nenadić, Elda Brogi, Konrad Bleyer-Simon and Urbano Reviglio, as well as on the EDMO training on the economics of disinformation, held by Paula Gori and Konrad Bleyer-Simon.)
[1] However, it is possible that the inclusion of disinformation sites is "the result of trade-offs that marketers have made over the past decade while pursuing the promise of programmatic advertising: more scale, more reach, and lower costs", argues Claire Atkin of the Check My Ads Institute. Proper auditing could go a long way here.
[2] Recently, the academics Mariana Mazzucato and Ilan Strauss referred to "algorithmic attention rents".
[3] While low-quality, controversial posts tend to get a significantly higher number of likes, comments, and shares than posts by mainstream news media or public authorities, that doesn't necessarily mean that disinformation has a greater impact on opinion formation than other sources of information, or that it has the same prevalence in all segments of society. That would be the subject of a different investigation.
[4] In some cases, they get more than just a cut. According to an investigation by Wired, Meta earned at least USD 30 million from the monetised content of inauthentic accounts that were later removed from its platforms; as deplatformed entities cannot get paid, all of the money went to the platform.
[5] In addition, it asks for increased cooperation between relevant players in the advertising market.
[6] Other sources of revenue include crowdfunding, donations, payments made through tokens or cryptocurrencies, the sale of merchandise, e-commerce activities, the sponsoring of influencers or any kind of back-channel payment.
[7] We use the term disinformation here, but given the difficulties of proving the intention behind the publishing and sharing of misleading content, the assessment could also start with a focus on misinformation (in this context meaning both intentionally and unintentionally shared false content). The factuality of a piece of content can be determined with the help of trained fact-checkers.
[8] To determine who falls into this category, Fletcher et al. (2018) worked with a list of pre-identified purveyors of disinformation, while TrustLab looked at the characteristics of the publishers behind the content included in its sample to determine whether they could be considered possible disinformation actors (number of misinformation or disinformation posts published, number of followers, etc.). A possible problem with such approaches is that they make presumptions about the intent behind a published post simply on the basis of the publisher's characteristics. At the same time, we can (and perhaps should) argue that people who artificially boost their content or want to earn money with it bear greater responsibility than regular users of online platforms, and as such, one can expect them to look into the factuality of their content before publishing; on that basis, the terms purveyor of disinformation or disinformation actor (the latter was used by TrustLab in its assessment) can be justified.
[9] However, it is questionable whether this will happen in the near future. According to the report of the signatory IAB Europe (the Interactive Advertising Bureau Europe, an organisation that represents the digital advertising industry) on the service-level measures taken to fulfil its commitments, in the Ad Scrutiny Subgroup of the Code's Taskforce "it was acknowledged that reporting financial values would be challenging for all signatories due to their varied positions in the advertising chain. To address this challenge, the group agreed to work in arriving at an agreed-upon conversion factor that will allow all signatories to translate media metrics into financial values."