Organizations that contributed to this investigation: PagellaPolitica/Facta news; Les Surligneurs
In a letter sent to Henna Virkkunen, the EU Commission's Executive Vice-President for Tech Sovereignty, Security and Democracy, US House Judiciary Chair Jim Jordan claimed that an EU regulation, the Digital Services Act (DSA), "requires that social media platforms have systematic processes to remove 'misleading or deceptive content,' including so-called 'disinformation,' even when such content 'is not illegal'."
The statement is formulated in a misleading way and conveys a false message.
What is the DSA
The Digital Services Act is an EU regulation designed to enhance the accountability of online platforms by setting obligations on content moderation, transparency and user protection. It regulates "online intermediaries and platforms" and aims to "protect consumers and their fundamental rights online by setting clear and proportionate rules", according to the European Commission's dedicated webpage.
The DSA establishes clear responsibilities for platforms regarding the removal of illegal content and the mitigation of broader risks to society. Among these risks, the spread of disinformation is recognized as a systemic threat due to its potential to undermine democratic processes, distort public debate, and erode trust in institutions. The DSA therefore requires platforms not only to address explicitly illegal activities, but also to implement measures that proactively counteract the circulation of false and misleading information.
The European Commission has opened formal investigations into major online platforms for possible breaches of the DSA. Notably, proceedings have been initiated against Meta Platforms (Facebook and Instagram), X and TikTok to assess their compliance with the new transparency and moderation rules. As of now, none of these investigations has reached a definitive conclusion, no binding legal precedents have been established, and the framework is still in the early stages of implementation.
Why Jordan’s claim is wrong
The quotes in Jordan's letter come from Whereas 84 of the DSA. As written in the text, platforms must "focus on the systems or other elements that may contribute to the risks" and "assess whether their terms and conditions and the enforcement thereof are appropriate, as well as their content moderation processes, technical tools and allocated resources". It is in this assessment that platforms "should also focus on the information which is not illegal, but contributes to the systemic risks identified in this Regulation".
Platforms must then, according to Whereas 84 of the DSA, "pay particular attention on how their services are used to disseminate or amplify misleading or deceptive content, including disinformation". Disinformation is usually legal: it is not against the law to claim, for example, that the Earth is flat, even though it is demonstrably false. However, huge volumes of disinformation spread in a coordinated manner on a social media platform can pose a systemic risk to democracies. This would be the case, for example, if millions of voters were exposed, 48 hours before an election, to legal but demonstrably false content about a specific politician or political party.
In such situations, the platforms must – according to Whereas 86 – "deploy the necessary means to diligently mitigate the systemic risks identified in the risk assessments, in observance of fundamental rights". The deployed measures must be "reasonable and effective", "proportionate" and "avoid unnecessary restrictions on the use of their service". According to the DSA, in deploying these measures platforms "should give particular consideration to the impact on freedom of expression".
The "removal" of legal content is never mentioned, and it is clearly not what the DSA asks platforms to do about disinformation.
It is possible, of course, that platforms decide to remove legal content to mitigate the systemic risk of disinformation in specific situations, but it is false that the DSA requires them to have systematic processes to do so. It is quite the contrary: the DSA requires (Whereas 87) platforms to adopt mitigating measures, "for example, adapting any necessary design, feature or functioning of their service, such as the online interface design", or "adapting their content moderation systems and internal processes or adapting their decision-making processes and resources, including the content moderation personnel, their training and local expertise".
So, no systematic removals, but better interfaces and moderation systems, more resources, better training and so on. And always considering the impact on freedom of expression.
Whereas and articles
It should also be noted that in an EU regulation a "Whereas" is not binding. It is a recital explaining the reasons why the lawmakers decided to pass the regulation. Its purpose is first and foremost to justify the regulation; another purpose is to help interpret its binding provisions. The Court of Justice of the European Union will probably be asked to assess whether a decision of the European Commission sanctioning Meta, TikTok or X is legal, and when doing so, the judges will read the provisions in the light of the recitals.
Which legally binding provision could then be used by the EU to fine tech companies? In his letter, in footnote 6, Jim Jordan explicitly refers to Article 35 of the DSA (even if the quotes, as said, come mostly from Whereas 84). This is the article that obliges very large online platforms and search engines to put in place measures to mitigate the systemic risks their services can cause. These risks, as said, range from the dissemination of illegal content to threats to electoral processes.
However, Article 35, like Whereas 84, does not foresee measures such as the removal of legal disinformation or "misleading or deceptive content", as Jim Jordan claims, nor "systematic processes" to do so. Removals are explicitly envisioned only for illegal content. According to paragraph 1(c) of Article 35, among the measures that platforms shall put in place to mitigate systemic risks are "content moderation processes, including the speed and quality of processing notices related to specific types of illegal content and, where appropriate, the expeditious removal of, or the disabling of access to, the content notified, in particular in respect of illegal hate speech or cyber violence […]".
Nowhere in this provision can a duty to remove legal disinformation content be found. And disinformation cannot be considered illegal per se: one of the core legal principles of EU law is that no one can be held guilty of a crime that is not a crime according to the law. Nullum crimen, nulla poena sine lege, as lawyers say. This fundamental right is enshrined in the Charter of Fundamental Rights of the EU (Art. 49): "No one shall be held guilty of any criminal offence on account of any act or omission which did not constitute a criminal offence under national law or international law at the time when it was committed." Disinformation is illegal only when, aside from being false, a specific piece of content (e.g. a statement) violates specific laws: for example, if it constitutes the crime of defamation.
In conclusion
Jim Jordan's claim that the DSA "requires that social media platforms have systematic processes to remove" disinformation, even when it is legal, is wrong.
The DSA explicitly states that illegal content should be removed and that platforms need to address systemic risks, including disinformation, with adequate measures: for example, "adapting their decision-making processes and resources, including the content moderation personnel, their training and local expertise". The removal of content, let alone systematic processes for removing it, is never mentioned by the DSA as a possible solution to address the systemic risk of disinformation.
Photo: Wikipedia, Gage Skidmore