“No Violations Found.” Europe’s Digital Safety Law Fails When Users Report Content

Europe’s Digital Services Act (DSA) was expected to become a tool that would force social media giants like Facebook and TikTok to follow their own rules and make the online space safer. However, an experiment reveals that, in practice, platforms often treat user complaints about illegal content as a formality. In the Baltics, they respond almost exclusively to regulators’ requests, if at all.

This article was originally published by Re:Baltica, with the contribution of Delfi Lithuania, on November 7.

The DSA came fully into force in February 2024, although the largest platforms had to comply with certain parts of it as early as August 2023. The law’s primary goals are to curb the spread of illegal content online, reduce fraud, and make digital environments safer for users.

It was clear from the start that this first EU attempt to regulate the digital “Wild West” would require time to take hold. However, two years later, when researchers from the Latvian think tank Providus decided to test how it works, they found that, at least when it comes to user complaints, the law still doesn’t function.

The DSA requires that users be able to report illegal content quickly and that platforms act immediately when the reported material involves incitement to violence, hate speech, or other serious harm.

Providus researchers posed as ordinary users and submitted 21 reports in total to Telegram, TikTok, YouTube, and Facebook. The reports concerned posts, mainly in Russian, containing hate speech, disinformation, and open calls for violence.

Then they waited for the response.

Nothing happened.

On Telegram, a post declaring “Nazis we need to destroy. Physically” remained untouched. On TikTok, a deep-fake video of Ukraine’s president shouting “Glory to Russia!” stayed online. YouTube kept a clip calling Latvia’s prime minister a “stupid woman,” and Facebook left up a detailed torture fantasy directed at migrants.

“Out of 21 reports, not a single case was handled effectively,” concluded Iveta Kažoka, a researcher at Providus, in an interview with Re:Baltica. The platforms either ignored the reports or responded with automated replies: “No violations found.”

Kažoka believes the experiment exposes how vulnerable the Baltic states remain in the information war that accompanies Russia’s invasion of Ukraine. In Latvia and Estonia, Russian speakers make up about a quarter of the population. In Lithuania, the share is smaller, but since the full-scale invasion of Ukraine, the number of Ukrainian and Belarusian refugees, many of whom consume content in Russian, has grown sharply. With Kremlin-controlled television banned in the Baltics, these communities have become heavily dependent on social media. When major platforms fail to address harmful content effectively, it poses a significant risk to the region’s information security. 

Diverse Approaches to the DSA in the Baltic States

The three Baltic countries – Estonia, Latvia, and Lithuania – have taken different routes in implementing the DSA during its first year.

The DSA requires every EU member state to appoint one institution as its national Digital Services Coordinator.

In Latvia, this role is held by the Consumer Rights Protection Centre (PTAC). The coordinator oversees how the law is applied, certifies organisations whose reports must be prioritised by platforms, and coordinates the enforcement of takedown orders issued by other competent authorities. In total, twelve Latvian institutions are involved in implementing the DSA, including the media regulator, the gambling inspectorate, the anti-corruption bureau, and the data protection authority. PTAC can also issue administrative orders related to consumer protection. Together, they form a national network responsible for enforcing the law.

Institutions may contact online platforms directly without going through PTAC. For example, if the media regulator decides to block access to a website, it informs Google and then notifies PTAC, which records the decision in the EU information system. If the regulator encountered problems enforcing its order, PTAC would contact the platform and, if needed, inform the regulator in the country where the platform is registered. Most are based in Ireland. PTAC also reports cases in which platforms refuse to comply with the Latvian authorities’ orders to the European Commission, which can lead to infringement proceedings.

In 2025, PTAC was notified of 288 orders issued by other Latvian authorities, mainly concerning illegal gambling sites, misleading advertising, or copyright violations. Most of those cases were resolved.

PTAC also addresses regular user complaints directly. This year, it received 56 individual complaints, the majority of which were about blocked social media accounts or lost access to content, rather than hate speech or disinformation. According to PTAC representative Dainis Platacs, the biggest challenge in resolving these cases quickly is that most platforms are not based in Latvia.

“Out of 56 complaints, a few were solved because we personally intervened,” Platacs said. “But later the platforms began rejecting our requests for clarification, saying: ‘Read Article 53 – you don’t have that right.’”

When that happens, PTAC forwards the complaint to the Irish coordinator, but obtaining results is difficult, as the Irish authority is heavily overloaded with similar cases from across Europe.

Two in One in Estonia

In Estonia, the national consumer protection authority serves both as the coordinator and the enforcement body: it can not only register removal orders but also issue them itself. Unlike in Latvia, it also directly monitors social media, with a primary focus on online fraud.

“We have one specialist whose job is to scroll social media eight hours a day,” said Helen Rohtla, the official responsible for implementing the DSA in Estonia.

In 2024, Estonia reported 800 cases of online fraud to Meta, most of which involved fake investment ads featuring photos of well-known Estonians. “Facebook always responded, but it takes a week or two,” Rohtla said. TikTok reacts within days. “Telegram is the worst,” she added.

To speed up reactions, the Estonian coordinator has an informal agreement with official Facebook fact-checkers at Delfi.ee. When the authority spots a fraudulent ad, it alerts the fact-checkers, who can quickly label the post as misleading. “Of course, new scam ads appear very quickly,” said Marta Vunš, a fact-checker at Delfi.ee. “But at least for those few days while the ad is visible, we can save some people from being scammed.”

Fraudulent ads on Facebook are also regularly detected and labelled by Re:Baltica’s fact-checkers in Latvia, though not always successfully, said Re:Check editor Evita Puriņa.

“This summer, several quite sophisticated deepfakes appeared on Facebook,” she said. “In the videos, created with artificial intelligence, well-known Latvian doctors and journalists claimed that vaccines kill and that vaccinated people are walking corpses. The videos linked to a fake Health Ministry website and an unregistered dietary supplement ad asking people to enter personal data. Thousands had shared the videos, but it was impossible to add a false-content label.”

“I wrote to my contact person at Meta, pointing out that these videos violate the platform’s own guidelines and should be removed. They promised to review it. A month later, I got a reply saying no violations were found.” Puriņa forwarded the links to the Latvian DSA coordinator, who advised her to file a formal complaint with supporting evidence.

For the Estonian coordinator, a small victory came when it issued an order under the DSA instructing Telegram to block access to 14 Russian media outlets included in the EU sanctions list. The company complied, but only after three months. “Three months is not acceptable when we’re talking about sanctioned content,” Rohtla said. Still, it set a precedent – a small national regulator compelling a global platform to act under EU law. The sanctioned channels are no longer accessible anywhere in the EU.

The First Trusted Flaggers in Lithuania

In Lithuania, the national coordinator is the Communications Regulatory Authority, an independent body that oversees electronic communications, postal services, and other digital sectors. As in the neighboring Baltic countries, DSA responsibilities are shared among several authorities.

In its first year, Lithuania issued just over 50 removal orders and became the first Baltic state to certify trusted flaggers: a trade association monitoring illegal pesticide sales, Piraceymeter, and Debunk EU. Estonia now has two trusted flaggers focused on copyright issues, whereas Latvia has none. Dainis Platacs, the representative of Latvia’s coordinator, attributes this to the strict eligibility requirements: PTAC has approached several non-governmental organisations, but none has shown interest so far.

“It’s not that simple, it requires resources,” Platacs said. Re:Baltica has learned that the organisation Drošs internets (“Safe Internet”), which focuses on protecting minors in the digital environment, plans to apply soon for trusted flagger status.

Testing the System

Providus’s two-month monitoring project was designed as a stress test for the DSA, a way to see how its promises work when the person filing a report isn’t a regulator but an ordinary Latvian user. Between August and October 2025, researchers flagged twenty-one cases of potentially illegal or harmful Russian-language content on Telegram, TikTok, YouTube, and Facebook. These platforms were chosen as a follow-up to a monitoring project Providus conducted last spring.

Each post was chosen for its apparent violations of either Latvian law or the platforms’ own rules.

Telegram demonstrated the weakest performance: its reporting mechanism was virtually non-functional. Researchers monitored content from the channel Baltijas antifašisti (“Baltic Antifascists”), which included posts calling to “destroy Nazis physically” and referring to Latvians as “urodci” (“freaks”) and “banderovci”, a derogatory term widely used in Russian propaganda to describe Ukrainians and their supporters. Providus submitted all five examples through Telegram’s DSA reporting form, but the system repeatedly returned error messages or failed to register the reports. None of the posts were removed.

TikTok responded to the reports within minutes, but all five replies were automatic: “No violations found.” One of them concerned the Zelensky deepfake, a clear breach of TikTok’s own AI content policy, which requires labeling synthetic media. The video was taken down only after Re:Baltica asked why.

Meanwhile, the YouTube channel of an individual, “Matajev Dmitrij”, distributed hate-inciting content, repeatedly using derogatory and dehumanizing expressions against Latvians and public officials (e.g., “stupid Prime Minister Siliņa”). YouTube confirmed receiving the report, but a month after the complaint no decision had been announced and the content was still accessible.

Facebook (Meta) provided the most developed system, complete with acknowledgments and appeals, yet dismissed every case. Only one of six flagged posts was eventually removed, and even then, the user was never told. The post in question, the only one written in Latvian, described in graphic detail how to torture and kill migrants. It suggested methods inspired by the Vietnam War, such as hidden pits with sharpened wooden stakes smeared with feces, or FPV drones for “entertainment hunts.” Initially, Facebook refused to remove the post and rejected an appeal, but after some time the content disappeared without explanation.

“Formal procedures exist,” the Providus researcher Iveta Kažoka concluded, “but the substance of evaluation is absent.”

Providus’s findings echo similar results elsewhere in Europe. A 2025 German study found that only 2 to 11 percent of users across major platforms even managed to access the DSA-specific reporting tools; more than a quarter abandoned the process halfway through. The forms were difficult to find, overly technical, or simply broke partway through.

Elections and What Comes Next

For Estonia, the first real-world test of the DSA came this autumn with its municipal elections. Anticipating online manipulation, the Consumer Protection Authority organized roundtables with police, border guards, and election officials, along with representatives from Meta, TikTok, and X. A 24-hour hotline was established for candidates and authorities to report incidents.

To everyone’s relief, there were no significant problems. “We were prepared for much worse,” said Helen Rohtla. “We saw a few fake profiles imitating authorities and AI-generated posts, but once we flagged them, Meta and TikTok removed them quickly.”

Latvia’s Consumer Rights Protection Centre applied a similar approach during its municipal elections in 2024. “We wanted every institution to know who to call if something serious appeared online,” said Dainis Platacs, Latvia’s DSA coordinator. “We even had direct contacts from Meta and TikTok at the table. Only Telegram refused to engage.”

The Corruption Prevention and Combating Bureau, or KNAB, which oversees political advertising, informed Re:Baltica that communication with most major platforms, except Telegram, has been “generally successful”, although so far it has not needed to request the removal of suspicious content.

The State Chancellery, responsible for coordinating government communication, also confirmed regular contact with social media companies. It stated that officials frequently request that Meta and TikTok remove false or manipulated content, including fake accounts impersonating ministers, the National Armed Forces, or other state institutions. “Such cases have not been frequent recently,” the representative of the Chancellery noted. 

Lithuania and Estonia face one more challenge: most disinformation and deepfakes are not explicitly illegal. 

“If someone makes a deepfake of our president saying nonsense, we can only tell the platform it violates their terms. It’s up to them whether to act,” Helen Rohtla from Estonia explained. 

Latvia has gone further by tightening its election law: any paid campaign material using AI-generated images, audio, or video must clearly state it, and automated campaigning through fake or anonymous accounts is banned. In addition, spreading deepfakes that discredit candidates, parties, or the election process itself can now lead to criminal liability.

Providus researcher Iveta Kažoka is concerned that the gaps they discovered could become critical before next year’s parliamentary elections in Latvia. She urges Latvia’s DSA coordinator to take a more proactive stance by publishing a simple “user action plan” that explains where citizens should turn if platforms ignore their reports.

The platforms themselves offered Re:Baltica mixed responses to the findings. Meta stated that the posts flagged by researchers did not violate its community standards and added that independent fact-checkers can review any identified misinformation. In Latvia’s case, it would be Re:Baltica.

TikTok responded that it had reviewed the reports submitted by Providus and removed three out of five posts for violating its community guidelines. However, the company did not explain why this was not done immediately. 

Telegram, unusually for the company, responded saying that it had fixed a technical error in its DSA reporting form, which “now works properly.” It also claimed that “Telegram is not an effective platform for misinformation, because it does not use algorithms to promote sensational content to unwitting users.”

Re:Baltica may have received these unusually detailed responses because, in late October, the European Commission launched formal proceedings against Meta and TikTok for possible breaches of the DSA. The Commission accuses both companies of failing to provide researchers with access to platform data and of neglecting their obligations to allow users to flag illegal content and appeal moderation decisions. If found in violation, they could face fines of up to six percent of their global revenue. Investigations were also launched last year against X (formerly Twitter), Temu, and AliExpress. So far, none of the platforms has been fined.

Cover photo: Re:Baltica