The Houdini Ads: how and why political ads are still slipping through the filters of Google and Meta in Hungary
Authors: Péter Krekó and Csaba Molnár; Political Capital Institute, Hungarian Digital Media Observatory
There is a country in the European Union that achieved a remarkable distinction ahead of the 2024 European Parliament elections: it became the largest political advertiser on Meta. Despite a population of less than 10 million, more money was spent there on political advertisements than in Germany. The biggest spender was the governing party, which invested more in social media advertising than any other political party across the EU during the 2024 EP campaign. In the same country, in the first nine months of this year, before the ban took effect, government-affiliated actors – including government agencies, Fidesz politicians, government-organized media, and proxy organizations – paid for 87% of the HUF 4.1 billion (around EUR 10.6 million) spent on Google and Meta ads, while opposition parties spent only a fraction of that amount. Even though this year was an election year in Czechia, Hungarian spending on political advertisements was eight times higher than Czech spending. As we found in an earlier research project, this large volume of political advertisements flooded social media with sponsored disinformation and hostile narratives.
This country is Hungary. The governing party, Fidesz, which has been in power continuously for more than a decade and a half, is currently trailing its main rival by more than 10 percentage points in public opinion polls (https://www.politico.eu/europe-poll-of-polls/hungary/). Hungary will hold its most consequential parliamentary elections in sixteen years on 12 April 2026. In such a competitive environment, reaching unaffiliated and undecided voters is crucial for all political actors. However, the loss of political advertising tools presents a particularly acute challenge for the ruling party – one that has long championed and systematically exploited political advertising, both domestically and at the EU level.
Consider how constrained the Hungarian government must feel if it cannot deploy its vast resource advantage during the campaign because of changing platform policies: even before the EU’s Transparency and Targeting of Political Advertising (TTPA) regulation came into full force, both Google and Meta had suspended political advertising on their platforms. Officially, at least.
In practice, however, the transition has been far from smooth. A recent study by the Political Capital Institute – a member of the Hungarian Digital Media Observatory – found that political advertisements are still running on the platforms of both tech giants. These include seemingly harmless cartoons as well as hardcore deepfakes placed by Hungarian advertisers.
Many of these ads are sponsored by government actors: the ruling party, individual government politicians, and government-organized NGOs (GONGOs). Rather than being clearly labeled as political content, they are often (re)classified under unrelated categories such as Business, Finance, Autos, or Internet.
Adaptation strategies
In Hungary, we found that the main pro-government disinformation actors have adopted a fourfold strategy to adapt to this new environment. First, they try to stimulate engagement, both organic and pseudo-organic. Second, they escalate their rhetoric even further – outrage travels further than nuance, and in a crowded information environment, provocation becomes a rational communication strategy. Third, they invest in loyal influencers who bring their own, often non-political, audiences. Finally, they try to circumvent ad bans and slip through with advertisements: actors test platform detection systems, exploit inconsistencies, and use technical or semantic loopholes to get content approved.
After the Ban: Fewer Political Ads – Smarter Manipulation
When Google and Meta phased out political advertising in the EU in autumn 2025, the immediate effect was visible: most major political advertisers disappeared. Parties, politicians and government-aligned proxy actors largely vanished from the official ad libraries. The overall volume of political ads dropped. But they did not disappear.
For example, in October 2025, the Hungarian government and the Prime Minister’s Cabinet Office ran ads promoting the so-called National Consultation – a long-standing push-polling tool used to manufacture consent and mobilize voters. These ads portrayed political opponents as “puppets of the West” and Brussels and warned of looming tax increases. Despite their overtly political content, both platforms classified them as non-political. Some Meta-approved versions appeared carefully designed to evade filters, for example by omitting figures such as Ursula von der Leyen, whose image appeared on offline billboards.
More striking was the activity of the government-organized proxy group National Resistance Movement (NEM). One of its most revealing experiments was an AI-generated cartoon styled as a children’s fairy tale. In it, a fox warns animals living along the Tisza River that the opposition would impose a costly property tax. For Hungarian audiences, the reference to the TISZA party was obvious. Meta’s system let it pass.
NEM then industrialized AI-based smear campaigns targeting opposition leader Péter Magyar. One widely promoted deepfake depicted him in a straitjacket, speaking incoherently. Between 10 October 2025 and 15 February 2026, NEM launched 94 ads promoting nine AI-generated videos. By mid-February, 89 ads – 95 percent – had been labelled as political by Meta, but only after at least HUF 56 million (EUR 148,360) had been spent on them. The damage was already done. The four most popular videos reached 28, 17, 10, and 10 million views, respectively. Comparable videos that were not advertised during this period reached between 14,000 and 57,000 views. Paid promotion made the difference between marginal visibility and mass exposure.
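The figures above can be sanity-checked with simple arithmetic. The sketch below reproduces the 95 percent labelling rate and the HUF/EUR exchange rate implied by the cited amounts (the rate is merely implied by the two figures in the text, not an official conversion):

```python
# Figures taken directly from the text above.
total_ads = 94                 # NEM ads launched, 10 Oct 2025 - 15 Feb 2026
labelled_political = 89        # ads Meta eventually labelled as political
spend_huf = 56_000_000         # minimum spend before labelling, in HUF
spend_eur = 148_360            # same amount as cited in EUR

# Share of ads labelled as political (rounds to the cited 95 percent).
share = labelled_political / total_ads
print(f"Labelled as political: {share:.0%}")   # → 95%

# Exchange rate implied by the two cited spend figures (HUF per EUR).
implied_rate = spend_huf / spend_eur
print(f"Implied HUF/EUR rate: {implied_rate:.1f}")   # → 377.5
```

The implied rate of roughly 377–378 HUF per EUR is consistent with the conversion used for the HUF 4.1 billion / EUR 10.6 million figure earlier in the article.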
The inconsistency of enforcement is equally revealing. Some ads were removed within an hour, others ran for a week. One video was promoted in 30 nearly identical versions; 21 were later classified as political, nine were not. In late November, NEM launched another AI smear video; none of its six ad versions were removed, and all ran their full campaign cycle. Meta evidently conducts retrospective checks even after ads have been launched – and in some cases even after they have ended. This enhances transparency, as spending becomes visible for ads that are labelled as political. At the same time, retroactively labelling campaigns that have already concluded does not meaningfully affect their reach or political impact. The system has a substantive limiting effect only if the review is carried out swiftly, in the early phase of the active campaign period. Google classified similar recruitment ads under categories such as Autos and Vehicles or Consumer Electronics. None were labeled political.
Meanwhile, mobilization shifted toward recruitment. Fidesz built new online networks, including the nearly 60,000-strong “Fight Club” and the “Digital Civic Circles.” After the ban, they ran more than 4,000 ads on Meta to encourage users to join or interact with these groups.
The self-imposed “ban” reduced the visible volume of political ads. Yet it also incentivized experimentation, the use of proxy actors, and AI-driven manipulation. Political advertising did not disappear. It adapted – and in many cases became harder to detect, archive and regulate.
Holes in the net
Google’s and Meta’s ad filtering systems are intended to detect and restrict political advertising, yet our findings reveal significant weaknesses. A central problem is misclassification. Meta often fails to identify political ads altogether, allowing them to run. Google, meanwhile, frequently labels political content under unrelated commercial categories such as Arts and Entertainment, Finance, or Internet and Telecom, effectively masking political ads as non-political material.
Beyond simple errors, Google’s categorization is inconsistent: identical ads are often assigned different labels. This unpredictability undermines transparency, complicates monitoring, and weakens accountability. Reclassification practices further obscure the picture. Some ads initially labeled “Political” were later reassigned to non-political categories before Google’s self-imposed ad ban, without clear explanation. The result resembles systematic mislabeling.
Meta’s review process shows similar inconsistencies. While some versions of political ads are removed quickly, nearly identical versions often remain online for days. AI-generated videos pose an additional challenge, as Meta’s current systems appear unable to reliably detect and manage such content.
Overall, these shortcomings raise serious doubts about the reliability of both platforms’ filtering systems. The causes likely include algorithmic flaws, insufficient human oversight, limited technical investment, and reduced attention to smaller language markets such as Hungary.
Lessons to be learnt
When major platforms restrict or ban political ads, the expectation is that manipulation will decline. In reality, the picture is far more complex. There are two main lessons to be learnt from the Hungarian case.
First, Hungary highlights the sheer difficulty of regulating political advertising and enforcing compliance. Political actors experiment constantly with new formats, intermediaries, and gray zones. The boundary between political and non-political content is often blurred. Issue-based campaigns, proxy pages, deepfakes and “informational” content can easily function as de facto political advertising while avoiding formal classification. This makes enforcement reactive and uneven, especially in smaller-language markets where platforms invest fewer resources in oversight.
Second, the Hungarian case offers a playbook – or at least a set of benchmarks – for disinformation actors elsewhere in the EU. What works in one member state is quickly observed and adapted in another. Techniques developed to evade platform restrictions do not remain local innovations. Instead, they travel. From rebranding political messaging as civic activism to outsourcing communication to loosely connected networks, Hungarian actors have demonstrated how to operate within – and around – platform rules. In this sense, Hungary has functioned not just as a laboratory for post-truth, but also as a laboratory of regulatory circumvention.
Unless tech platforms take their self-imposed ban policies more seriously and invest more in technical systems, human oversight, and enforcement capacities, there is a real risk that the Hungarian election campaign and other upcoming campaigns will be influenced by a significant number of illicit political ads.