Meta's Unsuccessful Crackdown: How 4,000 Nudifying Ads Slipped Through the Cracks
Breaking down our latest report, in which we uncovered a major loophole enabling ads for nudifying apps to reach hundreds of thousands of Meta users
When Meta announced new measures in June to prevent AI "nudifying" applications from advertising on its platforms, the company claimed it had "developed new technology specifically designed to identify these types of ads." The announcement came after sustained reporting exposed over 10,000 Meta ads promoting tools that allow users to create deepfake sexual abuse images by digitally removing clothing from photos of real people.
But Meta's promised crackdown was not all it cracked up to be. Together with Indicator Media, we uncovered over 4,000 additional ads for nudifying apps running on Instagram and Facebook since June 2025, reaching hundreds of thousands of users, including minors. Worse yet, Meta's implementation of European Union transparency requirements has opened an enforcement loophole that advertisers are exploiting, effectively defeating rules designed to give citizens meaningful insight into who funds and places the ads they see.
This is another case study in how major technology companies systematically allow bad actors to undermine regulatory frameworks while maintaining the pretense of compliance, all while facilitating the spread of tools designed to sexually exploit women and girls.
A Big Transparency Loophole
We initially found these nudifying apps by chance. While scrolling through Instagram, I encountered repeated ads for them, served seemingly at random alongside ads for other content that violates Meta's terms of service, including pornography and crypto scams. What stuck out about the nudifying ads, however, was what we found when we examined them in the Meta Ad Library afterwards.
Under the EU's Digital Services Act, platforms like Meta must disclose who pays for and benefits from advertisements shown to European users. This requirement exists precisely to prevent malicious actors from operating in the shadows, whether they're spreading disinformation, running scams, or—as in this case—promoting tools for image-based sexual abuse.
Yet Meta's ad-buying system contains a glaring loophole that renders these protections meaningless. Advertisers can simply enter gibberish text or numeric strings (like “AB” or “111”) as their business name, and Meta's system accepts it without question. The result is a transparency regime that exists only on paper, providing European regulators and citizens with functionally useless information about who is targeting them with harmful content.
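To make concrete how low the bar is, here is a minimal sketch, in Python, of the kind of plausibility check that would reject the placeholder names we found. The function and its thresholds are hypothetical and not drawn from Meta's actual systems; genuine verification would also need to match names against business registries rather than merely screen string patterns.

```python
def is_plausible_business_name(name: str) -> bool:
    """Reject obvious placeholder advertiser names.

    Hypothetical illustration only: real verification would check the
    name against business registries, not just its surface form.
    """
    name = name.strip()
    if len(name) < 3:                         # one- or two-character strings like "AB"
        return False
    if name.isdigit():                        # purely numeric strings like "111"
        return False
    if not any(ch.isalpha() for ch in name):  # no letters at all
        return False
    return True


# Both placeholder names from our findings fail even this trivial check,
# yet Meta's ad-buying flow accepted them.
for candidate in ["AB", "111", "Acme Media GmbH"]:
    print(f"{candidate!r} -> {is_plausible_business_name(candidate)}")
```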
The contrast is instructive: when we examined Meta's ad disclosures for the same nudifying apps in Singapore and Taiwan, the company provided accurate business information. This suggests that Meta possesses both the technical capability and the business data necessary to implement meaningful transparency—but chooses not to do so in the European market.
Platform Accountability
Meta's handling of nudifying app ads reveals the hollowness of Big Tech's self-regulatory promises. After facing public pressure and potential legal liability, the company announced new enforcement measures with great fanfare in June. But the continued proliferation of these ads demonstrates that Meta's "crackdown" left much to be desired. Only after Indicator reached out to Meta for comment did the company take action against the 4,000 ads we found. We at ASP had initially reported several dozen via in-app channels on Instagram, with mixed results; in one case, we reported two ads for the same nudifying app, and Meta removed only one. It simply shouldn't take media outreach to take down ads that so plainly violate Meta's terms of service.
Meta's failure to act on these ads is particularly striking given the company's technological resources. Meta employs thousands of content moderators and has invested billions in automated detection systems. Yet somehow, over 4,000 ads for applications that exist solely to create non-consensual intimate imagery managed to evade detection after the company specifically claimed to have developed technology to identify them.
The Real-World Stakes
Behind these numbers lies a technology designed to violate women's privacy and bodily autonomy. AI nudifying applications don't create abstract harm. They enable harassment, blackmail, and sexual exploitation of real people. When these tools are advertised to millions of users, including teenagers, the potential for abuse multiplies.
The European Commission has recognized this reality, investigating whether these applications violate EU privacy laws and constitute illegal forms of image-based sexual abuse. But regulatory action becomes nearly impossible when platforms enable advertisers to hide their identities behind fake names and shell companies.
Meta's transparency failures allow bad actors to violate the Digital Services Act and actively undermine law enforcement efforts to combat the criminal enterprises behind these applications. When investigators can't identify who is paying for ads, they can't trace funding sources, map distribution networks, or hold bad actors accountable.
Regulatory Reckoning
The European Commission now faces a critical test of the Digital Services Act's effectiveness. Meta's violations are clear, documented, and ongoing. The company has systematically failed to implement meaningful transparency measures while continuing to profit from ads promoting tools designed to sexually exploit women and girls.
If regulators allow Meta to maintain this charade of compliance while reaping advertising revenue from image-based sexual abuse, they will signal to every major platform that the DSA's requirements are optional suggestions rather than binding legal obligations.
The solution is straightforward: Meta must verify advertiser identities before accepting payments, implement meaningful transparency measures that actually identify who is behind each ad, and face financial penalties for continued violations. The company clearly possesses the technical capability to implement these measures, as evidenced by its more accurate disclosures in other markets.
Our work together with Indicator reveals how platform "self-regulation" operates in practice—not as a genuine effort to address harmful content, but as an elaborate performance designed to avoid real accountability while maintaining profitable relationships with bad actors. European regulators have the tools to end this charade; for the sake of women and girls around the world, we hope they’ll use them.
Thanks for being a subscriber and tuning in for another week of exclusive content on our Substack! We’re excited that our next workshop is only two days away. On Sunday, September 21, we are hosting “OSINT For Everyone” online from 1:00-2:30pm EDT. If you’re curious about upping your digital sleuthing skills, join us! Paid subscribers receive 10% off, and Founding Members get 25% off if you sign up now ;)