Europe’s Internet Law Is a Rorschach Test

Someone is wrong on the internet, so I thought I’d send out a newsletter. This time, it’s about European tech regulation…a topic that rightly makes any normal person’s eyes glaze over, but that in 2025 is part of the culture wars and the trade wars simultaneously. So, it matters. Just this week, the EU decided to shelve an investigation into the business practices of Elon Musk’s X until trade talks are concluded.

I have issues with parts of Europe’s Digital Services Act, but the far-right effort to reframe it in service of maximizing their transnational political power is something Americans should understand. BS artists repeating “Digital CENSORSHIP Act!” on social media is an effective framing device, because repetition works; fighting purported elite-bureaucrat villains sells a lot of Substack subscriptions. But it’s also far removed from reality, so I thought I’d write a quick FAQ covering some facts about the regulation, and why it’s become something Trump, Vance, Zuckerberg et al. are aligned on.

For those who like geopolitical longform: here’s an explainer in Lawfare covering the political games the far-right is playing. For those who prefer chatty conversations, here’s a podcast (Ghost would not let me upload a clip, maddeningly) with intermediary liability legal expert Daphne Keller, free speech legal scholar Joan Barata, and Tech Policy Press contributing editor Dean Jackson covering similar material.

Now, the FAQ:

What Is the Digital Services Act and Why Is the Trump Administration Talking About It?

The Digital Services Act (DSA) is the European Union’s rulebook for online intermediaries—services that host, carry, or recommend content, like social media platforms, marketplaces, search engines, and hosting providers. It passed in 2022, but its implementation has rolled out in phases, with its most important provisions coming into full effect in 2024 and 2025—particularly for Very Large Online Platforms (VLOPs), those with over 45 million monthly active users in the EU.

What does the DSA include?

The DSA is primarily a transparency and accountability law. Some of its key obligations include:

  • Notice-and-Action: Platforms must offer a clear way to flag illegal content, must act on valid notices, and the actions taken must be entered into a public database (Article 16; a sketch of querying that database follows this list)
  • “Trusted flagger” programs: Certain publicly-designated third parties receive priority within platform reporting flows; platforms must report how many notices they receive from these flaggers, and how they handle them (Article 22)
  • Transparency: Requires regular public reports detailing moderation actions, ad targeting systems, and the logic of recommender algorithms (Articles 24 & 42)
  • Systemic-Risk Audits: Very Large Online Platforms must identify and mitigate risks – from disinformation to child safety – and submit to independent audits (Articles 34-35)
  • Researcher Data Access: Approved scholars get access to data needed to study platform effects; platforms can refuse data if there are security or privacy concerns (Article 40)
  • User-Rights Package: Users can appeal platform moderation decisions and demand timely explanations for removals, demotions, or account suspensions (Articles 16-22)
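
As a concrete illustration of the notice-and-action bullet above: the moderation actions that large platforms take in the EU are logged as “statements of reasons” in a public database, the DSA Transparency Database (transparency.dsa.ec.europa.eu), which anyone can browse or query. Below is a minimal Python sketch of what pulling recent entries might look like. The endpoint path, parameter names, and field names are my assumptions for illustration only – check the database’s official API documentation before relying on anything like this.

    import requests

    # Assumed base path for the DSA Transparency Database API; the real
    # routes may differ - this is an illustrative sketch, not a reference client.
    BASE = "https://transparency.dsa.ec.europa.eu/api/v1"

    def fetch_statements(platform_name: str, limit: int = 20) -> list[dict]:
        """Fetch recent 'statements of reasons' logged by one platform."""
        resp = requests.get(
            f"{BASE}/statements",  # assumed endpoint name
            params={"platform_name": platform_name, "per_page": limit},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("data", [])

    for sor in fetch_statements("X"):
        # Each entry records what was done (removal, demotion, suspension)
        # and the legal or terms-of-service ground the platform cited.
        print(sor.get("decision_visibility"), sor.get("category"))

The point is less the specific fields than the existence of the ledger itself: no comparable public record of moderation decisions exists for U.S. users.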

Why is this regulation suddenly a topic of conversation in the U.S.?

There’s an underlying political motivation, and then a triggering event that delivered a news hook. 

The political motivation is that the new administration wants to help what it calls its “civilizational allies” (ideologically-aligned political parties in Europe) get a leg up in any way possible. Several European countries have hate speech laws, some far-right parties have repeatedly fallen afoul of platform hate speech rules, and the DSA now requires that very large online platforms address illegal forms of hate speech. Reframing the DSA as a “censorship law” is an effective way to attack both the social media law and the hate speech laws simultaneously.

You may have noticed that the US has also just imposed tariffs on Brazil because the Brazilian judiciary is prosecuting former President Jair Bolsonaro, another “civilizational ally” of Trump. Brazil, too, has recently gone to war with X over how it handled the accounts of users the country alleged were involved in plotting a coup. It recently passed a social media law, and there, too, the Trump administration is complaining about tech regulation of American companies. Yet the administration has made no noise about social media takedown laws or censorship in countries run by its authoritarian, censorious allies, who use takedown requests against politicians from parties the far-right dislikes.

Now on to the news hook…

What is the Disinformation Code of Practice?

The Disinformation Code of Practice was originally a self-regulatory framework, developed by the European Commission in collaboration with major tech platforms (particularly Meta), fact-checkers, civil society, and advertisers. On July 1, 2025, it was converted into a requirement under the DSA. Its stated purpose is to reduce the spread and impact of online disinformation by requiring participating platforms to adopt concrete measures across several areas:

  • Ad transparency: Platforms must clearly label political and issue-based ads, disclose who paid for them, and maintain searchable ad libraries (a sketch of querying one appears after this list)
  • Demonetization of disinformation: Platforms and ad networks are expected to disrupt the revenue streams of clickbait farms and pages run by foreign influence operations
  • Addressing inauthentic behavior: Signatories must make best efforts to detect and remove fake accounts and bots
  • Empowering users: Platforms should provide tools for users to understand and contextualize what they see (e.g., through labeling), and give them options to report suspected disinformation
  • Supporting fact-checkers: Signatories must cooperate with independent fact-checkers and ensure that debunked content is downranked or otherwise clearly marked
  • Researcher access: Platforms are required to guarantee data access for vetted researchers
  • Transparency: Platforms are required to publish regular, detailed transparency reports about their enforcement actions, and about the performance of their fact-checking partners, viewable by the public in a transparency center
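
To make the ad-transparency item concrete, here’s roughly what searching one of those ad libraries looks like, using Meta’s ads_archive endpoint on the Graph API as the example. Treat this as a sketch: the exact parameter and field names shift between API versions, and you’d need your own developer access token.

    import requests

    TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder: issued via Meta's developer portal

    # Search political/issue ads that reached users in Germany. Parameter
    # and field names are best-effort; check Meta's current API docs.
    resp = requests.get(
        "https://graph.facebook.com/v19.0/ads_archive",
        params={
            "search_terms": "climate",
            "ad_type": "POLITICAL_AND_ISSUE_ADS",
            "ad_reached_countries": '["DE"]',  # JSON-encoded country list
            "fields": "page_name,bylines,ad_delivery_start_time",
            "access_token": TOKEN,
        },
        timeout=30,
    )
    resp.raise_for_status()
    for ad in resp.json().get("data", []):
        # "bylines" carries the disclosed "paid for by" entity, when present.
        print(ad.get("page_name"), "|", ad.get("bylines"))

None of this involves a regulator touching content; it’s disclosure infrastructure that journalists, researchers, and rival campaigns alike can query.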

Each platform sets its own policies for how it will address these things. There is no notice-and-takedown provision in the Disinformation Code of Practice; regulators can’t request content takedowns.

Zooming back out to the whole regulation. So the EU is policing American speech?

No. The DSA regulates companies, and specifically their content policies covering European users – not U.S. citizens unless they’re viewing content within the European market. It can’t force anyone in Kansas to delete a tweet. 

So the EU is taking down European speech?

Again, no. In most cases, it can’t force anyone in Europe to delete (or even hide) a tweet either, unless the content is illegal. Hate speech is illegal in certain European countries; this is a difference between European and American speech culture, and is reflected in those countries’ laws. However, when it comes to things like disinformation, most of the content is legal speech that would not be censored by the DSA. 

So what does the DSA do?

What it primarily does is require disclosure: make an ad library searchable, open research APIs, publish takedown numbers, provide a mechanism for user appeals.

Where disinformation is concerned, the law requires that platforms have policies to define and address it – declaring that they will take down inauthentic accounts or label content linked to disinformation campaigns, for example, would meet this obligation – and that they publish reports showing they are adhering to the policies they have defined.

In some areas, the disclosure requirements provide far more transparency and user protection than American users have: EU users receive replies to appeals within one week, for example, and can demand a moderation rationale within 48 hours, while Americans get a generic form response – if they’re lucky.

Holding aside the hysteria – what are the legitimate critiques?

There are things that many Americans find surprising and distinctly different from how our regulations tend to work. 

  • Core terms in the law (“systemic risk,” “disinformation”) are fuzzy and poorly defined, which opens the door to interpretation problems. This frustrates platforms and some researchers alike. Vague rules plus big fines run counter to how American regulatory environments typically work.
  • The European Commission wears two hats: political actor *and* top enforcer. That concentration of power worries civil libertarians. In a prominent example of ill-conceived overenforcement, former Commissioner Thierry Breton sent a letter to Elon Musk demanding in advance that an upcoming Twitter Spaces interview with Donald Trump not include any “harmful content.” The other commissioners were unaware that Breton was going to send the letter and expressed disapproval of the absurd overreach, but the point was made.
  • Compliance costs are real and can be very onerous (audit teams, risk-assessment consultants, legal fees).

Is it worth defending an imperfect regulation?

“Burdensome,” “imperfect,” and “censorious” are not synonyms. The DSA has areas for improvement, particularly around its lack of precise definitions and its high compliance overhead. There are slippery-slope concerns around politicized enforcement, but there are also transparency measures and guardrails in place. Its loudest critics skip the nuance and resort to apocalyptic sloganeering, branding any oversight “totalitarian.” This conveniently helps platforms lobby for looser rules, or no rules, everywhere. And also, quaintly, I just like arguing from facts and want people to be informed.

There are really great tech policy voices, some of whom I disagree with at times, who have been walking the nuanced-critic line while remaining grounded in reality. Some of them appear in the podcast above: Daphne Keller, Joan Barata, and Dean Jackson. Others include Jacob Mchangama, Mike Masnick, and Kate Klonick.

So who’s driving the moral panic about DSA “censorship”?

A coalition of:

  • U.S. populist politicians (Jim Jordan, J.D. Vance, Marco Rubio) who’ve turned “censorship” claims into partisan cudgels for several years now
  • Certain tech executives who once praised transparency (in the case of Meta, they helped write the legislation!) but now see a way to get out of fines and such
  • Culture-war grifters and propagandists who monetize outrage

All have strong political incentives to equate any regulation with tyranny.

Why should an American reader care?

Because many of the transparency and accountability provisions in the DSA have, ironically, appeared in U.S. draft tech regulation before – including regulation authored by Republicans! This fight will shape whether future U.S. transparency bills survive the inevitable “censorship” smear. Platforms in the U.S. have already yanked data access tools simply because they felt like it; what American users have exists entirely at the whim of the companies. If people want things like a right to appeal bad social media moderation calls, or other user protections, it’s worth paying attention to who is framing this fight.

The DSA is neither the savior of democracy nor an authoritarian gag order. It’s a messy transparency regime that Brussels is still debugging – while incentivized actors weaponize the narrative for political gain. If we conflate regulation with censorship, we lose the chance to tackle some of the real problems that the DSA does address, and hand platforms a ready-made excuse to dodge accountability.