A Comment on the Comment Call: Dissecting the FTC’s Inquiry Into Content Moderation

This past Wednesday marked the deadline for public comments in the FTC’s Request for Public Comment Regarding Technology Platform Censorship. Calls for comment are always an interesting exercise in democratic participation: you get industry professionals and policy wonks with a stake in the outcome, and very passionate citizens who really want to tell the government what they think.

This was one of those inquiries where the bias leapt from the opening sentence. Andrew Ferguson, newly appointed head of the FTC, isn’t interested in whether tech platforms are unfairly censoring users based on their speech or affiliation — he wants to know how. (The inquiry followed an executive order entitled “Restoring Freedom of Speech and Ending Federal Censorship”; these things are part of a broader playbook.)

Many early responses to the call rightly pointed out that the foundational premise was bullshit. The First Amendment rights on a private platform belong to the platform, not to the user. If Meta wants to ban every person who posts the word “cat,” it can. Moreover, researchers consistently find no evidence of viewpoint-based intent to censor by social media platforms. Right-leaning accounts frequently outperform left-leaning accounts and have for years. While there are obviously bad individual moderation calls (hUnteR bIdEn’s lApToP), that does not equate to systemic bias.

Nonetheless, loudly and incessantly proclaiming on all channels that one is viciously and thoroughly silenced has proven politically advantageous. To sustain the aggrievement campaign, “censorship” has been expansively redefined to include not just takedowns or bans, but downranking (“soft” censorship), fact-checking, labeling, demonetization, and failure to be algorithmically boosted — i.e., not being handed an audience.

Platforms, however, have a right to do all of those things — as courts have repeatedly reaffirmed. This is inconvenient for Andrew Ferguson.

So the FTC is now floating a different theory: that moderation might constitute an unfair or deceptive business practice. The questions in Ferguson’s call solicited evidence of such unfairness: “Were users induced into joining and investing their time and money in a platform under the expectation of one set of moderation policies, only to have the rules changed from under them? Were users targeted by such adverse actions able to find adequate substitutes in other platforms?”

A lot of influencers now make their living by attracting engagement. If a platform decides that grifters pushing juice as a cure for cancer are Bad, Actually — and chooses to no longer amplify cancer quackery posts or run ads before cancer quackery videos, in accordance with clearly articulated new policies — cancer-quack influencers could argue their “businesses” are being harmed.

In a public comment call, anyone can submit a comment saying, “My Insta account isn’t growing, they are shadowbanning me.” No evidence required. Head over to the portal and you can read possibly thousands of these. (This isn’t mockery; people genuinely don’t understand how algorithmic curation works. Back in 2018, I occasionally chatted with very small accounts who were sure they’d been shadowbanned. “My friends don’t see all my posts” was the justification for their theory.)

The FTC’s other major concern is purportedly anti-competitive behavior: are platforms coordinating moderation policies in ways that suppress certain speech—or disadvantage users and rivals? This is an attempt to frame moderation decisions as deceptive business practices, or as collusive behavior that harms consumers…to recast editorial discretion as market manipulation.

I decided to open my comment with a quick summary of an incident that’s become one of the most popular examples of supposed collusion and censorship by social media platforms: moderation of the lab leak hypothesis.

I didn’t follow the lab leak content policy debate closely while it was happening. Beyond writing about tinfoil-hat diplomacy out of China (when the Wolf Warriors alleged COVID originated at Ft. Detrick), origin narratives were out of scope for what I was studying at the time. I generally don’t support content takedowns; I don’t think they work, and the forbidden knowledge effect is huge. I did think the lab leak theory seemed like a bad candidate for a dedicated policy in the first place.

But ahead of a recent conversation with someone who cared a lot about the topic, I dug through platform blog archives and reviewed the actual policies. And I was surprised by how divergent they’d been. Judging from media headlines and congressional hearings, most people who follow the topic likely think the platforms all banned it aggressively for an extended time.

In reality, Meta was the only platform that instituted a hard ban — and only for three months, from February 2021 through May 2021. It initiated the ban following discussions with the WHO and removed it when Biden asked the intelligence services to pursue a deeper investigation. YouTube did not have a lab leak policy; the topic wasn’t moderated there. Twitter didn’t have a policy specific to the topic; its policies required evidence that content would cause harm to justify the most serious moderation actions, and it seemingly took down very little writ large. TikTok’s policies made no specific mention of the topic.

Lab leak moderation on major social media platforms didn’t show coordination, collusion, or even ideological zealotry. In Meta’s case, it reflected the messiness of policy decisions made under uncertainty and with an eye to reputational risk — and different companies made the call in different ways. In that sense, it’s a case study in the exact opposite of what Ferguson’s FTC call implies.

The danger before us isn’t a cartel of platforms crushing dissent. It’s partisan government actors using anecdotes and misrepresentations to justify regulatory overreach into editorial decision-making protected by the First Amendment.

My comment focused on what regulators should be working to ensure: a market where users have many platforms to choose from in accordance with their values, with transparent policies that are clearly disclosed and fairly enforced, and that offer an opportunity for redress. To help gauge whether policies are fairly enforced, we need data transparency and researcher access — Congress had an opportunity with the Platform Accountability and Transparency Act. If lawmakers want to know whether conservatives are being unfairly suppressed, well, platforms revoking API access makes it pretty hard for anyone to examine that. (I also support interoperability, decentralization, and increasing user choice via middleware: paper here, essay here!)

Many strong tech policy thinkers submitted thorough and interesting comments that I want to highlight, and that cover the 1A issues in play with the FTC call: here are the submissions from The Future of Free Speech, Public Knowledge, FIRE, TechFreedom, and CDT & EFF. Also: a special nod to Free Press’s comment, because they call out the unlawful firings of FTC Commissioners Bedoya and Slaughter. If you submitted a policy brief, please leave it in the comments below…

Ironically, during the comment period, the FTC itself moderated submissions to the portal — censoring, among other things, a complaint that TikTok was moderating someone’s posts for using profanity. Apparently, the FTC didn’t want to host his content either.

It is unclear whether Andrew Ferguson’s FTC colluded with TikTok on this judgment call.