SCOTUS cases on content moderation, and my NYT op-ed

Hello everyone! It feels like a year has happened in the last week. On Monday, I published an op-ed in the New York Times, which laid out how the Stanford Internet Observatory's work was targeted by influencers who misrepresented it as "censorship," leading to congressional investigations – and how material obtained under congressional subpoena somehow wound up in the hands of right-wing partisans who were suing us. The 2024 election approaches – currently with the added uncertainty of the debate debacle – even as the networks that fought to counter false or misleading narratives in 2020 are being dismantled. The op-ed isn't exactly uplifting, but hopefully it makes people – and more importantly, institutions – pay attention to what's happening. This can happen to anyone.

One of the things I didn't discuss in the NYT op-ed, due to word count, was the then-impending Supreme Court decision in Murthy v. Missouri (previously Missouri v. Biden). The Murthy case alleged that government officials had been jawboning private social media platforms to get them to take down "disfavored speech." (In the context of social media content moderation, jawboning refers to government officials attempting to persuade, cajole, or strong-arm private platforms into moderating in certain ways, or into changing their policies.) The decision in the Murthy case – that the plaintiffs lacked standing – was announced on Wednesday. Since our work got caught up in an early stage of the case, I wanted to highlight the decision in this newsletter.

This case is very important: we should all be able to agree that jawboning is bad, and not a practice that government officials in a democracy should engage in. However, the case also had an incredibly flimsy evidentiary basis. The suit was brought by Missouri and Louisiana, along with three 'COVID contrarian' doctors, a 'health freedom' activist, and the right-wing media outlet Gateway Pundit (which appears to have recently declared bankruptcy amid defamation litigation over its election-denial coverage). Although the aggrieved individuals built large social media followings by branding themselves as wronged parties targeted by the government, there was little to no evidence of government officials actually mentioning them to a tech platform. As Justice Barrett's majority opinion noted:

The Fifth Circuit relied on the District Court’s factual findings, many of which unfortunately appear to be clearly erroneous. The District Court found that the defendants and the platforms had an “efficient report-and-censor relationship.” But much of its evidence is inapposite. For instance, the court says that Twitter set up a “streamlined process for censorship requests” after the White House “bombarded” it with such requests. The record it cites says nothing about “censorship requests.” Rather, in response to a White House official asking Twitter to remove an impersonation account of President Biden’s granddaughter, Twitter told the official about a portal that he could use to flag similar issues. This has nothing to do with COVID–19 misinformation.... Some of the evidence of the “increase in censorship” reveals that Facebook worked with the CDC to update its list of removable false claims, but these examples do not suggest that the agency “demand[ed]” that it do so. Finally, the court, echoing the plaintiffs’ proposed statement of facts, erroneously stated that Facebook agreed to censor content that did not violate its policies. Instead, on several occasions, Facebook explained that certain content did not qualify for removal under its policies but did qualify for other forms of moderation.

The Fifth Circuit's judgment in Murthy v. Missouri was therefore quite reasonably reversed and remanded for lack of standing; the case never should have gotten as far as it did in the first place. Facts won out over vibes. The legitimately important question of the limits of jawboning will have to be decided another day.

Another SCOTUS decision on content moderation and the First Amendment worth noting came out today: the decision in the NetChoice cases (Moody v. NetChoice and NetChoice v. Paxton), which sought to address "whether Florida and Texas can enact laws prohibiting social media platforms from moderating content posted by their users." Justice Kagan's majority opinion reiterated that a platform curating and moderating content is a First Amendment-protected activity. To say that more directly: content moderation is not "censorship" but rather platforms exercising their own First Amendment rights as they decide what to carry. Someone go tell the Twitter Files guys. Vox explainer here, Washington Post coverage here, and many law/tech blogs and newsletters will likely have their analyses out by tomorrow morning.

I've been trying to post more short-form threads on moderation, rhetoric, and propaganda on Threads, Mastodon, and Bluesky lately, so if you're on there, say hi. And I'll share one final thing: my chat with Corey Nathan on the podcast Talkin' Politics and Religion Without Killin' Each Other. As folks in the U.S. head into the July 4 holiday with the family BBQs it entails, here's to everyone making an effort to do exactly that. Happy Independence Day!