If you give a mouse a cookie...
Hello! Happy new year! It was a busy week in tech policy land. Yesterday was the SCOTUS hearing on TikTok (interesting legal takes). Also, Some Personal News: I've joined Lawfare as an editor and will be doing a lot more posts and podcasts. I'd kind of like to do some debate-y podcasts? Short video content? If there's something you think I should do, shoot me a note.
In this newsletter I want to talk about Tuesday’s announcement from Mark Zuckerberg – a huge pendulum swing in Meta’s content moderation policies, detailed in a blog post titled "More Speech and Fewer Mistakes". Much of it was a pretty transparent response to ref-working from the right, so I want to walk through the specific changes and the nuance within them, then turn to the longstanding issue of platforms shifting in response to ref-working from politicians, and how we users can improve our own experiences here. So: what changed?
- Ending Third-Party Fact-checking: Zuckerberg announced Meta is getting rid of its independent third-party fact-checking program in the United States (it will continue globally, for now). He justified this by claiming the fact-checkers were “too politically biased,” offering no examples.
- Introducing 'Community Notes': Replacing fact-checking is “Community Notes,” a system similar to those on X (formerly Twitter) and YouTube, where users add context to posts. Notes are written by everyday users, then vetted by other users through a consensus process (a bridging-based algorithm; see the sketch after this list) requiring agreement across ideological lines before a note is made public.
- Reducing Auto-Moderation/Adjusting Enforcement: Meta is scaling back its reliance on automated moderation systems, except for illegal or high-severity content; users will instead report objectionable content for review. Many people have had bad interactions with auto-moderation, which scans the platform for policy violations and sometimes misconstrues posts, whether overaggressively or just foolishly, and takes them down. Yet the platform has long proudly pointed to this system, noting that it catches over 95% of bad content before humans ever get to it.
- Bringing Back Civic Content: Political content is now legal again on Facebook! After January 6, 2021, political posts were downranked. Recently, users - especially Harris supporters on Meta's Threads - complained that political accounts were largely hidden. Users can now choose to see minimal, moderate, or abundant political content.
- Relocating Moderation Teams: Some of the US-based content review teams will apparently move from California to Texas because, Zuck said, Meta will "build trust" by locating employees in places where there is "less concern about the bias of our teams." (Former employees took to social media to note that many of these teams have, in fact, been in Texas for years.)
- Collaboration with Government: Zuck expressed a desire to work with the Trump administration to counter international censorship pressures. He mentioned Latin America, China, and Europe.
- You Can Now Call LGBTQ People "Mentally Ill": Yes, you read that right. Content policies related to immigration and gender - Zuckerberg called out these two areas - have changed. The change log of the new "Hateful Conduct" policy is here. I don't work on hate speech, but in the past, platforms tried to balance free expression - allowing arguments and discussions about various "culture war" topics - with minimizing harassment. This meant drawing the line at bullying individuals, or dehumanizing specific people or groups. That line seems to have moved, and it's what I found most surprising and disturbing. (On Friday, a few days after Tuesday's speech, Meta apparently also removed LGBTQ themes from Messenger and made changes to its DEI policies and its bathrooms.)
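A quick aside on how "bridging" works, since it's doing the heavy lifting in the Community Notes design. Below is a deliberately simplified Python sketch of the idea - my own toy illustration, not Meta's (unpublished) implementation, and far simpler than X's production system, which learns rater and note "viewpoint" factors via matrix factorization over the full rating matrix. The function name, clusters, and thresholds here are all hypothetical. The point it demonstrates: a note that only one faction loves never ships.

```python
from collections import defaultdict

def note_is_public(ratings, min_ratings=5, threshold=0.8):
    """Toy bridging check. `ratings` is a list of (rater_cluster, is_helpful)
    tuples, where rater_cluster is an inferred viewpoint group.
    (Hypothetical names and thresholds -- illustrative only.)"""
    if len(ratings) < min_ratings:
        return False  # too little signal yet: one real source of slow notes
    by_cluster = defaultdict(list)
    for cluster, is_helpful in ratings:
        by_cluster[cluster].append(is_helpful)
    if len(by_cluster) < 2:
        return False  # no cross-ideological signal at all
    # Require strong agreement that the note is helpful within *every*
    # cluster that rated it -- agreement across the divide, not just volume.
    return all(sum(votes) / len(votes) >= threshold
               for votes in by_cluster.values())

# A note only one faction loves never ships; a note both sides rate
# helpful does:
partisan = [("left", True)] * 6 + [("right", False)] * 6
bridging = [("left", True)] * 6 + [("right", True)] * 5 + [("right", False)]
print(note_is_public(partisan))  # False -- no consensus across clusters
print(note_is_public(bridging))  # True  -- both clusters found it helpful
```

Note the failure modes baked into any system like this: a contested note can sit in limbo forever if one cluster never comes around, and thinly rated notes never surface - exactly the scale and speed problems I get into below.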
This isn’t just about policy; it’s about appeasement. As I read it, the speech had a primary audience of one: the incoming President, who'd previously threatened to jail Zuckerberg. When Trump was asked in a press conference whether the new policies were a direct response to "the threats that you have made to him," he was quite blunt: "Probably. Yeah, probably."
It's a remarkable statement. I've personally been embroiled in several right-wing manufactroversies over the last two years, spun up by crank media and election-denying Congressmen like Jim Jordan. They allege that a group of academics – who are not government actors, and wield no regulatory power – mercilessly pressured tech companies into changing their policies during the 2020 election. There was also a court case, Murthy v. Missouri, which alleged that the Biden administration pressured Meta into taking content down (the Court tossed it for lack of standing, noting the weak evidence, but did not opine on the central issue).
The mere allegation of jawboning by a Democratic administration sparks a sweeping Congressional investigation and numerous lawsuits. The incoming-and-former President of the United States casually acknowledging that a private company had "probably" crafted a suite of policies desired by his party in response to "threats" is just another Tuesday.
The policy changes themselves are a mixed bag: increasing civic content and minimizing false positives increase free expression. Scaling back auto-moderation addresses frustrations with imprecise takedowns and over-enforcement, though it shifts the burden to users, who are often skeptical that their reports will be acted upon. Allegations of bias in the fact-checking program don't seem rooted in fact; there have certainly been mistakes, such as during COVID, when information was evolving rapidly. Research suggests the impact of fact-checking is positive but mixed, though analyses rarely account for return on investment. Community Notes is great conceptually; I've participated since 2022 and am a fan. In practice (on X), however, it struggles with scale, speed, and sometimes political pressure. Notes can be slow, or fail to reach consensus (so they simply never appear). Some involve issues that benefit from journalistic skills or deep expertise, like calling sources or analyzing images.
Fact-checking and peer-driven systems like Notes should complement each other, particularly as Notes matures. But that assumes these changes are being made in good faith—and the tone and language throughout the announcement suggest that they're extremely political. I wish we could just be normal about evaluating the evidence for what works and what doesn't, but that's a fantasy in this deeply polarized time.
Meta didn’t kill the fact-checking program because of its limitations – it killed it because the program became politically toxic. For as long as prominent politicians' tweets have been fact-checked, the MAGA right has worked to delegitimize fact-checking as an anti-conservative weapon of the mainstream media. They understand that platforms shape public opinion and fight to maximize their influence at every turn - by working the referees. One means of shaping public opinion is repeating terms like “censorship” and “bias” in association with "moderation" ad nauseam; they are now thought-terminating clichés.
Labeling isn’t "censorship" - it’s speech. It adds more speech, in the form of contextualizing information attached to content that stays up. It informs the viewer, who remains perfectly free to ignore it. Community Notes will do the same thing! And, ironically, many Notes will cite the same professional fact-checking sites that Meta just abandoned - for as long as they have the financial capacity to exist.
Platform policies are an expression of platform values. That it is now OK to tell a gay teenager that he's mentally ill is Zuckerberg's to own. It can't be passed off as a calculated business decision; most people across the political spectrum don't want rampant bullying, harassment, or even misinformation on their social platforms. It may be that Meta anticipates that the newly-empowered political elite will post derogatory commentary about immigrants or trans people on its platforms fairly often, and doesn't want to deal with "censorship" complaints if it actions those posts in accordance with its policies. Changing the policies, therefore, solved Meta's short-term problem, but make no mistake: the rules that still exist will be declared illegitimate totalitarian censorship soon enough, once the mice are done with their cookie. What these changes signal is a retreat from responsibility to users in favor of preemptively appeasing vocal political critics.
This retreat isn’t accidental. It’s the culmination of years of conservatives “working the referees": pressuring platforms into changing or selectively enforcing the rules to maximize their advantage. Ref-working has a storied history. For those new to the content moderation wars, I coincidentally published an essay on Monday over at Noema about why ref-working matters so very much. It’s the If You Give a Mouse a Cookie model of governance: make one concession to quiet the criticism, and suddenly you’re constantly rewriting the rules to appease your loudest detractors.
Users get buffeted by platform policy decisions; we have little agency or control over our feeds. We are at the mercy of opaque algorithmic curators and shifting moderation policies on centralized social platforms. A global rule-book struggles to accommodate all cultural nuances and preferences – in 2018, for instance, as platforms were strengthening hate speech moderation rules (at times in response to civil society ref-working!), some MAGA folks chafed and Parler emerged to serve them. After Donald Trump got booted from mainstream social media platforms for inciting the January 6 riot, he established Truth Social. These platforms promised minimal moderation, catering to MAGA sensibilities. When Elon took over X, the dynamic flipped: the disillusioned left migrated to Mastodon, Threads, and Bluesky.
Exit versus voice: you can stay on a platform that disappoints you and push for change...but now you can also go elsewhere. If you hate Meta's recent changes, you can decamp to Bluesky, Reddit, or TikTok (for now?). Decentralized platforms are still small, but they give us a degree of control we otherwise don't have. I explored these tradeoffs in the Noema essay - decentralized platforms still struggle with moderation, just in different ways. But users can evaluate their options and make an informed decision about what works for them.
Anyway, please give the essay a read if you're interested in online governance and the future of social media – The Great Decentralization is here. And on the Meta changes – in addition to the Lawfare chat I linked up top, I joined PBS (5 min) and appeared as a guest on Christianity Today's podcast. Their focus on building and maintaining strong communities of support online was refreshing, and I wish it were easier to focus more on collective well-being when we engage on social media.