No Clapping Allowed: A Social Media Free Speech Debate Without the Usual Theater

Last month I participated in an event called “Truth in Turmoil” — a debate on free expression and shared truth in the digital age, hosted by American Public Square. I was joined by Jonathan Turley, George Washington University law professor and Fox commentator, and Laura Clark Fey, privacy lawyer, with moderation by Reshad Staitieh of the Conflict Resolution Center. 

American Public Square events have a very interesting set of constraints: conversations are in front of a live audience, but they aren’t allowed to cheer or jeer. There are “civility bells” in the room, which any panelist or audience member can ring if they believe a speaker is becoming uncivil, resorting to ad hominem, etc. They have a team of live fact-checkers present—that evening, from the local law school—and a “roving reporter” who collects questions from the audience. Anyone, speaker or audience member, can request a fact-check. So their events are very interactive, stay rooted in reality, and stay civil. I found the format very interesting, and enjoyable. 

The setting—the National World War I Museum in Kansas City—was a good reminder that arguments about “dangerous speech” and propaganda aren’t new. But we did get into the specific complexities of today’s systems: bots, AI, algorithmic amplification, privately-owned platforms that feel like public infrastructure, and so on.

I posted a “highlight reel” above, but it’s hard to excerpt a 90-minute debate – especially on Ghost, which restricts embedded video size far more than Substack etc. (The full video is here on YouTube). The participants discussed: 

  • Whether content moderation equals censorship (it doesn't)
  • Whether fact-checking is censorship
  • Whether the “public square” is still public
  • Who has First Amendment rights on private platforms
  • Why people don't flock to "free speech absolutist" platforms 
  • Whether the US government ran a censorship cabal (nope)
  • What role bots and AI are playing in online speech

As expected, the sharpest disagreements were between Professor Turley and me. We don’t agree on much, though we disagreed respectfully. Turley argued that platforms should work like Ma Bell—neutral conduits that don’t make editorial choices. But that’s just not how platforms work. They are making editorial choices constantly…it is their entire business. If you want an environment free of algorithmic choices and content moderation, well, the market’s already got you covered: you can head to 8chan right now. But most people don’t, and that tells us a lot about why platforms make the choices they do.

Laura Clark Fey is not normally a content moderation/censorship brawler (she is a normal person!) and she focused on an issue that often gets lost in these debates: the power of the companies themselves. When algorithms decide who gets amplified (or who gets buried) in a feed designed to maximize attention, she asked, are we still talking about “free speech” in the same sense? How do we counter-speak against lies effectively if bots can artificially flood the zone? 

These tensions—platforms as private companies optimizing for engagement, users wanting protection from “bad stuff” but not agreeing on what that means—came up again and again. A fact-check during the event confirmed what we already knew: most users do want some guardrails. The question is which ones.

Even with live fact-checkers in the room, some claims inevitably went unaddressed in real time (for example, they didn’t get to address the take that ‘Donald Trump was banned for election falsehoods’ – no, he was banned after January 6th because of concern about further incitement of violence.) This is always one of the constraints of debate: moderators move the conversation forward, and things get left hanging. But we managed to re-attribute things that happened during the Trump administration to Trump, which is important where this topic is concerned. Government concern about addressing disinformation and “malinformation” (a term I’ve long hated…just say “propaganda”) was attributed to Biden — but the policies began in the Trump era.

The Murthy v. Missouri case was tossed for standing because there was no evidence that the plaintiffs had ever been mentioned by the government defendants, let alone that government coercion had led to a moderation decision harming them. The jawboning question is a critically important one — Prof. Turley and I actually agree on that — but when we are litigating free speech on social media, the government and the platforms do also have their own speech rights. We need to be rooting our debates in evidence-based reality.

The organizers asked each of us to lay out a “So what? Now what?” argument at the end of the debate. People who support free speech online, in my opinion, should be advocating for platform transparency; for middleware that gives users more control; for interoperability to facilitate more social media platforms with diverse rules. They should have some kind of positive vision for what an optimal system looks like — since these are actually private companies and their design has significant impact. 

Public debates and conversations matter. Getting these disagreements out in the open, particularly with fact-checking (which is not censorship!), helps us maintain shared reality and arrive at a better future. Even when it's frustrating…especially when it's frustrating.

Watch the full event here.