Writing

I regularly write about influence operations, trust & safety challenges, adversarial abuse, and technology policy in outlets such as The Atlantic, Noema, and Wired. My research, writing, talks, and data visualizations have been featured by media outlets including The Wall Street Journal, The New York Times, The Washington Post, CNN, CNBC, Bloomberg, Yale Review, Fast Company, Politico, TechCrunch, Wired, Slate, Forbes, BuzzFeed, The Economist, Journal of Commerce, and more.

I also publish academic papers and commentary in peer-reviewed journals, primarily focused on propaganda, rumors, and disinformation, and also on a range of adversarial trust & safety issues. Those can be found on my Google Scholar, ORCID, or Stanford profiles.

Here are some selected essays and papers.

The New Media Goliaths | NOEMA
The internet has allowed independent creators to thrive, finding niche audiences for everything from nudes to salad recipes. But it’s also spawned silos that incentivize propaganda.
How Online Mobs Act Like Flocks Of Birds | NOEMA
A growing body of research suggests human behavior on social media is strikingly similar to collective behavior in nature.
Free Speech Is Not the Same As Free Reach
Bad faith politicking about the way search algorithms work makes it harder for tech companies to solve the real problems.
The Feed
A series by Renee DiResta on technology in politics, influence, propaganda, and such.
It’s Not Misinformation. It’s Amplified Propaganda.
You don’t need fake accounts to spread ampliganda online. Real people will happily do it.
Institutional Authority Has Vanished. Wikipedia Points to the Answer.
The crowdsourced reference site can teach the CDC how to communicate in an era of rumors and shifting information.
Anti-vaxxers Think This Is Their Moment
Society’s well-being depends on how well public-health officials and average internet users combat misinformation.
The Supply of Disinformation Will Soon Be Infinite
Disinformation campaigns used to require a lot of human effort, but artificial intelligence will take them to a whole new level.
Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations
Generative language models have improved drastically, and can now produce realistic text outputs that are difficult to distinguish from human-written content. For malicious actors, these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations. This report assesses how language models might change influence operations in the future, and what steps can be taken to mitigate this threat. We lay out possible changes to the actors, behaviors, and content of online influence operations, and provide a framework for stages of the language model-to-influence operations pipeline that mitigations could target (model construction, model access, content dissemination, and belief formation). While no reasonable mitigation can be expected to fully prevent the threat of AI-enabled influence operations, a combination of multiple mitigations may make an important difference.
Social Media, Freedom of Speech, and the Future of our Democracy
One of the most fiercely debated issues of this era is what to do about "bad" speech on the internet (hate speech, disinformation and propaganda campaigns, and incitement of violence), and in particular speech on social media platforms such as Facebook and Twitter. In Social Media, Freedom of Speech, and the Future of our Democracy, Lee C. Bollinger and Geoffrey R. Stone have gathered an eminent cast of contributors (including Hillary Clinton, Amy Klobuchar, Sheldon Whitehouse, Newt Minow, Cass Sunstein, Jack Balkin, Emily Bazelon, and others) to explore the various dimensions of this problem in the American context. They stress how difficult it is to develop remedies given that some of these forms of "bad" speech are ordinarily protected by the First Amendment. Bollinger and Stone argue that it is important to remember that the last time we encountered major new communications technology, television and radio, we established a federal agency to provide oversight and to issue regulations to protect and promote "the public interest." Featuring a variety of perspectives from some of America's leading experts on this hotly contested issue, this volume offers new insights for the future of free speech in the social media era.
The Tactics & Tropes of the Internet Research Agency
Upon request by the United States Senate Select Committee on Intelligence (SSCI), New Knowledge reviewed an expansive data set of social media posts and metadata provided to SSCI by Facebook, Twitter, and Alphabet, plus a set of related data from additional platforms. The data sets were provided by the three primary platforms to serve as evidence for an investigation into the Internet Research Agency (IRA) influence operations. The organic post content in this data set has never previously been seen by the public. Our report quantifies and contextualizes IRA influence operations targeting American citizens from 2014 through 2017, and articulates the significance of this long-running and broad influence operation. It includes an overview of Russian influence operations, a collection of summary statistics, and a set of key takeaways that are then discussed in detail later in the document. The document includes links to full data visualizations, hosted online, that permit the reader to explore facets of the IRA-created manipulation ecosystem. Finally, we share our concluding notes and recommendations. We also provide a comprehensive slide deck containing a wide array of selected images drawn directly from the data set that illustrate our observations, and, as an appendix, a comprehensive summary of relevant statistics related to the data set. Broadly, Russian interference in the U.S. Presidential Election of 2016 took three distinct forms, one of which is within the scope of our analysis: ... 3. A sweeping and sustained social influence operation consisting of various coordinated disinformation tactics aimed directly at US citizens, designed to exert political influence and exacerbate social divisions in US culture. This last form of interference, a multi-year coordinated disinformation effort conducted by the Russian state-supported Internet Research Agency (IRA), is the topic of this analysis.
Telling China’s Story: The Chinese Communist Party’s Campaign To Shape Global Narratives
Well-resourced countries have demonstrated sophisticated abilities to carry out influence operations in both traditional and social media ecosystems simultaneously. Russia, China, Iran, and a swath of other nation-states control media properties with significant audiences, often with reach far beyond their borders. They have also been implicated in social media company takedowns of accounts and pages that are manipulative either by virtue of the fake accounts and suspicious domains involved, or by way of coordinated distribution tactics to drive attention to certain content or to create the perception that a particular narrative is extremely popular.
Potemkin Pages & Personas: Assessing GRU Online Operations, 2014-2019