The AI propaganda bots are here — but did they matter?

Hi! First, thanks to everyone who’s subscribed recently. For those who don’t follow me on socials, I did Meet the Press two weeks ago to talk about election integrity in 2024 — if you found me that way, I’m glad you’re here. It was a very well-done show. The highlight for me was four Secretaries of State (2D, 2R) laying out the challenges as actual election administrators see them. Worth a watch.

I am not a frequent newsletter-er, but in the last week some interesting news broke at the intersection of generative AI and online influence operations:

[1] Meta released a report announcing a takedown of fake networks on Facebook and Instagram linked to Bangladesh, Croatia, China, Iran, and an Israeli firm named STOIC;

[2] OpenAI released a report saying that actors linked to Russia, China, and Iran, as well as the Israeli firm STOIC, have been using its tools to generate content for influence operations; and

[3] an NYT story linked STOIC’s activity to a contract from the government of Israel.

I thought it could be useful to connect some dots for folks who'd like to better understand the detection of, and responses to, influence operations.

For a few years now, people who study influence operations have speculated about when (not if) state propagandists looking to manipulate target audiences would start using generative AI to level up their game. Being able to generate text makes it cheaper and easier not only to run fake accounts, but to run better, more interactive fake accounts. My team has seen AI-generated accounts acting as reply-guys on Twitter (dba X), but what we can’t do as outside researchers is say concretely, “This is a state actor, and it generated this post.” So whether state actors had adopted the technology, or whether it was still primarily spammers, was an open question.

Therefore, it was nice to see OpenAI publicly acknowledge that this is happening, and divulge what it looks like. Meanwhile, in its report, Meta not only put out its usual quarterly information dump but specifically called for threat intelligence sharing between industry, government, and researchers (namechecking persistent Russian actors with whom it plays whac-a-mole). As in cybersecurity threat awareness, signal sharing between social media and AI companies, and potentially outsiders, can help disrupt manipulative operations more quickly, before they have time to amass any real influence or reach. Putting out threat intelligence reports with account and domain details, as Meta does, can help less-resourced companies find threats on their own platforms (or not).

One thing I appreciated about these reports coming out in quick succession is that together they highlight that there are phases of the propaganda process: in this case, creation and distribution. There are a lot of panicky takes that assume generative AI tools are a disaster for elections simply because bad actors can do what OpenAI described: churn out a bunch of fake posts very easily. But the Meta report reinforces the piece that often gets neglected: then the content has to get pickup. And in the case studies common to both reports, including one about the Israeli political marketing firm STOIC, real people largely did not engage. In another case OpenAI described, targeted audiences actually mocked the content as AI-generated. Without uptake from real people who like, share, and become unwitting distributors themselves, there is little capacity for impact.

To briefly connect the dots to story number three: the NYT piece about STOIC, the mercenary PR firm. I always like reading analyses that go deep into who was behind an operation, what it cost, and what it set out to accomplish. In this case, the $2 million operation targeted members of the United States Congress, with fake accounts reaching out to comment on, for example, antisemitism on college campuses. While it is interesting to see a government targeting a geopolitical ally in this way, my feeling on the Israel case is that this was a relatively run-of-the-mill effort. Online influence operations are table stakes now, part of comprehensive state propaganda strategies. Mercenary PR firms like this have run these operations for years, because hiring outsiders gives a government plausible deniability if the firm is sloppy and gets caught (as STOIC did, though the NYT found that Israel’s Ministry of Diaspora Affairs had ordered the operation). China and Russia, among others, target specific American politicians and have bots and trolls engage them in the comments. What was also interesting about the details in the NYT piece was that this well-paid private firm did not seem to be particularly adept at any aspect of what it was hired to do.

Anyway, bad actors are now confirmed to be using AI and social media in an attempt to manipulate the discourse — but in these first examples, it wasn’t effective.

In other news, my book is now five days from release, and The Bulwark reviewed it today (please consider preordering if you find my work useful… it really helps!). We just put up a new analysis of some Chinese and Russian activities on the SIO blog that readers might find interesting. And I did an interview with the Conspirituality podcast, which is always thoughtful and entertaining.

Thanks for the continued support, and if there’s something you think I should cover in this newsletter, please drop a line and let me know.