Shrimp, slop, spam: what counts as creativity?

First, hi to all the new newsletter subscribers this week! I recently had the pleasure of joining Sam Harris and Kara Swisher on their podcasts to chat about my book, influencers, and how public opinion is shaped. Both are fantastic interviewers with distinct perspectives, and I am happy more folks found this newsletter through those chats.

This week I am once again thinking about Shrimp Jesus. (Also Telegram CEO Pavel Durov, though that story broke after I'd decided to write about this.)

If you’re on Facebook or Instagram, you’ve probably seen these surreal images, whether because the platform pushed them to you, someone you know shared them as a "WTF," or an older relative thought they were real or just cool art. "Shrimp Jesus" has become a fascinating case study in what captures our attention, how we define "art" today, and who gets to decide.

I started encountering weird AI-generated images frequently around Christmas of 2023; they were increasingly being pushed into my Facebook and Instagram feeds. My colleague Josh Goldstein and I decided to do a deep dive into ~120 Facebook pages producing this kind of content, because the pages were getting millions of engagements. We uncovered clusters of interconnected pages, some of which had outright hijacked small-business pages and transformed them into AI spam factories. Some of these AI content pages ran blatant scams, like selling nonexistent products or pets, while others drove users to ad-heavy, low-quality spam domains. We wrote a short paper about how this all worked. It may sound funny that spam is a topic of academic research, but spammers are early adopters of new tech: financially motivated actors look for an edge wherever they can get it.

Alongside the pages we could clearly label as "spam" or "scam" were accounts that just churned out content. These accounts also attracted large follower counts (some of which appeared fake) and saw high engagement, posting dozens of images daily: beautiful cabins, ornately carved watermelons, impossible-to-knit sweaters. We speculated they might be building page followings to sell later, and some had subscriptions turned on. We strongly suspected the engagement was high because platform recommendation engines were boosting them via "unconnected" post curation; my feed remains flooded with suggested posts featuring AI-generated kitchens and similar content nearly nine months later.

Then 404 Media and the BBC took a step that we couldn't: they went deep on the page owners, even reaching out for comment. 404 Media's excellent deep dive confirmed that one reason this content – which they termed "AI slop" – is thriving is that it's so easy and profitable to create. Platforms like Facebook are goldmines for these creators, who earn money not just from publishing content under programs like Facebook's Creator Bonus, but also by monetizing how-to videos on YouTube and selling how-to courses on Gumroad. They're gaming the algorithm for profit while recognizing that it won't last forever, so they're also selling shovels during the gold rush.

While these deep dives into the mechanics of how this kind of content took over the feed are fascinating, there's another question to consider: why do people engage with it so much? The phenomenon of absurdist AI-generated art has since evolved into emotional TikTok videos featuring AI-generated cats in tragic scenarios, set to AI-generated mewing covers of pop ballads. These videos also rack up millions of views, sparking debate over whether this is a novel art form or just more emotionally manipulative AI slop. The distinction matters for platforms, which have to decide what to boost and what to downrank. So I had an interesting chat about it with Aidan Walker, BBC contributor and internet culture expert at Know Your Meme. He reached out to the creators of the AI cat videos, who also sell a Gumroad course teaching how to make AI cat content go viral (hint: pathos drives engagement, both human and algorithmic).

The line between art and spam seems like a question of intent: is the creator leveraging machines to create content for humans, or creating content because they know other machines will reward it? That can be tough to intuit, and I want to keep seeing the good stuff; there are some truly talented artists out there (I watched this video several times yesterday). But we've also had this debate in the context of text: content-farm vampires are not setting out to produce meaningful writing, it's not scandalous to say that, and most people would be surprised if a platform screwed up and started boosting text spam. I'm curious to hear other people's thoughts on how platforms should decide what to boost when it comes to images and video; it's a debate that's only going to get more relevant as AI continues to influence online culture.

As for other things I should share: this was another month of being on the road a lot.

  • I went to DEF CON in person for the first time since 2019, which involved meeting internet friends, seeing cool things, and giving an election integrity talk at the Voting Village. The increasing presence of not only hackers but "hackers who care a lot about policy" at DEF CON makes me happy, because they provide critically valuable insights for policymakers, election officials, regulators, and more.
  • A colleague and I presented work on the intersection of AI and child safety at a child safety conference in Dallas. This topic is very uncomfortable for many people, but it is critically important from a regulatory standpoint. "AI safety" is often in the news these days in the context of culture wars and electoral politics... while the very real impacts on women and children remain comparatively under-discussed.
  • A white paper I coauthored on the growing impact of autonomous AI agents, and how we tell who is real online, is up on arXiv as of Thursday (and got some coverage in WaPo). One of the things that struck me when I was doing some work on LLMs in 2020 was that detecting machine-generated content was going to become increasingly challenging, and that more sophisticated autonomous chatbots would make it harder to tell who was real. This very collaborative paper, with contributors across industry, academia, and foundations, looks at where things are headed and asks what kinds of privacy-protecting credentials might balance anonymity and trustworthiness online.
  • I have a new post on Lawfare today about the Iran hack-and-leak operations. If you're curious about state-actor election interference beyond the usual discussions of Russia, you might find it interesting!