Shopping for Receipts: How a viral video about Somali daycares triggered federal action

Two weeks ago I wrote about the fusion of the rumor mill and the propaganda machine—how online claims get picked up by political elites who lend them credibility and turn them into political reality, whether they’re true or not. The Brown shooting witch hunt was the specific situation that galvanized an online mob.

A few days ago—before we suddenly renditioned the President of Venezuela—a story about Minnesota daycare fraud had the rumor-to-propaganda pipeline operating again at full speed: online rage was converted into federal policy within days. This case offers a unique opportunity to discuss what constitutes sufficient proof of a claim these days, and why fact-checking doesn’t do much in these situations.

Let’s talk about the phenomenon that I’ve come to think of as “shopping for receipts”.

The Minnesota Daycare Video

On December 26th, 23-year-old content creator Nick Shirley posted a 43-minute video to X and YouTube purporting to expose extensive fraud at Somali-run child care centers in Minnesota. In the video, Shirley knocks on doors. Sometimes no one answers. When someone does, he demands to see children but is never shown any. At each stop, his counterpart David announces—citing state billing records—that the operation is fraudulent. The claim is then treated as proven because the two men don’t see any children, and because the employees are unwilling to speak with them or hostile to being filmed.

The video has now been viewed over 125 million times across X and YouTube.

Within days of its release, Vice President JD Vance declared that Shirley had “done far more useful journalism than any of the winners of the 2024 Pulitzer Prizes.” Homeland Security announced that it was on the ground increasing investigative efforts. The FBI announced a “surge” in the state. The Deputy Secretary of HHS announced on X that all federal child care payments to Minnesota had been frozen (a department spokesperson later walked this back).

Firmer facts related to the situation didn’t travel as far: according to Minnesota’s Department of Children, Youth and Families, one location Shirley visited had been out of business for three years. When state regulators had conducted previous unannounced visits to the sites in the video, they found children present; children were also present at inspections made following the video. Shirley also visited at least one center at a time when it wasn’t open; a nearby member of the online community of liberal political streamer Destiny apparently drove to the location of one center and was one of the first to post the operating hours not shown in the video, offering a counter-receipt. Other media have since reported from the centers visited by Shirley.

The situation reminded me of election rumors I’ve studied over the last decade: there’s always going to be a small but nonzero amount of illegal voting (a few people across the country), as well as something that goes wrong with a machine somewhere. Those instances are leveraged to cast doubt on the integrity of the entire enterprise; the accusers repeat three-word phrases—“stop the steal!”—to convey their message, while the defenders issue detailed, nuanced explanations and denials of wrongdoing to people who are predisposed to distrust them.

Obviously fraud is real, and Minnesota has had significant documented cases. Mainstream as well as local media, in fact, had been investigating and reporting on demonstrable fraud in the state for years—including fraud related to childcare. But the purpose of this post is not to adjudicate fraud. It’s to describe the rumor-propaganda pipeline that increasingly converts suspicion into retribution before verification actually happens. Because this already happened with USAID, and with DOGE cutting random grants, programs, and “vampire” Social Security funds that it misinterpreted.

Once government action is tethered to viral content, belief careens into policy while evidence gets cut out of the process.

Evidence → Conclusion vs. Conclusion → Evidence

Investigative journalism, at its strongest, relies on a method:

  1. Start with a tip, question, hunch, or anomaly
  2. Gather records, interviews, data
  3. Test hypotheses – including the boring ones
  4. Publish what the evidence supports, plus what couldn’t be verified

The story earns its claims and, at a major publication, has to survive legal review.

Hyperpartisan influencer culture often runs the pipeline backwards:

  1. Start with a vibe or identity-based claim (betrayal, aggrievement, “we are being suppressed”, “the vaccines have athletes dropping dead”, “the election was rigged”)
  2. Decide which outgroup or villain is responsible
  3. Hunt for plausible support (screenshots, clips, “expert says,” FOIA snippets)
  4. Incorporate the audience into the assembly line (“do your own research”) and amplification process

Repetition and confidence in delivery serve as validation.

Journalism is evidence → conclusion. This is, at best, conclusion → evidence. The verdict comes first; the hunt for proof comes later.

I call this shopping for receipts: influencers start with a belief or suspicion, then go looking for artifacts that will read as evidence to people who already agree. A receipt, in ordinary life, documents a transaction—you paid, you got something, the work was done. But in this context, the “receipt” is mostly just a prop: a screenshot, a video clip, large numbers on a spreadsheet. It looks like proof but there’s no real process behind it.

That’s why this isn’t only a reversed order of operations. It’s a different information logic entirely: it’s not a conclusion so much as a belief, supported by justifications. The belief relates primarily to identity and group membership, not facts. The justification just needs to make the belief feel reasonable.

The Minnesota daycare video illustrates this difference. The beliefs—that Somali immigrants are criminals, that the money they’re getting from fraud is funding terrorism, that Tim Walz doesn’t care—were already circulating. Trump called Somali immigrants “garbage” at a cabinet meeting in December. City Journal published an unsubstantiated claim linking fraud schemes to Al Shabaab. The frame had been established—both with respect to the Somali community in particular, and to immigrants writ large.

Shirley’s video wasn’t a methodical collection and review of evidence to reach a conclusion; it was content produced in response to a belief. The “evidence”—employees reluctant to speak, buildings without visible children—clearly felt like proof to many thousands of folks commenting on X. To those outside the political faction, however, the behavior of the daycare employees seemed like a reasonable response to two strange men with a video camera showing up demanding to see children.

When a belief is socially rewarding—when it flatters your side, confirms your suspicion, humiliates your enemies, or offers a clean explanation for a messy world—the evidence doesn’t have to be strong. It just has to be available, shareable, and understandable at a glance.

When mainstream media, for example, debunked the eating-the-pets rumors in Springfield, right-wing influencers descended to find “evidence” that the media was lying. The closest they got was grainy years-old footage of unidentified animals (possibly chickens) on a grill at a house inhabited by Africans (not Haitians)—a motte-and-bailey retreat from a sweeping allegation to a niche claim that was still poorly defended. But to the community of believers, it was sufficient.

Virality as Governance

There’s another dimension here worth mentioning, and reinforcing from my last post: elite authority figures once served as a firebreak, calling for cooler heads or complete investigations. Today, rapidly jumping on a wildly viral rumor about an “outsider” harming “the people” makes an exclusionary populist government look incredibly responsive. The Administration’s actions in Minnesota—cutting off funding, stating they’ll deport fraudsters, etc.—make them appear to be listening directly to the public while mainstream journalists and state investigators look slow, procedural, and out of touch. The New York Times reports that state regulators found no evidence of fraud; the federal government acts anyway. Who looks more in touch with ordinary Americans’ concerns?

This is a mutually beneficial system. The political influencers get views, subscribers, and access. The administration gets to demonstrate that it acts on what “real Americans” care about, not what the “legacy media” prioritizes. (In this specific case it’s also worth noting that the Minnesota story pushed the Epstein documents out of the news cycle.) The base, which is an active participant in tossing out names for investigation and amplifying content, feels that it is being heard. Everyone wins—except the communities unfairly caught in the crossfire, and the epistemic norms that used to govern how allegations become action.

As more investigations get underway, there might well turn out to be fraud at the specific centers in Shirley’s viral video. If so, I’ll be glad to see charges filed and perpetrators held accountable; stopping online scammers is part of my job, and I have no patience for people who manipulate and steal. But investigations take time. Methodical, deliberative processes often feel very slow. They don’t have enough of the visceral owning-your-enemies energy that some extremely online factions have come to expect, and crave.

Shirley’s video, by contrast, delivered that.

Why Debunking Doesn’t Work

If we were in a situation where evidence → conclusion, then correcting the misperception or adding nuance might change people’s minds. But the process this video is part of runs the other way. The conclusion is upstream of the evidence—the belief preceded the video.

This is why fact-checking so often feels like bringing a knife to a gunfight. It assumes that people are working from evidence to conclusion, and that if one points out problems with the evidence, people will see the flaws in the conclusion and change their minds. But content like this is operating in the “belief → justification” realm. Swat down one justification and the belief can draw on another. The video isn’t why the audience believes Somali daycares are scams; it’s a performance of that belief.

This is why I think “propaganda” is often a very appropriate term for political influencer content. Not as a pejorative, but because propaganda is information that is shaped to produce a particular attitude or activate a particular group. It starts with a belief it wants people to hold. And it’s not always false; it often incorporates selective facts to tell a convincing story.

Tells

If you’re trying to recognize when you’re being pulled into belief-first justification content, here are some tells:

Conclusion-first framing: The verdict comes before the evidence. Shirley announces fraud at each stop because he doesn’t see children—no alternative explanation for their absence is considered.

Cargo-cult empiricism: Spreadsheets, timestamps, or “forensics” language used as aesthetics. The Shirley video cites state billing records and “big number” dollar figures, giving it the look of quantitative research without the actual investigative work of verifying or explaining what the numbers mean.

Negative-space proof: “Absence as evidence” as the core argument. No one answers the door; silence becomes guilt—even if a facility was closed at the time, or the person declining to speak appeared scared.

Motte-and-bailey: Make a sweeping claim, then retreat to a smaller, technically defensible one when challenged. In the video Shirley stated he had found $110 million in fraud; when pressed on that number since, he and his supporters have retreated to “you can make up your own mind.”

Unfalsifiability: No kids are at the daycare—fraud. Kids are there on other days—they’re being bussed in, still fraud. If regulators find fraud—belief confirmed. If they find no fraud, it’s a cover-up—belief still confirmed. When nothing can change the investigator’s mind and all counterevidence is dismissed, it’s not really an investigation. This tell takes time to assess; it remains to be seen how it plays out in this specific situation.

These tells don’t prove a claim is false—they just signal that a conclusion likely preceded the evidence-gathering. Spotting them suggests that it’s worth pausing and looking for more rigorous verification.

What’s To Be Done

The incentive structure of social media means that “shopping for receipts” isn’t going anywhere. After Shirley’s video went viral, another creator posted a video of himself knocking on the door of a child care center in Columbus, Ohio “associated with the Somali community.” Nobody answered. “We’re just getting started,” he said. Another aspiring influencer door-knocked Somali family daycares in Washington State. This isn’t a response to real signals of fraud—it’s a response to visible rewards: money, attention, and access. Shirley gained ~170,000 YouTube subscribers following his viral video. Andrew Tate asked how he could donate in support. Shirley and fellow creators had previously been warmly received at a White House roundtable by Trump, who called them “very brave patriots” and said he’d consider awarding them “some very important medals and honors.” The current Administration is highly engaged with sympathetic creators. When the message is that viral accusation is civic virtue, you get more viral accusation.

The pipeline from rumor to policy is now shorter than the time it takes to verify a claim. That’s a remarkable—and dangerous—shift. People have always believed rumors, but now government elites ride them when they provide justification for what they wanted to do anyway. “Shopping for receipts” isn’t just a rhetorical move—it’s a substitute for real inquiry.

We still get to choose how we respond. The reality is that a viral video does sometimes force accountability that slower, more cautious processes fail to achieve—particularly if the public clamors for action. The challenge is incentivizing content that stays rooted in reality. The goal is to separate the signal from the noise: harness the accountability that viral exposure can produce, while rejecting the belief-first propaganda that often comes with it.

I plan to write a lot more about the dynamics of modern propaganda here this year…so, if you found this newsletter informative, please share it with friends or family who might be interested in the topic. 🙂

Happy 2026!