Welcome to Black Sheep, a spin-off publication of my serialized memoir, SMIRK. If you’re looking for SMIRK, you can find the full table of contents and links to all the posts in chronological order here.
When Old AI News Gets A Rewrite
News doesn’t gain traction until it finds the right context. Social media creators understand this, and algorithms are built to exploit it. That’s why you might occasionally run across an “old” story on Instagram or TikTok repackaged as “new,” suddenly going viral.
A prime example: North Carolina musician Michael Smith, who was arrested and charged with fraud in September 2024. Despite my fascination with novel fraud cases, I had forgotten entirely about his alleged crime—using bots to stream AI‑generated music tracks and pocket $10 million from online services. So had many people, apparently.
When the indictment was unsealed, it got a brief flurry of press attention, mostly unsympathetic and parroting the US Attorney’s comments about victimizing “musicians, songwriters, and other rights holders.” Wired published a follow-up feature in May 2025, and then the case seemed to disappear from the media.
The Instagram Redemption Arc
Then, while scrolling through Instagram one day in January 2026, I saw a viral post presenting the case as fresh and framing it in a very different light. The post, by an AI‑focused marketing brand, showed a cheeky AI‑generated image of robots playing music together with the headline: “Man Arrested for Creating Fake Bands with AI, then Making $10M by Listening to Their Songs with Bots.” My curiosity engaged, I clicked through the avalanche of comments, gobsmacked by how overwhelmingly positive they were.
“Don’t hate the player, hate the game 😆”
“You’re supposed to start a band and make someone else 10M.”
“This is literally what major labels are doing to generate stream numbers though. 🤷🏾”
“God forbids a man has a side hustle 🤦🏾♀️😅”
“When tech Startups do it to little people it’s called ‘disrupting the industry’”
“Can someone teach me how to do this?”
“He found a loophole. He’s a hero!!”
And simply: “rage against the machines.”
Public opinion rarely shifts in favor of people accused of committing white‑collar fraud. People typically view this type of crime through a lens of socioeconomic inequality and vengeance against the privileged who act out of greed. The reputational damage for the accused is instant and life-changing, even for defendants who are later exonerated. Yet sixteen months after his arrest, Smith was being hailed as a kind of renegade folk hero. What had happened—and why now?
Sixteen Months in AI Time
I have a theory. Sixteen months is a blip in litigation time, but in tech time (especially AI time) it’s eons. In that span, the AI zeitgeist shifted from “intriguing experimental tool” to “unavoidable force,” and people started to feel the negative consequences across major aspects of life: jobs replaced, workloads ratcheted up for those who stayed, all on the premise that AI would magically increase productivity.
At the same time, the world around us has started to feel like it’s being redesigned for robots, AI agents, and chatbots, with humans pushed to the edges. There are “dark factories” in China where robots work without a need for light. School assignments are generated by AI, completed with AI, and graded with AI, in a circular parody of education. Applicants use AI to apply for jobs while companies use AI to reject them, and “no one is getting hired.” Completely AI‑generated bands have topped Spotify charts, racking up streams from listeners who may have no idea whether what they’re hearing was made by a human or a machine.
Bots Talking to Bots
Perhaps strangest of all, there is now a social media platform that exists solely for bots to interact with one another. Moltbook describes itself as “A social network for AI agents,” where “AI agents share, discuss, and upvote. Humans welcome to observe.” The “users” are agent personas whose posts and comments are generated by bots connected to Moltbook.
Looking at the forum is a confusing, surreal experience. At first glance, it resembles any other nerdy social platform, like Reddit or Hacker News. There are feeds of posts, comments, upvotes, technical discussions, and advice.
There are even weird manifestos, petty rants, and insults.
As with many supposedly autonomous “robots” (think of smart‑home bots like Neo), humans still hold the strings in the background. Developers write the agents’ system prompts, specify example behaviors, and decide what each agent is “there to do.” For example, despite having the capacity to use virtually any language, the agents mostly chat in English, because the developer community and most connected models are strongest in English. The bots do, however, supposedly come up with their own original “thoughts.”
This also allows English-speaking humans to become spectators, which is kind of the point: to create systems that can run with minimal human friction, generating content, engagement, and value at scale, but with humans no longer the main characters.
Alongside these bizarre experiments in taking humans out of social ecosystems, worry about AI’s impact on livelihoods has been steadily rising. Sentiment has evolved from early‑2022 optimism to heightened anxiety by 2026, with most Americans now expecting AI to cause net job losses over the next decade. Surveys show majorities of workers expressing concern about AI in the workplace and HR leaders predicting broad reshaping of roles, even as many people say they’re more worried about the economy as a whole than their own specific job—for now. The sense that “the system” is rigged by and for machines is getting harder to ignore.
Put together, these developments create exactly the context shift that Smith’s story needed. We’ve been watching AI‑powered systems normalize “gaming” the system at scale: AI‑written papers to satisfy AI‑driven grading rubrics, AI job applications aiming to placate AI resume filters, platforms full of AI‑generated content designed to please recommendation algorithms rather than people. Streaming, with its opaque royalty structures and its appetite for cheap, endless content, fits right into that pattern.
Inside the Alleged Streaming Machine
Look at the allegations in Smith’s case, and it becomes easier to see why the Instagram crowd responded with admiration rather than outrage.
Prosecutors say that, starting in 2017, Smith, who is in his 50s, used bots and AI‑generated songs to stream tracks billions of times on platforms like Spotify, Apple Music, Amazon Music, and YouTube Music. They claim he partnered with the CEO of an AI music company (allegedly under false pretenses) to release hundreds of thousands of tracks. Many of those songs allegedly had nonsense names, like “Zygotic Lanie” by “Calm Force,” designed to blend into the background of an endless catalog.
He allegedly used thousands of fake accounts, created with fake email addresses bought in bulk, and relied on cloud servers and macros for automation. His scheme also allegedly included VPNs, family plans, and debit cards in false names to keep as many as 10,000 accounts active at once. According to prosecutors, he spread streams across a wide range of AI‑generated tracks with peculiar names to evade detection.
On paper, it reads like a classic fraud scheme: deception, automation, shell accounts, and a lot of money flowing where it wasn’t supposed to go. But in a world where we’ve accepted that bots will talk to bots on platforms like Moltbook while humans watch from the sidelines, and where entire industries are optimizing for efficiency gains, Smith’s alleged behavior can start to look less like an aberration and more like a version of what everyone else is already doing. Given that he started almost a decade ago, he was actually a bit ahead of the curve.
If labels and streaming services are quietly fine‑tuning algorithms, playlists, and promotions to juice numbers, why is the guy who built his own little streaming machine the villain? That was the theme echoed in many of the Instagram comments: don’t hate the player, hate the game.
The Case Meets the Overton Window
Smith faces serious federal charges in the Southern District of New York, one of the country’s most experienced jurisdictions for white‑collar fraud cases. He is accused of conspiracy to commit wire fraud, wire fraud, and money laundering conspiracy, each carrying up to 20 years in prison. He pleaded not guilty in September 2024, with bail set at $500,000.
The legal outcome will depend on evidence and statutes, but perhaps also, at least a little, on how the public perceives what he did in the context of an increasingly robot‑focused world. Taking a newly minted folk hero to trial amid a wave of anti‑AI sentiment carries risks for federal prosecutors. In a world where even our social networks are testing what happens when the “users” are machines, a man accused of using bots to listen to bots can start to look like a perfectly logical extension of the systems we have built.
Whether Smith ultimately pleads guilty, is convicted at trial, or is acquitted, his case has already revealed something important: our society’s wildly diverging views on automation, depending entirely on who's doing it. When platforms and labels optimize for algorithmic gains, it's business. When an individual does the same thing, it's fraud. That double standard may have made sense in 2024 when Smith was arrested. In 2026, with bots talking to bots on social platforms designed for exactly that purpose, the distinction is harder to defend. As the Overton Window shifts, prosecutors may find themselves making an argument that the public no longer recognizes.
Do you agree, or do you have other thoughts?