April 29, 2026
We are truly in a wild media landscape when an attempted presidential assassination occurs, and just three days later, it seems like everyone has already moved on to King Charles’ U.S. visit, the latest gas prices in relation to the Iran war, and when the White House Correspondents’ Dinner will be rescheduled so the champagne and ball gowns can once again flow. The fog of chaos has overtaken us as it did in 2020 during the riots, lockdowns, and looting throughout the U.S.
In fact, I don’t think we ever shook off that miasma of simmering discord. It just morphed and changed shape, without ever fully releasing its grip around our drama-fatigued minds. In my view, the only cultural touchstone that has effectively managed to frame this rolling U.S. cultural schism is the 2025 film Eddington. Directed by Ari Aster and starring Joaquin Phoenix, Emma Stone, and Pedro Pascal, the film is set in 2020 and uses an AI data center as the looming meta-antagonist that sends a small New Mexico town into technology-fueled mania that leads to random acts of violence.
By the end of the film, violence becomes the common language, indistinguishable from ideological positions or personal motives. Chaos becomes normalized. This cinematic parable was so truthful and honest about who we are right now that the Academy Awards didn’t even bother to nominate it. The solipsistic slapstick revolutionary fantasy of One Battle After Another and the parochial period horror of Sinners were disconnected enough from 2026 to make them far more palatable award winners. But Eddington was us in 2020, and it’s us in 2026. The unraveling never stopped; we just got used to it.
Righteous protests over police brutality mutated into looting, and the looting gradually transitioned into group smash-and-grab robberies in places you’d never expect, in broad daylight. In turn, those have given birth to “teen takeovers” where hundreds of teens march through cities doing, well, whatever they want.
And now it seems we have entered the era of political violence, once again. And somehow, depending on your political affiliation and the target, the incidents are often cast as kind of “no big deal.” Outrage against one’s enemies is now casually meme-coded as fodder for social media quips and podcast humor. A part of me wonders if this is what it must have been like in the 1960s and ‘70s. That period offered a fascinatingly familiar cycle of cultural and political tumult during which foreign wars, racial issues, religion, and gender rights all boiled over into various, sometimes force-backed conflicts in American cities. But that all happened without the lighter fluid of social media to stoke the embers of right-leaning, left-leaning, and everyone-in-between-leaning citizens with rage bait until they finally acted out their philosophical viewpoints as real hostility.
We are essentially re-living that history on different fronts, but with vision blurred by the haze of confusion the pandemic kicked up, and pressured by unrelenting inflation and a seeming habitual mass layoff dance—now pre-blamed on AI—that started in 2023 and hasn’t stopped. Now take all that and add a dose of nearly every major AI CEO telling the public they’re doomed because most white-collar jobs will soon be automated, and the blue-collar jobs are next when the robotics are ready. The message: Start planning how you’ll either subsist on Universal Basic Income (if you’re lucky), make gig-work work, or somehow get rich before all the jobs are gone.
The Warning Signs
I’m regularly engaged with traditional visual artists, musicians, and filmmakers who, to varying degrees, either passionately hate AI or are trying to figure out how to use it to stay conversant in the next phase of the tech-enabled creative landscape. But these people are not the majority of our society. Our artisans are a relatively small cohort. Their anger is being mediated and, in some cases, outright ignored, depending on your position. However, the general public has begun to come to grips with AI, and many of them are worried. Some are even angry.
As you might guess from the topic of these weekly writings, I’m still a tech optimist. That’s why the weekly roundup is called Man “AND” Robots, not Man “Versus” Robots. I think there’s a way to make AI work for us in a way that adds to, not subtracts from, who we are as individuals and a society. But I exhausted my own AI angst long ago and rounded the bend without crashing out. I am not, however, convinced that the general public will make that same AI-as-crisis-to-possible-augmentation journey without first introducing the space to some of the aggression we’re seeing in other parts of our lives now. I sincerely hope I’m overstating this. I’ve never wanted something I’ve written to be wrong as badly as I do now.
Still, the signs are there. “Fuck AI” is now cool to say in some entertainment circles. Children sling insults against poor efforts by calling them “AI slop.” But what happens if some take that to the next step? Maybe a data center vandalized. Which some readers might applaud. Maybe a hacker movement dedicated to foiling all corporate AI, leaving only open source AI and government AI viable. Sounds like a William Gibson speculative-fiction novel…until it happens. Again, some readers might applaud such a thing. But what happens when, say, some AI CEO has her home attacked (of course, I know that has already happened), or experiences something even more direct? Is someone’s anti-AI stance and their belief about what it may do to our human future enough to justify such a thing? My opinion, in 2026, is no. But we are in a historic moment where the course of humanity is being decided, and some people have framed AI in those terms, and may feel the need to do unorthodox things to address the situation.
It might sound overly dramatic and hand-wringy, until you remember:
- AI company Palantir just posted a political treatise on how society should work, and it intends to prosecute that somewhat authoritarian vision.

- This week, two of the most powerful men in AI, Elon Musk (xAI, Tesla, SpaceX) and Sam Altman (OpenAI), started a court case that is essentially over who gets to profit from AGI if it ever arrives and, in the meantime, who gets to reach a trillion-dollar valuation first on the strength of AI reshaping the American workplace.

- Former presidential candidate Bernie Sanders, the senator from Vermont, has reshaped his entreaties against the “1% billionaires” into emergency-level calls for AI regulation. Tonight, Sanders will hold a solemn public forum titled “The Existential Threat of AI,” with several AI researchers, including Max Tegmark, an MIT physics professor who has said that scaling AI rapidly will “cause humanity to lose control of artificial super intelligence” and give rise to an Orwellian “non-human Big Brother.”

- Legions of AI-using students are completing university degrees, and these would-be workers have AI’ed their way out of college and into entry-level jobs that are vanishing because of the very tool they used.
I could go on, but you get the picture. The tension is high, and it’s only rising.
The AI Wars to Come?
Which brings me back to how I started this entry—talking about the alleged White House Correspondents’ Dinner assassination attempt by Cole Allen. When I wrote the story over the weekend, I spent hours looking into Allen’s background. This man is a legitimate technology expert. He built prize-winning robots during his undergrad studies at Caltech and went on to complete his master’s degree in computer science just last year. Yet with all of that going for him, something about this moment in history led him to a place that could have ended in tragedy for a number of families.
Some of Allen’s Caltech colleagues went on to work at organizations like Google, NASA, and Raytheon, places he might have ended up himself at some point, given his technical acumen. Despite Allen’s social-media-casual manifesto, we may never have a truly reliable answer as to exactly why he did what he did. But I suspect that part of his story involves the influence of the constant, frenzied hum of outrage, on all sides, that is stoked daily on social media. Online discourse, be it in private chatrooms, video channels, or public threaded forums, can result in, and has resulted in, real-world violence. People like to say “the Internet isn’t real.” I used to agree. Today, I disagree. The Internet is real when we make it real. Increasingly, some of us are deliberately making what happens on social media bleed into real-world events.

I say all that to say that I can see the AI violence train heading toward us. Some of the anti-AI people I know absolutely have the right idea and the best of intentions. The people I know of, though, are largely operating by engaging the political process, enlisting corporate and public support, and advocating loudly on large platforms for their positions. But not everyone is wed to process and orderly institutional-meets-cultural conflict. I believe that others will increasingly view AI as an existential moment of human peril that transcends the rule of law. And some will act accordingly. I don’t wish it so; I am merely looking to our history as a species for clues, and the hints are difficult to ignore.
When the tech elite at the leading AI companies are publishing government-tinged polemics and regularly visiting podcasts to tout their political views (remember, this is a new thing; tech moguls used to keep their politics limited to their weekend poker groups or semi-discreet lobbying groups), should we be surprised when we eventually find a group of similarly technically brilliant computer scientists and their friends on the other side of the AI argument? There is a widely reviewed book titled If Anyone Builds It, Everyone Dies. It doesn’t matter that many AI experts dismiss its co-author Eliezer Yudkowsky as too doom-obsessed, especially when Anthropic’s CEO, Dario Amodei, sounds almost as dire in his predictions.
Where does all that ominous “AI will destroy us” talk lead? To peaceful human protest? Maybe. Maybe not.
Right now, this may all sound a bit fantastical. And hopefully, it will turn out to be. Maybe AI will resolve into just another tool of efficiency operating in the background as humans continue their lives as before, just AI-augmented and higher-performing. But if we do begin to see violent AI revolt, it won’t be anything like the Luddite rebellion of 1800s Britain. The perceived stakes will be much bigger than thinking man versus inert machine. The call to arms will rally against the notion of allowing machines to think for us and, by extension, allowing corporate leaders to think for us. Existential indeed.
It’s interesting that most of the robot apocalypse stories from novels and films cast the agent of conflict as AI or AI-powered robots. The reality, it seems, is that the only truly near-term robot war will likely be fought human-against-human, with AI on the sidelines collecting the data. Now might be the time to think ahead and consider what happens to even that data after the AI wars are done.
That consideration may actually lead to an answer to the biggest question swirling around the AI debate: Should an algorithm be allowed to dictate any aspect of human life, even if the data is accurate and seemingly helpful? Is chaos, away from algorithmic order, an inviolate part of the human experience that must be protected, even at the expense of potential efficiencies? Or will we choose an algorithmic existence, ornamented with just enough engineered randomness to feel human, and accept the loss of chaotic choice it entails? I hope we can settle all of this peacefully. But human history doesn’t point in that direction. ✍︎

