April 8, 2026
“I will share a recent anecdote that really struck home for me… I went to my first robot cafe,” said OpenAI CEO Sam Altman on Monday, hands gesturing blithely as he cast his now-familiar spell of personal-life transparency over his employees.
“I was so excited to try it. I thought I was going to love it. And it was the most underwhelming experience. I thought I was someone that did not need a barista at Starbucks to smile at me and say ‘Hi,’ and ask how my day was going. I really thought I didn’t care about that. Turns out that I really want that. Walking in [to the robot cafe] to push on the screen and have the robot do the thing and give you a delicious cup of coffee was, like, deeply unfulfilling. I don’t want this experience.”
Sam, I feel your pain. Or rather, I feel your nagging discord as a patron deprived of human communion. It’s exactly what I felt when I started reading the AI policy paper you and your OpenAI team released on the same day as your video chat session. I stared at the dense 5,417 words of AI utopianism titled “Industrial Policy for the Intelligence Age: Ideas to Keep People First” [PDF] and thought, this has to be AI-generated. So, I decided to check.
I must admit that even I was surprised when the result came back not 40% AI-generated, or 65%, but a full 100%. I don’t often see a 100% result. Of course, I’ve written about how AI checkers are not bulletproof (yet). But this particular checker has served me well through all the literary tests I’ve thrown at it, so I’m inclined to trust the stated result. Amid the 5,417 words, I found 37 mentions of humans or people, invariably framed in terms of how AI (mentioned about 73 times) can help those humans or people.

But let’s be practical. An AI company using AI to write its policy papers is hardly surprising. In fact, I kind of expect it. But what gave me the same twinge of discomfort that Altman felt getting his cup of java from automatons instead of humans is that OpenAI used AI to talk to humans about humans.
If there’s any moment at which you might be compelled to harness the PhDs working at OpenAI to unsheathe their well-worn essay-writing skills, you’d think it would be here. Having humans directly address the fate of humanity, and how OpenAI plans to aid its prosperity despite concerns about AI, seems only fair. But… apparently, nope.
Although I’m generally an advocate for humans writing to communicate with other humans, I am not an anti-AI refusenik. I think AI-generated text has its place in certain contexts. But when the topic involves trust, empathy, and a personal point of view on how we live with one another, I think respect for the human connection deserves more than letting an algorithm regurgitate an insipid approximation of what you might have written. If you don’t care to take the time to use your own words, why should I take the time to read them?

It’s funny because OpenAI’s policy paper opens with the title, “Let’s Talk.” It’s so conversational, so friendly-slap-on-the-shoulder familiar, that you might be prompted to grab your own cup of bean water before cozying up with the 5,417 words of presumably warm, human “let’s chat” connection. But the actual contents read less like human connection (even for a typically dry policy paper) and more like a taxonomist’s cyclopean lens peering over the splayed-out form of humanity, prescribing its fate in terms it believes the subject might accept as altruism.
It’s almost like the policy paper was rushed out to, I don’t know, respond to something? Like maybe a damning exposé in The New Yorker by Ronan Farrow and Andrew Marantz, also published on Monday, describing in excruciating detail what appears to be a long pattern of Altman’s alternating soft-touch ingenuousness and chilly mendacity.
If the fog of international war, soaring gas and food prices, and unrelenting surrealist social media news alerts weren’t enough to distract you from bad OpenAI news, maybe Altman’s “people first” treatise might be. Alas for Altman, despite running over 16,000 words, the exposé, which its writers spent 18 months researching and actually had the temerity to compose using their own brains, will be a must-read for anyone interested in OpenAI’s plans for the human race.

Which brings me back to Altman’s anecdote that I quoted at the start. His robot coffee tale was in response to a viewer of the live chat who asked, “As AI becomes more capable, which human qualities do you think will matter most in the future?”
His first thought about what we can retain in the future as human beings in the face of AI wasn’t the poetry, music, art, or literature of our greatest minds. No, that would not line up with OpenAI’s generative AI goals and a potential $1 trillion (yes, with a T) IPO later this year. Instead, Altman’s first thought about what humanity might hold onto was how nice it will be for a human to keep serving him his large shot of espresso in the morning.
History may log that moment as the existential jolt of caffeine the public needed to realize what some AI masters like Altman have mapped out for humanity in the coming decades: the gentle tyranny of lowered horizons and nominal purpose. Believe it or not, I’m usually an AI optimist. And I respect and admire my local barista more than any bartender I’ve ever known. But something in my futurist-positive metaphysical gut tells me humans aren’t going to put up with this shit. ✍︎
All editorial text is written by humans.
Cover image: OpenAI CEO Sam Altman via OpenAI Forum

