May 6, 2026
My friends have stopped sending me news headline screenshots. Not because they don’t have things to share. There’s plenty we still share over DMs every week. But I suspect the dearth of news headlines DM’ed to me in the last few months is my fault. You see, I have this annoying habit of actually looking up what they send me. I’m not trying to be annoying, it’s just that these very intelligent people nevertheless often send me items so outrageous, so this-could-be-The-Onion, and so devoid of sourcing or links, that I have to look them up.
“Wow! This is wild. Is this real?” I text back…right before my irksome journalist’s muscle twitches, and I’m prompted to go look for myself. Frequently, I’ve found myself telling some friends that the “news” they shared wasn’t real and came from some social media influencer spreading engagement-goosing rage bait, or was seeded by one of a growing number of high-follower-count social media accounts or websites cosplaying as legitimate news sources.
My DMs are now less entertaining. Sure, friends still send me news articles, but now only occasionally. And, I assume, only sent after they’ve vetted them as true, or from a legitimate source. It wasn’t my intention to inadvertently train them to stop having fun on social media; rather, I’m doing everything I can to keep information slop from entering my own media diet, and it’s really difficult. AI has made this “clean media eating” even more arduous.
What’s In A Name? Increasingly, Everything
That’s why I found a new story about McClatchy and AI particularly timely. Journalists working at the publisher that produces The Miami Herald and The Sacramento Bee, as well as scores of others across the U.S., are withholding their bylines in protest against an internal AI tool that summarizes their work. The catch? They’re being asked to attach their names to these AI summaries. “We don’t want to put our bylines on stories we did not actually write even if they’re based on our work,” Ariane Lange, a reporter at The Sacramento Bee, told the New York Times. “That in itself feels like a lie.”
According to McClatchy, the tool is designed to allow the publisher to produce more news stories in hopes of snagging more subscribers, and thus increasing revenue. “We need more stories, and we need more inventory,” Eric Nelson, vice president of local news for McClatchy, reportedly told staff. “Journalists who embrace and experiment with this tool are going to win… Journalists who are defiant will fall behind.”
This seems to be the growing sentiment in many of the largest newsrooms around the U.S.: adopt AI, no matter how it’s deployed, or you’re not being innovative and tech-forward, and are perhaps not a fit for our publishing plans moving forward. On some level, I’ve considered myself fairly AI-positive in most respects, even in media. But the more I’ve experimented with AI over the years with regard to text, the less I’m convinced that it should be trusted with the task of speaking for humans. Beyond the nuance of voice, there’s the still-nagging issue of hallucinations, which continue to cause trouble for everyone from journalists to experienced attorneys attempting to save time.
However, the thing that really jumped out at me in the McClatchy situation was the byline issue. For journalists, one’s byline is possibly the only sacrosanct thing the writer owns. It confers trust, reputation, identity, the human history of that person, and, most of all, accountability. Asking a journalist to affix their name to a chatbot’s output is not only breaking the reader-journalist chain of authority, it essentially prepares a publisher like McClatchy to eventually do away with human reporters in all but the most essential situations.
If you’re not a media nerd, I understand this may all seem like a lot of hullabaloo about nothing. “Who cares, as long as it’s accurate, I don’t care where or who it comes from,” some might say. But that’s exactly it. The thing about media and journalists that some fail to grasp is that trust and provenance are the very bones of what makes media relevant and worth reading. Knowing and respecting the source of news is what separates random gossip you might dismiss from actual news you may need to act upon.
Down in the Dregs of Data Dystopia
I love YouTube and Reddit. You can learn a lot from them if you filter well. But one of my pet peeves has been the normalization of people sitting in their bedrooms, turning on a camera, reading the Internet, and then speaking the words, “Last week, I REPORTED on what’s happening with [insert city/company/person]. We went over a lot of detail…” No. That’s not what happened. What happened was you read an article on a website, or on social media, and gave your OPINION on a story someone else did the work to report to the public. Sometimes you, dear influencer, even read, verbatim, every line of a reporter’s article in front of the camera as if that, voilà!, was reporting the story.
Initially, I decided that getting irritated by this new way of using a reporter’s work, often without ever mentioning their name, was futile, and I pushed it to the back of my mind. Then, over time, I noticed that more and more of these social media influencers were reporting stories from sources that weren’t actually legitimate news sources. These influencers began citing gossip tweets as sources, treating Reddit threads as statements of fact, relaying secondhand chat sessions from yet another influencer, or reading from some SEO-optimized clickbait site written in English but originating from somewhere in Asia or Eastern Europe with no masthead, office address, or editorial names, just a crude but effective echo of what a legitimate news site might look like.
The lines between what is true and what is potentially nonsense had been blurred. And then came AI. Now we’re contending with influencers who have 2 million followers consuming and regurgitating AI-generated, often hallucination-riddled “news” from websites and social media accounts that increasingly have as many followers, and as much social media virality, as true news outlets, if not more. The hall of mirrors that was fueled by social media influencers appointing themselves “journalists” in their process of re-reading the latest colorful headline has now become an entire landscape of mirrors where truth and fabrication often bounce off one another daily. And these “news sources” are consumed rapidly and haphazardly, depending on the discernment of the reader or the popularity of the influencer the reader follows.
You might think the situation is hopeless. That we might be doomed to a perpetual tsunami of suspect information slop unless or until the government steps in and somehow regulates the new media landscape (which would be another kind of free speech nightmare). Or maybe the whole new digital infra-slop-structure buckles under the strain of too much LLM-generated flotsam and jetsam and just collapses in on itself. Maybe. Or maybe not…
A Child Shall Lead Them Back to the Truth
A new study called “The Evolving News Landscape: Comparing Media Habits and Trust Between Teens and Adults,” published last week by The Media Insight Project, sheds new light on how younger audiences are engaging with news in 2026. Specifically, the study found that a majority of teenagers (13 to 17) get their information from social media influencers or creators, but those same teens place far more confidence in local and national news than in AI chatbots.
The paradox here is that many of these teens don’t realize that the AI chatbot news they place so little confidence in is exactly where many social media influencers are getting their news from, either directly or secondhand. Even a blog post of a viral tweet is no longer necessary. I’m now regularly finding otherwise intelligent, educated influencers conducting live streams where they use Google Gemini or ChatGPT in real time to fact-check something they’ve just said. And when the AI tells them a “fact,” they look into the camera and give the audience confirmation, as if the truth has been settled.

Perhaps the most famous recent culprit of this kind of behavior is Joe Rogan, who now laces his episodes with quips like, Hey Jamie, check our sponsor Perplexity, and see if what I just said is true. Rogan doesn’t do this occasionally; he does it constantly throughout his most recent episodes, only occasionally acknowledging that AI isn’t 100% accurate at all times. Such caveats don’t really matter when you bake in an AI chatbot as the arbiter of truth on your show for a myriad of facts and figures regarding everything from politics to science to history.
Using AI chatbots as a source of fact-checking is seductively easy. Almost as easy as it used to be to Google. But with Google, people had at least developed the habit of saying, “But where did you find that on Google? Was the website reputable? Who came up with that information?” In contrast, AI chatbots lull users into false confidence that whatever they spit out is fact. Which, of course, is not the case (at least not in 2026).
The silver lining here is that while older AI users are trying to remain engaged with the cutting edge of technology, no matter what, it turns out that at least a significant portion of Gen Z and Gen Alpha news readers still trust humans for news more than AI chatbots. Which is why the McClatchy situation and others like it are important. In the same way that many social media influencers wrap AI chatbot information in a human presentation wrapper on video to transmit trust signals to their audiences, publishers are now asking journalists to put their names on AI chatbot summaries to accomplish the same AI trust transaction. These publishers are asking humans to drape AI outputs in the trappings of human insight, nuance, and credibility. In effect, in exchange for a paycheck, these journalists are being asked to rent out their human name to an algorithm in order to engender the trust of readers.

I’ll let you make your own moral calculations as to what this new business model means broadly across other industries, but on a pure business level, this is a bad deal for journalists. Being human still means something. It still has worth, no matter what efficiencies any corporation might idealize otherwise. Asking a person to allow a machine to adopt their human name, in exchange for a salary, with no residual compensatory rights that might later pay that human for the ongoing use of their human-mask-byline, is just bad business for journalists.
Publishers engaging in this know what they’re doing, which is why they’re even trying to use human bylines. They know that humans still trust humans and that we appreciate the human layer. Likewise, many social media influencers aren’t wed to the ethical processes many journalists are taught early in their careers, so the idea of interrogating the veracity of information before spreading it is often lost on them. But when not discombobulated by social media white noise and the false urgency of manufactured virality, the public generally intuits the necessity of knowing where information comes from and who it comes from.
So while some Gen X and Millennials have sleepwalked their way into AI-information-as-legitimate, largely to maintain the profile of not being Luddites or, egad, old, younger readers seem primed to be a bit more discerning. According to the latest data, these younglings, who, by the way, happen to be the most authentically digital-native of all, have low trust in AI chatbots when it comes to real information. Now, if we can just get the older people still running some of these media organizations to understand this, maybe they’ll stop trying to get human journalists to loan their hard-earned names, and possibly careers, to AI, and we can focus on actually innovative ways to use AI technology to augment humans rather than replace them. ✍︎
All editorial text is written by humans.
Cover: A modified scene from the film ‘A.I. Artificial Intelligence’ (2001) via Warner Bros.

