April 15, 2026
I think I’ve been in the AI trenches too long. What has seemed glaringly obvious to me after four years of wrestling with LLM chatbots has clearly not fully sunk in for a number of other experienced journalists. The fundamental understanding that you cannot yet trust AI to reliably do your research or to write for you is a lesson still being learned in real time.
FIBRUARY
In February, Ars Technica, one of the most trusted names in technology media, terminated its relationship with writer Benj Edwards after it was discovered that one of his recent articles included fake, AI-generated quotes. “I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline,” wrote Edwards, explaining what happened in the aftermath. “When the tool refused to process the post due to content policy restrictions, I pasted the text into ChatGPT to understand why… I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words.”
Edwards went on to say that the rest of the article was human-written, and that it was an isolated incident. “None of our articles are AI-generated, it is against company policy and we have always respected that,” wrote Edwards on Bluesky. “I sincerely apologize…” I absolutely believe him. Edwards has a long history of turning in solid tech journalism for the likes of PCMag, Fast Company, and Macworld, and was writing about technology long before the advent of commercial AI chatbots.
“That this happened at Ars is especially distressing,” wrote Editor-in-Chief Ken Fisher. “We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy… Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.”
Edwards, who had been writing for Ars Technica for nearly four years, clearly didn’t need AI. Nor should all of his prior work suddenly be tainted by his recent editorial stumble. He’s a real tech journalist, with real talent. But at a time when distinguishing AI-generated text and research from human-crafted journalism is one of the last bulwarks defending mainstream media’s viability and fragile reader trust, Ars Technica was compelled to protect its long-held pedigree in whatever way it saw fit.
MARCH MASK-NESS
In March, The New York Times severed ties with British freelance journalist Alex Preston after an investigation led to the writer admitting that he had used AI to write a review for the media brand. The end result of his AI use was that the chatbot pulled language from a Guardian review of the same book. Preston wasn’t some fresh-out-of-university newbie suffering from imposter syndrome. He has published six books, studied at Oxford, has a PhD from University College London, and has written for Harper’s Bazaar, The Financial Times, and The Economist. This is not what most people expect an “AI cheater” to look like.
In the same month, Peter Vandermeersch, a journalist in Europe with almost 40 years of media experience, was suspended by Mediahuis (the publisher of the Irish Independent and the Netherlands’ De Telegraaf, as well as many other brands) due to the use of AI in his work. Like Edwards, Vandermeersch was led by AI chatbots down the path of AI-generated quotes that didn’t exist. Following the incident, Vandermeersch explained what happened via his Substack. “I used AI language models such as ChatGPT, Perplexity, and Google Notebook while writing,” wrote Vandermeersch. “I was enthusiastic about the possibilities these tools offered and wanted to experiment with them extensively. Even I—with all my years of experience and knowledge—fell into the trap of hallucinations.”
APRIL FOOLISH
And just this week, Mediaite suspended one of its founding editors, Colby Hall, after Status discovered “more than a half-dozen instances… in which Hall appeared to invent stories out of thin air, fabricate quotes, or misattribute reporting to the wrong person or outlet.” Semafor also reported on the odd errors in Hall’s One Sheet newsletter for Mediaite, writing that it “has raised questions about whether its use of AI to aggregate news is leading to hallucinations.” When contacted, Hall told Semafor that “while he uses AI in a ‘limited way,’ all ‘written ideas, angles, summaries, takes, and editorial judgments are mine.’”
However, when Status contacted Mediaite’s Editor-in-Chief, Joe DePaolo, the outlet denied that AI was the issue. “We presented your findings to Colby Hall, who insists the errors were purely a result of sloppiness in how he aggregated and categorized information, not from the use of AI. Regardless, it is completely unacceptable, and Colby has been suspended from Mediaite pending further investigation.”
Like the others, Hall is a journalism veteran, with media experience dating back to the early ‘90s, and major names like MTV, VH1, HBO, and iHeartMedia on his resume. Despite his editor’s comments casting doubt on potentially AI-generated errors, the fact remains that Hall told at least one media outlet that he does use AI in his work.

MAY THE AI FORCE BE WITH YOU, CHATBOT PADAWAN LEARNER
Unlike the 2025 case of Margaux Blanchard, who fooled Wired and Business Insider as a remote writer, turning in AI-generated stories that later had to be scrubbed from the internet, the aforementioned journalists all had long and verified associations with their publishers. None of these people were anonymous scammers riding AI to cash a check. Nor were they fresh-faced new employees hoping to lighten their load with a little AI magic dust. Each one of these reporters had the skill and experience to turn in solid journalism without any AI assistance. Yet in every case, they turned to AI anyway. Sometimes sloppily, to cut corners; sometimes with the honest intent to use AI as a new digital tool in their arsenal.
I wanted to pull all of these recent AI writing stories together in one place to reinforce a message that every journalist should hold close: you cannot trust AI to do your work for you. It’s just not there yet. I remember back in 2022, when artists thought AI-generated images would continue to get the number of fingers wrong on a human hand. That didn’t last long. Likewise, there may be a time when AI is good enough that writers will be able to trust it not to hallucinate quotes and fabricate events that never happened. But for now, that’s not the case.
But that doesn’t mean you can’t use AI in your writing process. You can, if you do it carefully and correctly. My favorite analogy for where AI chatbots are in terms of research and writing is the human intern. If you’ve ever had an intern work for you, you know that most of them actually require more work from you rather than making your work universally easier. Occasionally, you’ll have a standout intern who excels and maybe even teaches you a thing or two.
Nevertheless, for managers, mentoring interns can also serve as a way to broaden your reach and perspective during each workday. However, their work must always be checked closely. If it isn’t, then you could find yourself in an embarrassing predicament. And when that happens, the LAST thing one can ever say is that it was “the intern’s fault.” That’s unacceptable, even if it was the intern’s fault. Similarly, a journalist should avoid ever putting themselves in a position of having to explain the errors of their AI chatbot intern, which hallucinates unpredictably and often quite convincingly. The dog didn’t eat your homework. It’s not the intern’s fault. And blaming AI for your decision not to check facts and do the writing yourself is simply not good enough.
Finally, you don’t have to take my word for it. Take a look at the Hughes Hallucination Evaluation Model (HHEM) above, which gauges which models hallucinate (spoiler alert: all of them), and how often. Trust AI chatbots at your own peril. Your AI intern means well, and writes the King’s English with persuasive aplomb. But are you ready to bet your career on it? ✍︎
Cover: Modified scene from the TV series ‘Ripley’ via Netflix/YouTube

