Whitbrook, a deputy editor at Gizmodo who writes and edits articles about science fiction, quickly read the story, which he said he had not asked for or seen before it was published. He catalogued 18 “concerns, corrections and comments” about it in an email to Gizmodo’s editor in chief, Dan Ackerman, noting that the bot put the TV series “Star Wars: The Clone Wars” in the wrong order, omitted any mention of shows such as “Star Wars: Andor” and of the 2008 film also titled “Star Wars: The Clone Wars,” formatted movie titles and the story’s headline inaccurately, repeated descriptions, and contained no “explicit disclaimer” that it was written by AI beyond the “Gizmodo Bot” byline.
The article quickly prompted an outcry among staffers who complained in the company’s internal Slack messaging system that the error-riddled story was “actively hurting our reputations and credibility,” showed “zero respect” for journalists and should be deleted immediately, according to messages obtained by The Washington Post. The story was written using a combination of Google Bard and ChatGPT, according to a G/O Media staff member familiar with the matter. (G/O Media owns several digital media sites including Gizmodo, Deadspin, The Root, Jezebel and The Onion.)
“I have never had to deal with this basic level of incompetence with any of the colleagues that I have ever worked with,” Whitbrook said in an interview. “If these AI [chatbots] can’t even do something as basic as put a Star Wars movie in order one after the other, I don’t think you can trust it to [report] any kind of accurate information.”
The irony that the turmoil was happening at Gizmodo, a publication dedicated to covering technology, was undeniable. On June 29, Merrill Brown, the editorial director of G/O Media, had cited the organization’s editorial mission as a reason to embrace AI. Because G/O Media owns several sites that cover technology, he wrote, it has a responsibility to “do all we can to develop AI initiatives relatively early in the evolution of the technology.”
“These features aren’t replacing work currently being done by writers and editors,” Brown said in announcing to staffers that the company would roll out a trial to test “our editorial and technological thinking about use of AI.” “There will be errors, and they’ll be corrected as swiftly as possible,” he promised.
Gizmodo’s error-plagued test speaks to a larger debate about the role of AI in the news. Several reporters and editors said they don’t trust chatbots to create well-reported, thoroughly fact-checked articles, and they fear business leaders want to thrust the technology into newsrooms without sufficient caution. When trials go poorly, they argue, the fallout damages both employee morale and the outlet’s reputation.
Artificial intelligence experts said many large language models still have technological deficiencies that make them an untrustworthy source for journalism unless humans are deeply involved in the process. Left unchecked, they said, artificially generated news stories could spread disinformation, sow political discord and do lasting damage to media organizations.
“The danger is to the trustworthiness of the news organization,” said Nick Diakopoulos, an associate professor of communication studies and computer science at Northwestern University. “If you’re going to publish content that is inaccurate, then I think that’s probably going to be a credibility hit to you over time.”
Mark Neschis, a G/O Media spokesman, said the company would be “derelict” if it did not experiment with AI. “We think the AI trial has been successful,” he said in a statement. “In no way do we plan to reduce editorial headcount because of AI activities.” He added: “We are not trying to hide behind anything, we just want to get this right. To do this, we have to accept trial and error.”
In a Slack message reviewed by The Post, Brown told disgruntled employees Thursday that the company is “eager to thoughtfully gather and act on feedback.” “There will be better stories, ideas, data projects and lists that will come forward as we wrestle with the best ways to use the technology,” he said. The note drew 16 thumbs down emoji, 11 wastebasket emoji, six clown emoji, two face palm emoji and two poop emoji, according to screenshots of the Slack conversation.
News media organizations are wrestling with how to use AI chatbots, which can now craft essays, poems and stories often indistinguishable from human-made content. Several media sites that have tried using AI in newsgathering and writing have suffered high-profile disasters. G/O Media seems undeterred.
Earlier this week, Lea Goldman, the deputy editorial director at G/O Media, notified employees on Slack that the company had “commenced limited testing” of AI-generated stories on four of its sites, A.V. Club, Deadspin, Gizmodo and The Takeout, according to messages The Post viewed. “You may spot errors. You may have issues with tone and/or style,” Goldman wrote. “I am aware you object to this writ large and that your respective unions have already and will continue to weigh in with objections and other issues.”
Employees quickly messaged back with concern and skepticism. “None of our job descriptions include editing or reviewing AI-produced content,” one employee said. “If you wanted an article on the order of the Star Wars movies you … could’ve just asked,” said another. “AI is a solution looking for a problem,” a worker said. “We have talented writers who know what we’re doing. So effectively all you’re doing is wasting everyone’s time.”
Several AI-generated articles were spotted on the company’s sites, including the Star Wars story on Gizmodo’s io9 vertical, which covers topics related to science fiction. On the sports site Deadspin, an AI “Deadspin Bot” wrote a story on the 15 most valuable professional sports franchises that offered only limited valuations of the teams; the story was corrected on July 6 with no indication of what had been wrong. The food site The Takeout ran a “Takeout Bot”-bylined story on “the most popular fast food chains in America based on sales” that provided no sales figures. On July 6, Gizmodo appended a correction to its Star Wars story noting that “the episodes’ rankings were incorrect” and had been fixed.
Gizmodo’s union released a statement on Twitter decrying the stories. “This is unethical and unacceptable,” the union wrote. “If you see a byline ending in ‘Bot,’ don’t click it.” Readers who click on the Gizmodo Bot byline itself are told these “stories were produced with the help of an AI engine.”
Diakopoulos, of Northwestern University, said chatbots tend to produce poor-quality articles. The bots, which train on data from sites such as Wikipedia and Reddit and use it to predict the word most likely to come next in a sentence, still have technical issues that make them hard to trust for reporting and writing, he said.
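To make that mechanism concrete, here is a minimal, hypothetical sketch of a single next-word prediction step in Python, using the small open-source GPT-2 model through the Hugging Face transformers library; GPT-2 is an illustrative stand-in chosen for accessibility, not one of the models G/O Media used.

```python
# Minimal sketch of next-word prediction, the core mechanism Diakopoulos describes.
# Assumes the open GPT-2 model as a small, runnable stand-in for larger chatbots.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first Star Wars film was released in"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # One score per vocabulary token, at every position in the prompt.
    logits = model(**inputs).logits

# The model's sole output is a ranking of likely next tokens; fluent text
# emerges from repeating this step, with no lookup of facts and no notion of truth.
next_id = logits[0, -1].argmax()
print(prompt, tokenizer.decode(next_id))
```

Everything such a model produces comes from repeating that single step, which is why it can assert an incorrect Star Wars chronology just as fluently as a correct one.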
Chatbots are prone to making up facts, omitting information, drifting into opinionated language, regurgitating racist and sexist content, summarizing information poorly or fabricating quotes outright, he said.
News companies that want to use bots must keep “editing in the loop,” he added, and that duty can’t rest on a single person: content needs multiple reviews to ensure it is accurate and adheres to the media company’s style of writing.
But the dangers are not only to the credibility of media organizations, news researchers said. Sites have also started using AI to create fabricated content, which could turbocharge the dissemination of misinformation and create political chaos.
The media watchdog NewsGuard said at least 301 AI-generated news sites operate with “no human oversight and publish articles written largely or entirely by bots,” spanning 13 languages, including English, Arabic, Chinese and French. Their content is sometimes false, including celebrity death hoaxes and entirely fabricated events, researchers wrote.
Such sites proliferate, NewsGuard analysts said, because ad-tech companies often place digital ads “without regard to the nature or quality” of a site’s content, creating an economic incentive to have AI bots churn out as many ad-hosting articles as possible.
Lauren Leffer, a Gizmodo reporter and member of the Writers Guild of America, East union, said the trial is a “very transparent” effort by G/O Media to boost ad revenue: AI can quickly create articles that draw search and click traffic and cost far less to produce than stories written by a human reporter.
She added that the trial has demoralized reporters and editors, who feel their concerns about the company’s AI strategy have gone unheard and that management does not value them. It isn’t that journalists never make mistakes, she added, but a reporter has an incentive to limit errors because they are held accountable for what they write, an accountability that doesn’t apply to chatbots.
Leffer also noted that as of Friday afternoon, the Star Wars story had drawn roughly 12,000 page views, according to Chartbeat, a tool that tracks news traffic. That pales in comparison with the nearly 300,000 page views a human-written story about NASA generated over the past 24 hours, she said.
“If you want to run a company whose entire endeavor is to trick people into accidentally clicking on [content], then [AI] might be worth your time,” she said. “But if you want to run a media company, maybe trust your editorial staff to understand what readers want.”