ARTIFICIAL INTELLIGENCE is already in the newsroom.
Smart transcription software helps reporters get through their notes faster and review meetings they couldn’t attend in person. Large language models like ChatGPT can summarize notes or draft memos and emails, and organizations are increasingly leaning on AI tools to create graphics. Some are experimenting with AI-written articles, with benign, awkward, or institutionally dire results.
Even the most advanced AI still isn’t reliable enough for most ethical newsrooms to just wind it up and let it go – and recent scandals at Sports Illustrated and Gannett highlight the dangers of doing so. But it is becoming a more common tool.
“The human hand is the most important part of this AI-assisted journalism,” said Sarah Scire, deputy editor of the Nieman Journalism Lab at Harvard, on The Codcast. “There are some quarters of the news industry that are very anti-AI. I don’t think at Nieman Lab that we think of it as being a hundred percent bad. There’s many ways it can be super useful for journalists, especially resource-strapped journalists who are dealing with newsrooms that are having layoffs or redistributing their resources within the newsroom.”
Nieman Lab has been publishing a slew of predictions for journalism in 2024, a good handful of which contemplate the role of AI in the field. Some expect a flood of middling AI-generated content; others see the tide rising and argue that newsrooms, especially small ones, need to start making use of the new tool.
The risks at this point are well known. AI software still often produces what are called “hallucinations,” or fabricated facts, like claiming a legal case exists when it doesn’t or falsely attributing a statement to someone who never said it.
Gordon Crovitz, former publisher of The Wall Street Journal and cofounder of NewsGuard, says AI must become more accurate or it will be of little use to newsrooms or to the government bodies increasingly interested in its adoption.
“One thing that I think is insidious about the way that these models can hallucinate is that they do something called creating a ‘plausible looking’ source,” Scire said. Nieman Lab, WIRED, and NewsGuard are among the organizations that have tested how much fabrication comes out of large language models, which are trained on an internet rich in information but also in misinformation.
Journalists who ask an AI to offer up articles on a subject, for instance, might find that it renders a list of sources that look real, Scire said. “So it’ll say the Baltimore Sun, it’ll say the New York Times, it’ll say the LA Times, but then when you go and click the link that they provide, you get a 404 – you get an error code. So having that human who’s willing to go and click on those actual links and double check, I think, is the most essential part of this AI-assisted journalism. Without it, I think that you’re getting something that looks plausible, but that often does not have any basis in reality or accuracy.”
Scire wrote this month about one of the first academic research papers examining public perception of AI in news production – the working paper “‘Or they could just not use it?’: The paradox of AI disclosure for audience trust in news” by the University of Minnesota’s Benjamin Toff and Oxford Internet Institute’s Felix M. Simon.
Readers want publications to label AI-generated or AI-assisted content, the paper found, but they also think less of the publications once they know AI is being used. It creates an odd incentive.
“Some of the high quality publishers, who are most likely to disclose when they’re using AI, are gonna feel the brunt of this judgment from readers,” Scire said. “While maybe some of these low quality sites, or pieces of reporting that don’t disclose that they’re being AI-generated, won’t have that same judgment against them.”
At this stage, AI may be able to churn out a workable email or a B-minus student essay – and yes, journalistic organizations are grappling with the plagiarism and copyright issues that can come from AI scraping existing articles to produce new ones – but Scire says the human skill set is still a newsroom’s main asset.
“I think that the problem with these AI-generated articles is that the writing is bad and the reporting isn’t accurate, and those are two pretty critical things for journalists and for journalism,” she said. “And a lot of folks who are interested in the future of journalism don’t think that this is necessarily the best way long-term to gain success, especially as we see the rise of subscription models – ones that rely not on ads and ad clicks, but on folks believing that this is a reliable spot to get news and excellent writing. I think we’re far away from AI being able to write better than humans, ’cause the content right now is dull, it’s unoriginal, and it’s more often than not wrong in ways that can be hard to detect both for the journalists who are using the technology and for the readers themselves.”

