
The Sacramento Bee’s use of AI leads to protest among reporters

More than 30 of the 40 journalists at The Sacramento Bee are taking a stance against the use of generative artificial intelligence to produce news content.

Reporters in the Sacramento Bee News Guild, the union representing The Sacramento Bee’s reporters, have been withholding their bylines from stories generated by the paper’s new Content Scaling Agent, or CSA, a tool powered by artificial intelligence.

The new tool, developed by the Bee’s parent company McClatchy Media, is designed to increase the volume of content the Bee produces, according to the guild’s vice chair, Ariane Lange.

“So if you wrote a story where the headline was about, for example, a change in the Sacramento City Unified School District, it might generate a slightly different version of the story, where the headline and the angle was directed more towards an audience of parents,” Lange said. 

Lange has been working as an investigative reporter at the Sacramento Bee since 2021. She said that while she may gear stories toward specific audiences, she doesn’t feel comfortable with an AI tool taking her story and generating articles where that angle is shifted.

“There’s something icky about taking a story that’s already published, potentially, and making a watered-down version geared toward gaining clicks from a specific audience,” Lange said. “It feels a little exploitative.”

Stories generated by AI have been published in the Bee, but they are marked as such, noting they were based on original work by a reporter. Lange said the company wants to use reporters’ bylines to boost the CSA’s credibility.

“They know the public trusts the reporters,” Lange said. “They’re banking on using our credibility as reporters to shore up the credibility of this AI tool that reporters of the Sacramento Bee do not believe in.”

The Bee’s AI policy requires that every news story, including AI-generated ones, be edited by a human editor.

Lange said that one of the Guild’s biggest concerns is the quality of the whole paper, not just AI generated content.

“It’s going to affect the quality of the stories we care most about, because our editors will be even more swamped than they are now, because they’ll be dealing with these new AI generated stories,” Lange said. “We know that this initiative places quantity over quality, and in fact, prioritizes quantity to the detriment of quality.”

Lange said that the tool is “sandboxed,” meaning it’s not supposed to pull from content outside of the URL of the story it’s given. Chris Fusco, the executive editor of The Sacramento Bee, declined to comment on how the tool was built beyond what’s stated in the paper’s AI policy.

[Image: The headline of a recent Sacramento Bee story that was created using generative AI and edited by a Sacramento Bee editor. There is no reporter byline. Screenshot of Sacramento Bee website]

“Our policy states that we don’t mislead readers, and we have built that transparency into our use of bots and other automated technology in our reporting and in our products,” the policy reads.

While the policy doesn’t state how these tools were built or trained, it says the company collaborated with technologists to set parameters for story templates.

“For those familiar with the ability of certain AI to ‘hallucinate,’ or make up facts, rest assured: Our automated content does not use that type of technology,” the policy continues.

Generative AI systems, especially large language models, can produce responses stated as if they were fact in order to fulfill a prompt when there are gaps in their source information.

While McClatchy’s model might not hallucinate, Lange said that in the early stages of its rollout, it has already made mistakes.

“We already know that it does generate errors, which we know because reporters have already been strong-armed into editing stories based on their work,” Lange said. “My understanding is that it wouldn’t hallucinate as much as say, a ChatGPT, which is pulling information from the wide open internet. But it’s not a person, and it doesn’t know fact from fiction.” 

McClatchy Media did not respond to requests for comment.

AI and journalism: the ethics of generation

CalMatters has been developing its AI tools for journalism for over a decade. Their project Digital Democracy helps track bills and the legislative bodies they move through.

According to Neil Chase, the CEO of CalMatters, they’ve used AI to cover things no journalist is paying attention to. 

“There are generative AI tools being used to create, you know, massive amounts of disinformation, which is terrible for journalism. There are also uses of generative AI to create journalism that otherwise wouldn’t exist,” Chase said. “What we’re doing at CalMatters specifically is we’re not using generative AI, but AI tools that we’ve created to go through data about the state legislature.”

Chase said they use Digital Democracy to comb through data and look for story ideas: inconsistencies and unusual patterns of behavior, the kinds of things journalists might notice in person while covering the Capitol.

“We find those things and we call them tips. They are ideas like a scribble in a reporter’s notebook that they want to follow up on later,” Chase said. “We give those to reporters all over the state, not just at CalMatters.”

Chase said that CalMatters doesn’t generate any of its articles, mostly because of the kind of in-depth reporting it does. But especially when resources are limited, AI can help reporters cover things they otherwise wouldn’t be able to. He gave the example of small-town newsrooms building a tool to monitor police scanners and flag important events they missed.

“That seems like a useful tool,” Chase said. “Very different from publishing something final that is going to determine whether your audience trusts you or not.”

Chase said he can see a future for AI in journalism, and that he’s been astounded by the progress AI has made in recent years.

“I don’t think we can just blindly use these tools,” Chase said. “But I also don’t think we can sit here and say, well, AI could never make an ethical decision or AI will never be able to do this or never able to do that.”

But right now, according to Djordje Padejski, a professor who teaches about AI and journalism at Stanford and the Walter Cronkite School of Journalism at Arizona State University, the technology just isn’t there. 

“Can AI make ethical decisions that could be applied in journalism? My short answer is really no,” Padejski said. “Not in the way [that] journalism requires.”

According to Padejski, language models are designed to calculate the statistically most probable next word in a sentence, and while this can make the model seem realistic and accurate, that’s not always the case.

“They optimize for probability, they optimize for coherence, for pattern recognition,” Padejski said. “They don’t optimize for truth. They’re not designed to understand harms or public interest.”
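Padejski’s point can be sketched in a few lines of code: at each step, a language model scores candidate next words and emits the likeliest one, regardless of whether the resulting sentence is true. The candidate words and probabilities below are invented for illustration, not taken from any real model or newsroom tool.

```python
# Illustrative sketch: a language model assigns probabilities to
# candidate next words and picks the most probable one. It is
# optimizing for likelihood, not for truth.
def most_probable_next_word(probabilities):
    """Return the candidate word with the highest probability."""
    return max(probabilities, key=probabilities.get)

# Hypothetical probabilities a model might assign after the prompt
# "The city council voted to ..."
candidates = {"approve": 0.46, "reject": 0.31, "table": 0.23}
print(most_probable_next_word(candidates))  # -> approve
```

Whether the council actually voted to approve anything never enters the calculation, which is the gap Padejski describes between statistical pattern-matching and verification.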

He said AI accuracy rates simply aren’t on par with what good journalism requires.

“For computer sciences, accuracy rate over 80% is a fascinating, celebrated achievement,” Padejski said. “Journalism does not tolerate 80% accuracy … Journalism is a verification-based, evidence-based, document-based profession. AI systems have a statistical-pattern-based, synthetic approach.”

Padejski said that even when humans write summaries of other reporters’ stories, important points can get distorted or misinterpreted.

For Lange, that distortion is one of the most concerning problems she would have to contend with if she had to use the Bee’s Content Scaling Agent.

At the Bee, she’s an investigative reporter covering traffic deaths. Since 2024, she’s covered and documented every single person who has died in traffic collisions on city streets.

When these deaths happen, she’s often a stranger talking to people on the worst day of their lives.

“I will do my best to honor their loved one’s life and to make sure that public officials are held to account for not doing more to prevent traffic deaths,” Lange said. “But my employer might feed that story to a chat bot. That’s revolting.”

CapRadio provides a trusted source of news because of you.  As a nonprofit organization, donations from people like you sustain the journalism that allows us to discover stories that are important to our audience. If you believe in what we do and support our mission, please donate today.
