
Billionaire newspaper owners are in the middle of a massive freakout about their Opinion sections during the second Trump administration.
Last week, Amazon founder and Washington Post owner Jeff Bezos announced that the Post’s Opinion section would reorient itself “in support and defense of two pillars: personal liberties and free markets… Viewpoints opposing those pillars will be left to be published by others.” This followed Bezos’s decision last October that the Post would stop endorsing presidential candidates.
And this week, Patrick Soon-Shiong, owner of The Los Angeles Times, announced that the paper has added AI-powered “alternative perspectives” — dubbed “Insights” — to much of its opinion content. (The Opinion section is also being rebranded as “Voices,” and anywhere in the paper that “a piece takes a stance or is written from a personal perspective, it may be labeled Voices.”)
“The purpose of Insights is to offer readers an instantly accessible way to see a wide range of different AI-enabled perspectives alongside the positions presented in the article,” Soon-Shiong wrote on Monday in a letter to readers.
Voices and Insights button now live @latimes. Try it. Now the voice and perspective from all sides can be heard, seen and read —no more echo chamber. Thanks to our talented content management software team running this Graphene platform pic.twitter.com/HslD5dKinz
— Dr. Pat Soon-Shiong (@DrPatSoonShiong) March 3, 2025
The hope, I think, is that the L.A. Times will appear more objective and unbiased if it presents both sides of an issue. A human writer will present one side of a debate, AI will present the other, and readers, seeing the multitude of perspectives represented, will trust the L.A. Times more and pay up. Or, politicians from across the political spectrum will recognize that the L.A. Times is treating them all fairly and never complain about coverage again. This will all “help heal our divided nation through a platform that enables civil discourse,” Soon-Shiong tweeted in November.
The reality is kind of a mess, I found this week after delving into the AI-generated “Insights” across L.A. Times content. Here’s some of what I saw:
“Insights” are kind of hidden…
A piece with “Insights” turned on has a little lightbulb icon toward the top. The insights themselves are at the very bottom of stories, right above the comments module, and you have to click to expand them.
…which may be a good thing.
The L.A. Times stresses that the AI analysis is “not created by the editorial staff of the Los Angeles Times” and that AI is “an experimental, evolving technology.” It’s unclear if the AI content is being reviewed by anyone before publication (Matt Hamilton, the vice-chair of the L.A. Times Guild, told The Guardian it’s “unvetted”), but stuff like this is already happening:
Earlier today the LA Times had AI-generated counterpoints to a column from @gustavoarellano.bsky.social. His piece argued that Anaheim, the city he grew up in, should not forget its KKK past.
The AI “well, actually”-ed the KKK. It has since been taken off the piece.
www.latimes.com/california/s…
— Ryan Mac (@rmac.bsky.social) March 4, 2025 at 12:34 AM
Given that “Well, actually” is the entire point of this feature, examples like the above aren’t surprising. (I noticed that an op-ed published on Tuesday — “Imagine how disabled people can, rather than assuming we can’t” — did not have the AI feature enabled, which is probably a good thing.)
The AI’s sourcing and citations are strange.
The AI “Insights” are generated by the LLM search engine Perplexity, which has also signed deals with news publishers. One of Perplexity’s major selling points is that it cites its sources. But I found a bunch of oddities, both in the sources it chooses to cite and in the way it cites them. Much of the sourcing wouldn’t pass scrutiny in a newsroom or a classroom.
Take a March 1 op-ed titled “Great documentaries reveal history’s truth. Unregulated AI threatens to distort it.” In its counterpoints, the AI uses several sources clearly written from AI-skeptical viewpoints to make pro-AI arguments.
One of the pro-AI bullet points, “Regulation risks stifling innovation,” cites two pieces: a World Economic Forum opinion piece titled “AI is finding its voice and that’s bad for democracy,” and a piece from The Hacker News titled “AI-powered deception is a menace to our societies.”
Neither piece argues that AI is over-regulated; both argue that AI is unpredictable and potentially dangerous without a great deal of human intervention. The only positive note the World Economic Forum piece strikes about AI-generated voices (presumably the hook for the citation) is this mention of Val Kilmer: “Of course, computer-generated voices are not all bad. Stephen Hawking found a voice with which to reveal the universe, and Val Kilmer delivered lines in ‘Top Gun: Maverick’ using AI trained on recordings made before he lost his voice to cancer. But Hawking and Kilmer sanctioned the voices and controlled what they said.”
The AI often draws on the same sources both to describe what is in the article and to provide differing viewpoints. Here’s an example from a February 27 op-ed, “If Trump aims to bring down prices, his policies will need to swerve.”
Those two sections both cite the same seven sources:
[1] Elon University Poll, “Higher prices, economic disruptions expected as a result of tariff increases”
[2] Holland & Knight [law firm], “Trump’s 2025 Executive Orders”
[3] Federal Reserve, Monetary Policy Report, March 2024
[4] Federal Reserve statement, January 29, 2025
[5] Whitehouse.gov, “Fact Sheet: President Donald J. Trump launches massive 10-to-1 deregulation initiative”
[6] International Monetary Fund, “Monetary policy: Stabilizing prices and output”
[7] Trading Economics, “United States Fed Funds Interest Rate”
Sara Platnick, a spokesperson for Perplexity, told me the sourcing “appears” to be the same “because it’s one list of all sources that inform both sections. While there may be some overlap, some citations are distinct depending on the specific article.” But in all the pieces I looked at, I found only a couple of distinct citations.
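That “one list” behavior tracks with how search-backed LLM APIs generally return sources: the citations come back as a single flat array attached to the whole response, not as per-claim attributions. Here’s a minimal sketch of the kind of call that could power a feature like this, assuming Perplexity’s OpenAI-compatible chat completions API; the model name, prompt, and response handling are my assumptions, not the L.A. Times’ actual integration:

```python
# Hypothetical sketch only: the L.A. Times hasn't published its integration.
# Assumes Perplexity's OpenAI-compatible chat completions endpoint, which
# returns one flat "citations" list of URLs for the whole response.
import requests

API_URL = "https://api.perplexity.ai/chat/completions"
API_KEY = "YOUR_PERPLEXITY_API_KEY"  # placeholder

def insights_for(article_text: str) -> dict:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "sonar",  # Perplexity's web-search-backed model
            "messages": [
                {"role": "system", "content": (
                    "Summarize the argument this opinion piece makes, then "
                    "list differing viewpoints as short bullet points."
                )},
                {"role": "user", "content": article_text},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "analysis": data["choices"][0]["message"]["content"],
        # One citations list covers both the summary and the counterpoints,
        # consistent with Perplexity's explanation above.
        "sources": data.get("citations", []),
    }
```

If that’s roughly the shape of the pipeline, the duplication isn’t a bug so much as a design limitation: nothing in the response ties a given source to a given claim.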
At any rate, whatever the AI is doing here is way less useful than what a human would do. A human assigned to come up with differing viewpoints on an issue would probably read a bunch of articles on the topic from publications across the political spectrum, then present the various counterarguments from those sources. (This is Tangle News’ whole business model.) That’s not what we’re getting here.
Maybe this isn’t the point, but the AI isn’t directing you to stuff you’re going to want to read, or to anything particularly authoritative. It’s a lot of Wikipedia, Trump White House statements, something called Straight Arrow News, the pro-Turkish-government newspaper Daily Sabah, and a Greenwich high school’s student newspaper. If I were editing these, I’d be throwing “FIND A BETTER SOURCE” in the comments, over and over. The human-written op-eds themselves have more interesting and varied sourcing. (I started cataloging all of this here, if you want to take a peek.)
I didn’t find hallucinated links or anything like that in my testing. But this product is mediocre, and I’d be irritated to find it at the bottom of a piece I had written.
The “Viewpoints” labels, which analyze where content falls on the political spectrum, are confusing.
Any “Voices” content that gets the AI treatment also gets an AI-generated label: Left, Center Left, Center, Center Right, or Right. The feature is powered by Particle, the AI news app launched last year by former Twitter employees. Particle itself says it determines outlets’ political leanings by using designations from AllSides, Ad Fontes Media, and Media Bias/Fact Check.
Here, though, Particle isn’t determining news outlets’ political leanings — the claim is that it’s determining where a piece itself falls on a political spectrum. This capability, which is new for Particle, “considers multiple dimensions at a sentence-by-sentence level,” according to a Medium post published Monday:
These are some of the dimensions considered:
Topic selection and framing: What is discussed and how it is contextualized
Policy positions: How things should be changed
Language and rhetoric: How things are expressed
Source treatment: Who is considered authoritative
Moral judgments: Why things matter
While the company says the new feature “produces short summaries of the composite ratings for pieces, as well as in-depth breakdowns that include evidence for each score,” those summaries aren’t being published on the L.A. Times’ site. As a result, it wasn’t clear to me why or how articles are rated the way they are.
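To make that concrete, here’s a toy illustration of what “sentence-by-sentence” scoring across those five dimensions, rolled up into one of the five labels, could look like mechanically. Particle hasn’t published its models, weights, or thresholds, so everything below is hypothetical:

```python
# Toy illustration only: Particle's actual models, dimension weights, and
# label thresholds are not public; everything here is hypothetical.
from statistics import mean

DIMENSIONS = [
    "topic_framing",      # what is discussed and how it's contextualized
    "policy_positions",   # how things should be changed
    "language_rhetoric",  # how things are expressed
    "source_treatment",   # who is considered authoritative
    "moral_judgments",    # why things matter
]

def score_sentence(sentence: str) -> dict:
    """Stand-in for a per-dimension model call. A real system would use an
    LLM or a trained classifier; here every dimension scores neutral (0.0)
    on a -1.0 (Left) to +1.0 (Right) scale."""
    return {dim: 0.0 for dim in DIMENSIONS}

def composite_label(sentences: list[str]) -> str:
    # Average every dimension score across every sentence into one number...
    overall = mean(
        score for s in sentences for score in score_sentence(s).values()
    )
    # ...then bucket that number into the five labels the L.A. Times shows.
    if overall < -0.6:
        return "Left"
    if overall < -0.2:
        return "Center Left"
    if overall < 0.2:
        return "Center"
    if overall < 0.6:
        return "Center Right"
    return "Right"
```

Whatever the real system does, the reader-facing output is just the final bucket; without the per-dimension evidence Particle says it generates, you can’t see which sentences moved the score.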
Take this op-ed: “People like Andrew Tate have no place in the American conservative movement,” by Josh Hammer, senior editor-at-large for Newsweek. “The basic problem with some conservatives’ embrace of this man is that Andrew Tate is an abominable human being,” Hammer writes. The op-ed also criticizes the Trump administration’s links to Tate.
The AI says that this piece aligns with a “Right” point of view but doesn’t explain why. I’d originally guessed that was because Hammer is a conservative activist, but that information isn’t included anywhere in the piece itself. A Particle rep told me the tool “does NOT take the author’s general political leanings into account — the tool evaluates one piece of content on its own. This is meant to focus readers on the opinions within that particular piece, which is especially helpful when writers take stands on certain topics that do not align with their usual party or ideology.”