
Anthropic’s Break With the Pentagon Ignites AI Ethics Debate | National Catholic Register

Amid the explosion in recent years of artificial intelligence (AI), Catholics have consistently called for the inclusion of socially responsible safeguards, limits and ethical principles within the technology.

Now, a leading AI developer that is trying to do that has found itself in a major dispute with the U.S. government — stoking a heated debate over the ethical and moral dimensions of AI development.

Anthropic, a San Francisco startup, is the creator of Claude, a large language model (LLM)-based AI assistant that has already enjoyed wide adoption across many sectors of U.S. society, including thousands of businesses and schools. Founded in part by defectors from industry juggernaut OpenAI, Anthropic has positioned itself as the safe, responsible option in the AI ecosystem; its CEO, Dario Amodei, often gives interviews advocating for the development of “guardrails” to protect humanity from unchecked AI.

The U.S. government, meanwhile, has since last year been exploring the use of AI in national defense. Four major U.S. AI companies — Google, xAI, OpenAI and Anthropic — have been working with the Pentagon to varying degrees, with highly lucrative contracts on the table.

In part because of its developers’ relatively cautious approach, Claude was the first AI product allowed onto the Pentagon’s classified networks. As a tool for the U.S. military, Claude’s analytic capabilities have reportedly been used to support numerous recent high-profile operations, including the capture of Venezuelan President Nicolás Maduro and the still-unfolding war in Iran.

Anthropic had been in talks with the Pentagon as part of a $200 million contract negotiation that would have expanded Anthropic’s products throughout the U.S. defense apparatus. But the talks fell apart in late February in dramatic fashion.

Pete Hegseth, the defense secretary, pushed in public statements and in direct negotiations for Anthropic to allow the government to put its AI technology to work for “any lawful use” — including fully autonomous weapons systems and the mass surveillance of U.S. citizens.

When Anthropic refused to “in good conscience” allow those uses of its product, the government took the unprecedented step of designating Anthropic as a “supply chain risk” — a first for an American company — and directing all government agencies to halt the use of Anthropic within six months.

As for the massive Pentagon contract, Anthropic’s rival OpenAI — creator of ChatGPT — has reportedly already swooped in to ink its own deal with the government, without the same kinds of safeguards that Anthropic sought. A subsequent outpouring of goodwill for Anthropic’s principled stand has emerged online, with many on social media proclaiming that they intend to delete ChatGPT in favor of Claude, which shot to the top of the Apple App Store.

Though arguably noble, Anthropic’s stand may prove fruitless. Barring a court order, the government’s newly launched vendetta against Anthropic could well lead to the company’s financial annihilation: The government’s “blacklist” designation precludes any contractor that works with the Pentagon from also doing business with Anthropic.

But experts in AI ethics nevertheless expressed appreciation for the path Anthropic has chosen to tread, which in some respects aligns with the moral entreaties of the Vatican under Popes Francis and Leo.

“For a long time, we have kind of pushed the ethical and moral considerations about AI to the back burner. … And now, all of a sudden, it’s right in our face. It’s the most important issue right now,” Father Philip Larrey, a professor of philosophy at Boston College who teaches AI ethics, told the Register.

“It’s time that we start putting the priorities where they belong: what are the beneficial and moral uses we can make of these technologies? That [question’s] finally capturing headlines around the world.”


‘Grave Ethical Concern’

Pope Leo XIV, continuing the Church’s engagement with AI begun under his predecessor, Pope Francis, has consistently called for AI to be used in ways that prioritize human flourishing and the common good, and for AI development that is transparent and socially responsible.

On the two ethical quandaries specifically at play in the Anthropic case — namely autonomous weapons systems and AI-enabled mass surveillance — Catholics can be assured of clear guidance.

In recent years, the Vatican has repeatedly and forcefully expressed opposition to the idea of empowering computerized weapons systems to operate independently and select targets for destruction, which includes the use of AI to target and engage enemies without direct human intervention. Such systems are formally known as lethal autonomous weapons systems (LAWS), though they are popularly, and provocatively, dubbed “killer robots.”

The Dicastery for the Doctrine of the Faith’s early 2025 document Antiqua et Nova, an expansive note on artificial intelligence, further declares LAWS a “cause for grave ethical concern” due to their lack of “the unique human capacity for moral judgment and ethical decision-making.” Pope Francis succinctly summarized the Church’s position in 2024: “No machine should ever choose to take the life of a human being.”

Antiqua et Nova also provides a Catholic perspective on the issue of mass surveillance. Already in use in countries like China as a means of controlling the populace, including the nation’s Christians, AI-powered surveillance technologies are also already broadly in use across the United States. Private companies like Flock Safety, using AI-parsed data from hundreds of thousands of internet-connected cameras in some U.S. cities, are providing vast amounts of information to law enforcement.

In the face of this, the Vatican document says AI used for surveillance “aimed at exploiting, restricting others’ freedom, or benefitting a few at the expense of the many is unjustifiable.” Such systems, when deployed in a way that disrespects the dignity and freedom of every person, reduce people’s lives to “a kind of spectacle to be examined and inspected.”

Father Larrey said that, in his opinion, Anthropic “did good” by refusing to compromise on those two principles. It remains to be seen whether the Vatican’s warnings on AI for surveillance and autonomous weaponry will be heeded at the highest levels of world power, including the U.S. government.

“I still believe AI can be used for good, and I think it will be,” Father Larrey commented.


A Grand but Futile Gesture?

Brian Patrick Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University, a Catholic institution, was given the opportunity to provide feedback on Anthropic’s Constitution, a living document maintained by the company that describes Claude’s behavior in detail and the “safe and beneficial” values the company says guide the AI. Anthropic released the latest iteration of the Constitution in January.

“[Anthropic] found the challenges that they were running into significant enough that they looked for outside help. And they saw the Catholic Church as an organization that could help in that regard, so they reached out specifically to Catholic people, people who would be familiar with the tradition of moral formation, [asking us:] How do you actually help develop ethical thinking?” Green told the Register in an interview.

“I think of all the AI organizations, they’re the ones who take ethics most seriously,” he said.

That said, it does not appear that Anthropic’s principled stand was explicitly inspired by a Catholic worldview. Anthropic’s CEO has said, for example, that the company is not “categorically against fully autonomous weapons,” but rather believes that the technology is not yet reliable enough to be used effectively and must be deployed “with proper guardrails, which don’t exist today.”

Green knows several of the Anthropic creators personally. Given that most large companies will find ways to compromise ethics for the sake of money, Green said he appreciates the fact that, as an organization, Anthropic is at least trying to do the right thing — and likely will suffer financially, and perhaps existentially, as a result.

“You can imagine an alternate universe where Dario Amodei just said, ‘Okay, we’ll sign it. It’s no big deal.’ They would be doing fine as a business, and the rest of the world would not be talking about AI ethics right now. [But] this universe that we’re living in is one that has been fundamentally changed in a lot of ways because somebody decided to take an ethical stand. I think that’s important,” Green said.

“I think this ethical stand is good, potentially — assuming that the government does not actually destroy Anthropic and reduce their value to zero,” he added.
