
Judge Blocks Pentagon Move Against Anthropic in AI Ethics Dispute | National Catholic Register

A federal judge says the government likely violated constitutional protections in penalizing Anthropic after the AI firm refused to support autonomous weapons and mass surveillance.

A federal judge has temporarily blocked the Department of Defense from labeling American artificial intelligence (AI) company Anthropic a “supply chain risk,” a designation the Pentagon gave the company after Anthropic refused to allow the military to use its products for autonomous weaponry and mass surveillance.

The case has drawn interest from prominent Catholics due to the relative novelty of a major AI developer taking a stand in favor of ethical and socially responsible safeguards around the technology in the face of government coercion.

In a March 26 ruling, which is not a final decision in the case, Judge Rita Lin of the U.S. District Court for the Northern District of California said Anthropic has a high likelihood of ultimately winning its case and proving that the government’s “supply chain risk” designation violated, among other laws, the First and Fifth Amendments.

Anthropic, creator of the widely adopted AI assistant Claude and a company founded with an eye to responsible AI development, had been in talks with the Pentagon earlier this year as part of a $200 million contract negotiation that would have expanded the military’s use of Anthropic’s products, which have already widely permeated the U.S. defense apparatus, including classified networks.

Defense Secretary Pete Hegseth pushed, in public statements and in direct negotiations, for Anthropic to allow the government to put its AI technology to work for “any lawful use” — including fully autonomous weapons systems and the mass surveillance of U.S. citizens. In late February, Anthropic said it refused “in good conscience” to allow such uses of its products, arguing that the technology is not yet sufficiently reliable and safe for them.

In retaliation, the Pentagon awarded the lucrative contract instead to rival AI developer OpenAI, and on March 3 Hegseth moved to designate Anthropic a “supply chain risk” — a first for an American company — directing all government agencies to halt use of Anthropic’s products within six months. The designation would also force anyone wishing to do business with the U.S. military to sever any commercial relationship with Anthropic.

Fearing irreparable harm to its business, Anthropic on March 9 filed two lawsuits against the Department of Defense to block the Pentagon’s actions. A number of other players in the AI space, including Microsoft, filed amicus curiae (“friend of the court”) briefs in the case supporting Anthropic.

The government is free to use whatever AI product it chooses, the judge wrote, but it is not lawful for the government to pursue measures that “appear designed to punish” and “cripple” Anthropic — a company whose requested restrictions the government had previously accepted, with no objections raised by the Pentagon until the recent contract negotiations.

Moreover, punishing Anthropic for going public with the government’s demands is “classic illegal First Amendment retaliation,” Lin wrote, and introduces “the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”

It seems clear, the judge went on, that the government is attempting to make an example out of Anthropic, potentially chilling future discussions around AI ethics and safety.

“At bottom, Anthropic has shown that these broad punitive measures were likely unlawful and that it is suffering irreparable harm from them. Numerous amici have also described wide-ranging harm to the public interest, including the chilling of open discussion about important topics in AI safety,” Lin wrote.

The Pentagon has seven days to appeal Lin’s ruling, after which the ruling will take effect.

Many observers, including Catholics, have expressed appreciation for Anthropic’s decision to make a principled stand against the government’s demands. A group of 14 Catholic moral theologians and ethicists had filed an amicus brief in the case stating that the teaching of the Catholic Church supports Anthropic’s decision to reject the Pentagon’s demands.

“Anthropic, in the red lines it has drawn for the use of its products on domestic mass surveillance and autonomous weapons systems, sought to uphold minimal standards of ethical conduct for technical progress. In doing so, Anthropic was acting as a responsible and moral corporate citizen, not as a threat to the safety of the American supply chain,” the authors of the brief wrote.

The Catholic authors of the brief stress, however, that the Church’s reasons for opposing autonomous weapons and mass surveillance are not merely pragmatic. Any use of weapons capable of making wartime decisions on their own without human input violates the Catholic principle of “just war,” and a widespread surveillance regime by the military would undermine the dignity of those being surveilled, the scholars wrote.
