
The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’

In the lead-up to the weekend’s US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm’s technology.

Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control.

In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic’s technology, saying he would “never allow a radical left, woke company to dictate how our great military fights and wins wars!”

Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its tools, without specifying ethical lines OpenAI won’t cross.

What does this mean for military AI? Is it the end for the idea of “ethical AI” in warfare?

AI companies and regulation

Last week’s events come at what was already a worrying time for AI ethics. The Trump administration last year banned states from regulating AI, claiming such regulation threatens innovation.

Meanwhile, many AI companies have aligned themselves with the administration, with executives including OpenAI boss Sam Altman making million-dollar donations to Trump’s inauguration fund. (Altman noted at the time that he has also donated to Democratic politicians.)

Anthropic has been less effusive, working on national security while warning that AI can sometimes undermine democracy and that current systems are not reliable enough to power fully autonomous weapons.

An emerging international consensus

Much of the concern around military applications of AI has focused on lethal autonomous weapons systems: devices and software that can select targets and attack them without human intervention.

Just a few years ago, an international consensus about the risks of these weapons seemed to be emerging among governments and technology companies.

In February 2020 the US Department of Defense announced principles for the use of AI across the entire organisation: it needed to be responsible, equitable, traceable, reliable and governable.

Likewise, in 2021 NATO formulated similar principles, as did the United Kingdom in 2022.

The US plays a unique leading role among its international allies in shaping global norms around military conduct. These principles signalled to countries such as Russia, China, Brazil and India how the US and its allies believed military use of AI should be governed.

Military AI and private enterprise

Military AI has relied extensively on partnerships with private industry, as the most advanced technology has been developed by private companies.

Project Maven, which set out in 2017 to increase the use of machine learning and data integration in US military intelligence, relied heavily on commercial tech companies.

The US Defense Innovation Board noted in 2019 that in AI the key data, knowledge and personnel are all in the private sector.

This is still the case today. However, the norms around how AI should be used are shifting rapidly, both in government and in much of the industry.

Trump and Silicon Valley

When Trump was re-elected in 2024, many in Silicon Valley welcomed the prospect of less regulation. Billionaire venture capitalist Marc Andreessen, author of The Techno-Optimist Manifesto, claimed Trump’s victory “felt like a boot off the throat”.

Joe Lonsdale, cofounder of AI-powered data analytics company Palantir, has been another vocal Trump backer. OpenAI president and cofounder Greg Brockman personally gave US$25 million to a Trump-supporting organisation last year.

We are a long way from the days of 2019 and 2020.

AI ethics assumes democratic norms

The question of whether an AI-enabled system is ethical or not is often seen as a question about the technology itself, rather than how it is used.

In this view, with the right design you can make an inherently ethical AI system. This often includes “algorithmic transparency” – being clear and honest about the rules the system uses to make decisions. The idea here is that ethics can be “baked in” to these rules.

The idea of ethical military AI also assumes it is operating under democratic principles. The idea behind algorithmic transparency is that “the people” should know how these systems work, because “the people” ultimately hold power in a democracy.

However, in an autocratic regime it doesn’t matter how transparent the algorithms are. There is no sense that civilians have a stake and deserve to know what their government is doing, or that its activities are in accordance with the law.

Free and public discussion is often seen as a key feature of liberal democracies. While eventual consensus may be valued, constructive disagreement and even conflict can be signs of a healthy democracy.

Decisions and consequences

In this light, Anthropic’s desire to have genuine discussions with the government about ethical red lines is an example of democratic practice in action. The company signalled both a desire for reasoned communication and the value of constructive disagreement.

In return, the Trump administration on Friday labelled Anthropic a “supply chain risk”, a rare designation previously only given to foreign companies, with secretary of defense Pete Hegseth writing that

effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.

Anthropic plans to challenge the declaration in court, as it may have profound economic and reputational consequences for the company.

Meanwhile, OpenAI has largely conceded that it will have no ethical limits, only legal ones. As a result, it is open for business with the US government – but faces reputational consequences of its own as consumer backlash mounts.

AI in a world without democratic norms

What does it all mean for ethical AI in the military? One hard-to-avoid conclusion is that if we want military AI to be used in an ethical way – following transparent rules and laws – we need strong democratic norms, which are in peril as the rules-based international order crumbles.

So far, little has changed in practice. Mere hours after Trump’s denunciation of Anthropic, the US launched strikes on Iran – reportedly planned with the aid of the company’s software.

