Tech Firm Spars With Pentagon Over Unchecked Use Of AI In Warfare, Surveillance


The US government is deliberately exerting pressure on private companies to militarize advanced AI. Such trends are deeply disturbing, with Washington DC consistently ignoring the multipolar world’s calls to regulate AI, which would prevent the uncontrollable (ab)use of this highly advanced emerging technology, particularly for military purposes.

Written by Drago Bosnic, independent geopolitical and military analyst

As the political West’s militaries struggle to meet recruitment targets and face widening technological gaps in new weapon systems (particularly hypersonics), they’re forced to look for alternatives in order to continue their aggression against the world. This is certainly not an easy task, especially as the window of opportunity for the United States and NATO to retain some of their key high-tech advantages is closing rapidly, meaning they must act as soon as possible.

To achieve this, the Pentagon is placing nearly all of its bets on militarizing advanced AI. This has been ongoing for the last decade or so, but is now being supercharged to the maximum. So-called Big Tech, both legacy firms (Alphabet/Google, Amazon, Apple, Meta/Facebook, etc.) and emerging companies (Anduril, Palantir, Anthropic, etc.), is heavily involved in this process, blurring the lines between the infamous Military Industrial Complex (MIC) and the civilian sector.

The US government has consistently ignored the multipolar world’s calls to regulate AI, which would prevent the uncontrollable (ab)use of this highly advanced emerging technology, particularly for military purposes. Now, however, Washington DC is taking the same approach at home, prompting strong reactions from some AI companies and start-ups, which are calling for limits on how far this technology can be used in warfare and surveillance.

Namely, according to multiple sources, the Pentagon and Anthropic are at odds over a contract renewal with regard to the use of the latter’s Claude system. Bloomberg, which quoted “a person familiar with the private negotiations”, reports that Anthropic insists on “stricter limits before extending its agreement” and wants “firm guardrails to prevent Claude from being used for mass surveillance of Americans or to build weapons that operate without human oversight”.

In contrast, the Department of War (DoW) wants far more leeway in integrating these systems into its kill chain. Officially, the Pentagon wants “flexibility to deploy the model so long as its use complies with the law”. In other words, control over the AI is the crux of the matter. Anthropic wants specific, long-term guarantees that its systems won’t be used without human oversight, while the US government merely promises to “follow the law” (which can be changed at any time).

According to Bloomberg, the San Francisco-based high-tech company wants to “distinguish itself as a safety-first AI developer”. Anthropic’s specialized government version called Claude Gov is “tailored to US national security work, designed to analyze classified information, interpret intelligence and process cybersecurity data”. The AI firm says it “aims to serve government clients while staying within its own ethical red lines”. And yet, it’s very difficult to reconcile the two.

“Anthropic is committed to using frontier AI in support of US national security,” a spokesperson reportedly said, adding: “The ongoing discussions with the War Department are productive conversations, in good faith.”

However, the Pentagon is much less optimistic, effectively demanding that all “guardrails” be removed and control handed over to the US military.

“The Department of War’s relationship with Anthropic is being reviewed,” chief Pentagon spokesman Sean Parnell told Fox News, adding: “Our nation requires that our partners be willing to help our warfighters in any fight.”

Various reports indicate that some Pentagon officials have “grown wary” and view reliance on Anthropic as “a potential supply-chain vulnerability”. According to an unnamed senior official, Washington DC is even contemplating demanding that contractors “certify they are not using Anthropic’s models, an indication that the disagreement could ripple beyond a single contract”. In simpler terms, the US military is effectively blackmailing the AI firm.

This is certainly a disturbing development, as it sends a message to other companies that they should drop even nominal limits to the use of advanced AI in warfare and surveillance. Reports indicate that “tools from OpenAI, Google and xAI are also being discussed for Pentagon use, with companies working to ensure their systems can operate within legal boundaries”. In this particular case, the term “legal boundaries” sounds hypocritical and even comical.

Namely, binding limits are exactly what the US wants to avoid, so that it can have a free hand in how it uses advanced AI. The Trump administration and its allies are now piling on Anthropic in an attempt to coerce it into changing its policy. This includes Elon Musk, who effectively turned the spat into an ideological matter, calling Anthropic’s AI “evil and misanthropic” and accusing the company of setting up Claude Gov to “hate Whites & Asians, especially Chinese, heterosexuals and men”.

Anthropic reportedly raised concerns over the use of its program against Venezuela during the illegal kidnapping of President Nicolas Maduro, when the Pentagon failed to notify the company that Claude Gov would be included in the operation. Expectedly, Washington DC is furious and is out for blood, with one senior official even saying that “it will be an enormous pain in the ass to disentangle” and that “we are going to make sure they pay a price for forcing our hand”.

Anthropic CEO Dario Amodei still insists on “guardrails to prevent mass surveillance of Americans or the use of AI in fully autonomous weapons systems without human involvement”. The company’s Acceptable Use Policy (AUP) explicitly prohibits “the application of Claude for the design or use of weapons, domestic surveillance and facilitating violence or malicious cyber operations”. However, the Pentagon is adamant that “those restrictions are unworkable”.

Anthropic thinks that “these restrictions are not waived for military/government users” unless the contract “includes specific safeguards that the company judges adequate”. However, the US military insists that “military AI tools must be available for all lawful purposes” and that “real-world operations are riddled with gray areas that rigid rules cannot anticipate”. The Pentagon also applies the same standard to all other AI companies.

Reports also indicate that sources familiar with the talks said senior Pentagon officials had “grown increasingly frustrated with Anthropic and seized the opportunity to escalate the dispute publicly”. In other words, Washington DC is deliberately exerting pressure on private companies to militarize advanced AI. Such trends are deeply disturbing, particularly as the world actually needs more regulations to limit the unchecked use of AI in warfare and surveillance.

In that regard, the behavior of people like Musk is quite concerning, especially now that SpaceX is also involved in similar projects with the Pentagon. Other advanced AI firms are following this lead in militarizing their programs. The people they put in charge also reveal their true nature, with Palantir placing Louis Mosley, grandson of Oswald Mosley (infamous for leading the British Union of Fascists in the 1930s), at the helm of its UK department.
