Anthropic has until Friday to accept the Pentagon's demands that it loosen restrictions on how the Defence Department uses its AI tools, it has been reported.
The company's chief executive Dario Amodei met Defence Secretary Pete Hegseth in Washington on Tuesday. There have been reports that Mr Hegseth was unhappy with Anthropic limiting how its AI products could be used for defence and military purposes.
Anthropic has several contracts with the US government.

The Wall Street Journal had previously reported that Anthropic’s AI offering, Claude, was used in the US mission that resulted in the capture of former Venezuelan leader Nicolas Maduro.
Technology expert Dean Ball, who worked on crafting President Donald Trump's AI Action Plan, expressed concern about what he called Mr Hegseth's escalatory approach to Anthropic.
"A mere contract cancellation is not what is being threatened by the government," Mr Ball posted on X. "Instead it is something broader: designation of Anthropic as a 'supply chain risk.' This is normally applied to foreign-adversary technology like Huawei."
Tech companies are coming under increasing scrutiny over how their products, particularly AI, are being used in conflicts.
After months of demonstrations and staff members resigning, Microsoft in September announced that it had disabled services to a unit in Israel's Ministry of Defence.
The move followed an investigation at the company that examined how its technology was being used by Israel's military in Gaza, and whether the use of those tools breached Microsoft's policies.
The Guardian reported that the company's technology was used to operate a powerful surveillance system that gathered millions of civilian phone calls in Gaza and the West Bank.
In a post on Microsoft's blog in September, Brad Smith, the company's vice chairman and president, said that the review process had “found evidence that supports elements” of those reports.
While some companies such as Microsoft and Anthropic have sought to place limits on how their AI products are used, others, like Palantir, have taken the opposite approach.
In a letter to shareholders, Palantir chief executive Alex Karp said he had no qualms about how his company's software is used in defence.
"We note only that our commitment to building software for the US military, to those whom we have asked to step into harm’s way, remains steadfast, when such a commitment is fashionable and convenient and when it is not," Mr Karp said.
While Anthropic does not have the name recognition of OpenAI or Google, the company has gained a loyal following with its Claude AI chatbot. It has attracted many customers with coding models that have significantly cut software development time and levelled the playing field for start-ups and novice developers.
Anthropic has also pushed a vision for AI that differs from that of its rivals. "Anthropic is a public benefit corporation dedicated to securing its benefits and mitigating its risks," says a message on its website.
In 2025, it went against the grain of many in the US technology industry by defending strict chip export policies, antagonising companies such as Nvidia and Microsoft, which felt the rules would harm American influence in the global AI race.