Anthropic’s expansion in London could be a “political signal” to the US after a dispute between President Donald Trump's administration and the AI company.
San Francisco-based Anthropic announced it is to expand its presence in the UK capital with new office space for 800 people, amid speculation over its future at the forefront of the technology.
Anthropic has been embroiled in a dispute with Mr Trump after it refused to allow the US military to use its technology for domestic surveillance and autonomous weapons.
US Defence Secretary Pete Hegseth designated Anthropic a “supply chain risk” in March, the first time the label had been applied to a company. The move restricts how other military suppliers can work with the company.

London Mayor Sadiq Khan is said to have written to Anthropic chief executive Dario Amodei at the time of the fallout, encouraging the company to expand to the UK.
Rowan Wilkinson, associate researcher at the Chatham House think tank, said the dispute could have accelerated the pace of Anthropic’s expansion. “It leads to natural questions about whether it is political signalling to the Trump administration that they’re unhappy about this erosion and that they don’t take the threats lightly,” she told The National.
But she stressed that the decision would also be based on the UK being a desirable destination for tech companies seeking to enter the European market, owing to its lower corporation tax rates. Another factor is the appeal of the Knowledge Quarter in King’s Cross, which is also home to Google, OpenAI and Meta offices.
“The economic and talent reasons mean it could likely have happened in the next few years,” she said.

Anthropic currently has more than 200 people based in London, which it described in a statement on Thursday as one of its “most important research and commercial hubs outside the US”.
“Our expansion in the Knowledge Quarter gives us the room to grow into,” said Pip White, head of EMEA North at Anthropic. “The UK combines ambitious enterprises and institutions that understand what’s at stake with AI safety with an exceptional pool of AI talent – we want to be where all of that comes together."
The company has gained momentum for two of its products, the coding agent Claude Code and Mythos, which finds security flaws in software. The UK government's AI Security Institute has been testing Mythos, which is on a limited, supervised rollout to some US banks, and warned of cyber security threats that similar models could present in the future.
"Our testing shows that Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed," the institute said this week. "Future frontier models will be more capable still, so investment now in cyber defence is vital. AI cyber capabilities are dual use: while they pose security challenges, they can also help deliver game-changing improvements in defence."
Rival tech company OpenAI also signalled it would more than double its UK workforce this week, days after it withdrew from a major data centre project in north-eastern England. Plans for the Stargate data centre were abandoned because of high energy costs and concerns over future British copyright laws, after the UK government said it would not allow AI companies to use copyrighted works.
This was seen as a major blow to Prime Minister Keir Starmer’s ambitions to make the UK a centre for AI. Despite Britain's advantages, tech companies were likely to be “weighing up” the costs of setting up in the country instead of the EU, which has a larger market and uniform regulation, Ms Wilkinson said.
Another challenge is the Online Safety Act, as social media companies increasingly use AI to power their networks. “Some UK tech firms find it’s difficult to navigate and in some cases have highlighted it as more of a burden on their global policies on content moderation,” she added.