
The United Kingdom has signed a major deal with the U.S. firm Anthropic, and separately made a significant change to how it approaches AI safety.
Less than two years ago, the British government announced the founding of the U.K. AI Safety Institute (AISI), which aimed to tackle security risks like the use of AI to make chemical or biological weapons, and the potential for humanity to lose control of a superintelligent AI. The institute also had a partial focus on AI’s societal risks, like spreading misinformation and perpetuating bias.
On Thursday, the government recast the organization as the AI Security Institute. As its new name suggests, the reborn AISI still explores some of those security risks. However, it is no longer keeping an eye on societal risks, nor does it appear to be focusing on the potential for AI to run amok.
To hammer home the change, the new AISI will feature a “criminal misuse team” working together with the Home Office, the U.K.’s security ministry.
The British government also said it will work with Anthropic to explore using AI to transform the country’s public services and drive scientific research. The partnership is a first for the government, but it is not exclusive; the government said it will try to strike similar deals with other leading AI companies.
“We look forward to exploring how Anthropic's AI assistant Claude could help U.K. government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to U.K. residents,” Anthropic CEO Dario Amodei said in a statement. No financial terms were mentioned.
Anthropic’s Economic Index, launched this week, will also come into play here. The index draws on anonymized conversations with Claude to infer how AI is being used across the economy, and the U.K. will use this information to “adapt its workforce and innovation strategies for an AI-enabled future,” the government said.
Shifting focus
The AISI’s revised mission appears to be part of the U.K.’s new strategy of falling into lockstep with President Donald Trump’s administration in the U.S.
Earlier this week, the U.K. caused some consternation in the AI community by refusing to sign the declaration emerging from the Paris AI Action Summit. The U.S. also declined to sign it.
On the surface, the U.S.’s reasoning came down to a desire to avoid excessive regulation of AI (the declaration referred to international frameworks and governance), but many saw the document’s references to inclusive AI and reducing digital divides as a guarantee that Trump’s anti-DEI administration wouldn’t sign it. The U.K.’s refusal was more of a surprise; its government cited concerns about “global governance” and national security.
A few weeks earlier, one of Trump’s first acts as returning president was to rescind President Joe Biden’s 2023 executive order on AI, which had provided guardrails for the technology, including in areas affecting civil liberties.
U.S. Vice President JD Vance told the summit this week that he was not in Paris “to talk about AI safety, which was the title of the conference a couple of years ago,” but rather to talk about “AI opportunity.” His message stressed not being risk-averse when it comes to AI.
On Thursday, U.K. tech secretary Peter Kyle struck a very similar note.
“The changes I’m announcing today represent the logical next step in how we approach responsible AI development—helping us to unleash AI and grow the economy,” he said. “The main job of any government is ensuring its citizens are safe and protected, and I'm confident the expertise our Institute will be able to bring to bear will ensure the U.K. is in a stronger position than ever to tackle the threat of those who would look to use this technology against us.”
The government stressed in its statement that the AISI “will not focus on bias or freedom of speech,” and AISI chair Ian Hogarth insisted that “the Institute’s focus from the start has been on security.”
However, alongside that security focus, the AISI has also explicitly addressed societal issues like the potential for AI to manipulate public opinion, or to reinforce societal biases when used in transport or emergency services systems. Former Prime Minister Rishi Sunak tasked the institute with this work, and it even invited grant applications covering these very topics.
At the time of publication, the government had not replied to a question about who might monitor AI bias issues now that the AISI would no longer do so. Fortune has also asked Hogarth why the AISI no longer focuses on societal risks and the potential for future AI to get out of control.
“There are well-established harms of AI related to bias, discrimination and privacy. One of its central findings is that AI systems can amplify social and political biases, causing concrete harm and discriminatory outcomes. The government appears to be signaling it no longer sees bias and discrimination as a priority concern,” said Michael Birtwistle, an associate director of the Ada Lovelace Institute, a London-based independent AI research institute, in emailed comments.
“A more pared back approach from the Government risks leaving a whole range of harms to people and society unaddressed—risks that it has previously committed to tackling through the work of the AI Safety Institute,” Birtwistle added. “It’s unclear if there’s still a plan to meaningfully address them, if not in AISI.”
Update: This article was updated on Feb. 14 to include Birtwistle’s comments, and again to correct the spelling of his name.