The U.K. government wants to make a hard pivot into boosting its
economy and industry with AI, and as part of that, it's repurposing an
institution that it founded a little over a year ago for a very
different purpose. Today the Department for Science, Innovation and
Technology announced that it would be renaming the AI Safety Institute
the "AI Security Institute." (Same initials, and the [1]same URL.) With
that, the body will shift from primarily exploring areas like
existential risk and bias in large language models to a focus on
cybersecurity, specifically "strengthening protections against the
risks AI poses to national security and crime."
Alongside this, the government also announced a new partnership with
Anthropic. No firm services were announced, but the memorandum of
understanding indicates the two will "explore" using Anthropic's AI
assistant Claude in public services, and Anthropic will aim to
contribute to work in scientific research and economic modeling. At the
AI Security Institute, it will provide tools to evaluate AI
capabilities in the context of identifying security risks.
“AI has the potential to transform how governments serve their
citizens,” Anthropic co-founder and CEO Dario Amodei said in a
statement. “We look forward to exploring how Anthropic’s AI assistant
Claude could help UK government agencies enhance public services, with
the goal of discovering new ways to make vital information and services
more efficient and accessible to UK residents.”
Anthropic is the only company being announced today — coinciding with a
week of AI activities in Munich and [2]Paris — but it's not the only
one working with the government. A series of new tools unveiled in
January were all powered by OpenAI. (At the time, Peter Kyle, the
secretary of state for science, innovation and technology, said that
the government planned to work with various foundational AI companies,
and the Anthropic deal bears that out.)
The government’s switch-up of the AI Safety Institute — launched
[3]just over a year ago with a lot of fanfare — to AI Security
shouldn’t come as too much of a surprise.
When the newly installed Labour government announced its [4]AI-heavy
Plan for Change in January, it was notable that the words “safety,”
“harm,” “existential,” and “threat” did not appear at all in the
document.
That was not an oversight. The government’s plan is to kickstart
investment in a more modernized economy, using technology and
specifically AI to do that. It wants to work more closely with Big
Tech, and it also wants to build its own homegrown big techs.
In aid of that, the main messages it’s been promoting have been
development, AI, and more development. Civil servants will have their
own AI assistant called “[5]Humphrey,” and they’re being encouraged to
share data and use AI in other areas to speed up how they work.
Consumers will be [6]getting digital wallets for their government
documents, as well as chatbots.
So have AI safety issues been resolved? Not exactly, but the message
seems to be that they can’t be considered at the expense of progress.
The government claims that, despite the name change, the song will
remain the same.
“The changes I’m announcing today represent the logical next step in
how we approach responsible AI development – helping us to unleash AI
and grow the economy as part of our Plan for Change,” Kyle said in a
statement. “The work of the AI Security Institute won’t change, but
this renewed focus will ensure our citizens – and those of our allies –
are protected from those who would look to use AI against our
institutions, democratic values, and way of life.”
“The Institute’s focus from the start has been on security and we’ve
built a team of scientists focused on evaluating serious risks to the
public,” added Ian Hogarth, who remains the chair of the institute.
“Our new criminal misuse team and deepening partnership with the
national security community mark the next stage of tackling those
risks."
Further afield, priorities definitely appear to have changed around the
importance of "AI safety." The biggest risk the AI Safety Institute in
the U.S. is contemplating right now is that it's going to be
[7]dismantled. U.S. Vice President [8]J.D. Vance telegraphed as much
earlier this week during his speech in Paris.
References
1. https://www.aisi.gov.uk/
2. https://techcrunch.com/2025/02/12/anthropic-ceo-dario-amodei-says-were-in-a-race-to-understand-ai-as-it-becomes-more-powerful/
3. https://techcrunch.com/2023/11/01/politicians-commit-to-collaborate-to-tackle-ai-safety-us-launches-safety-institute/
4. https://techcrunch.com/2025/01/13/uk-throws-its-hat-into-the-ai-fire/
5. https://techcrunch.com/2025/01/20/uk-to-unveil-humphrey-assistant-for-civil-servants-with-other-ai-plans-to-cut-bureaucracy/
6. https://techcrunch.com/2025/01/21/uk-plans-digital-wallet-for-drivers-licenses-and-other-id-plus-a-chatbot-powered-by-openai/
7. https://techcrunch.com/2024/10/22/the-u-s-ai-safety-institute-stands-on-shaky-ground/
8. https://techcrunch.com/2025/02/11/in-paris-jd-vance-skewers-eu-ai-rules-lauds-us-tech-supremacy/
9. https://techcrunch.com/newsletters/