Fortune
David Meyer

AI safety advocates slam Trump administration’s reported targeting of standards agency

U.S. President Donald Trump delivers remarks after signing an executive order on expanding access to IVF at his Mar-a-Lago resort on February 18, 2025 in Palm Beach, Florida. (Credit: Joe Raedle—Getty Images)

When he returned to the presidency of the United States, one of the first things Donald Trump did was to rescind President Joe Biden’s executive order on AI safety. But that move did not undo Biden’s creation of the U.S. AI Safety Institute.

Now it seems the institute is effectively done for anyway. On Wednesday, Axios and Bloomberg both reported that the Trump administration is about to fire as many as 500 staffers at the National Institute of Standards and Technology (NIST), whose roughly 3,400 employees include the AI Safety Institute (AISI) and its staff.

As is the way with many of the cuts currently being undertaken by the administration and its Elon Musk–led DOGE “efficiency” team, the targets are workers who are still on probation—typically a one-year period after their start dates at U.S. agencies.

For AISI—a body tasked with developing standards and guidelines for safe AI and evaluating the security of new models—this may prove fatal, as it is a relatively new organization where most of the staffers are still on probation. (The same applies to the part of NIST that has been administering the Biden-era CHIPS Act funding program for semiconductor manufacturers that bring production onshore in the U.S.)

AISI works with most of the country’s big AI companies to develop its guidelines and standards. OpenAI, Anthropic, Google, Apple, and Meta all signed up to help around a year ago, though Musk’s AI-focused businesses—xAI and Tesla—did not.

“Eliminating the AI Safety Institute would do nothing to make U.S. AI companies more competitive—and would undermine efforts to make sure that AI tools are safe and effective,” said Alexandra Reeve Givens, CEO of the Center for Democracy and Technology. “The AI Safety Institute was designed to play a basic, commonsense role coordinating the kind of work that needs to happen for the entire industry to succeed.”

Neither NIST nor the Commerce Department, of which it is part, had responded to Fortune’s request for comment at the time of publication.

‘A gift to China’

“These cuts, if confirmed, would severely impact the government’s capacity to research and address critical AI safety concerns at a time when such expertise is more vital than ever,” said Jason Green-Lowe, the executive director of the Center for AI Policy.

Green-Lowe said the move would deprive the country “of the eyes and ears we need to identify when AI is likely to trigger nuclear and biological risks,” adding: “The savings would be trivial, but the cost to our national security would be immense.”

“In a time when competition from China is increasing, the U.S. should be seeking to make sure that NIST has all the resources it needs to drive responsible AI innovation,” said Brad Carson, president of the nonprofit Americans for Responsible Innovation. “Cutting costs at the agency responsible for maintaining American AI leadership is a gift to China and will only hurt the U.S. in the long run.” 

If AISI really is about to be gutted, it would be the latest step in a wider shift away from the focus on AI safety.

Both the U.S. body and its British counterpart, also called AISI, were created in late 2023 as governments fretted about issues like AI perpetuating biases against minorities, or potentially causing existential threats to civilization. The U.S.’s version was announced at the U.K.’s AI Safety Summit, which was designed to be the first of many.

But with AI use becoming more widespread, times have changed. When the most recent iteration of the event was held in Paris earlier this month, it took place under the banner of the AI Action Summit. U.S. Vice President JD Vance used the occasion to call for less AI regulation, and his country refused to sign the summit’s declaration calling for responsible AI development.

Days later, the U.K. (which also didn’t sign the Paris declaration) recast its AI Safety Institute as the AI Security Institute, explicitly saying it would no longer focus on societal issues such as bias and the AI-fueled spread of disinformation.

Lutnick’s choice

Gutting the U.S. AISI may be in line with this innovation-first trend and President Trump’s antipathy toward his predecessor’s policies, but it doesn’t exactly fit with the stated stance of new U.S. Commerce Secretary Howard Lutnick, who was confirmed on Wednesday.

During his confirmation hearing three weeks ago, Lutnick was full of praise for NIST and its AI work in particular. “NIST has some of the greatest scientists in the world, and they understand AI technology,” he said. “This is an essential hub of knowledge of the American government, which I’m really excited to oversee.”

Lutnick also said he supported the U.S.’s standards-based approach, which has proved successful in the realm of cybersecurity. “We should try to have a light-touch model like that in AI. Set those standards so the world heeds our standards and goes with our standards,” he said. “It will be very important for America and something that I’m going to try to drive.”

Green-Lowe said in reaction to the reported firings that Lutnick should “act quickly to learn about the vital work being done by his department and to publicly explain how he will protect that work from the unintended consequences of broad budget cuts.”

“Efficiency is one thing; thoughtlessly crippling the only office protecting us against catastrophe is another,” he said.

Whatever happens, neither the U.S. AISI nor NIST as a whole currently has permanent leadership. NIST director Laurie Locascio stepped down at the start of the year to head up the American National Standards Institute (ANSI), and AISI director Elizabeth Kelly announced her departure two weeks ago.

“I can confidently say that there is no other group with the technical skill or subject matter expertise to match AISI across the entire U.S. government, and I look forward to seeing all the incredible work they will accomplish in the months and years ahead to advance the science of AI safety,” Kelly said in a LinkedIn post at the time.
