
OpenAI CEO Sam Altman has begged for AI regulation many times over the years, but times have changed. Now, the company wants the Trump administration to protect it from state-level AI bills in the U.S.
This was one of several suggestions OpenAI made Thursday in proposals directed at Trump administration advisors tasked with developing the U.S.’s “AI Action Plan.” President Donald Trump ordered officials to formulate such a plan earlier this year, and the consultation period closes Saturday.
OpenAI’s proposals come with frequent invocations of China and the supposed threat posed by its AI industry, in particular the headline-grabbing DeepSeek. If OpenAI doesn’t get its way, the company suggested, China will take the lead.
“We propose a holistic approach that enables voluntary partnership between the federal government and the private sector, and neutralizes potential [Chinese] benefit from American AI companies having to comply with overly burdensome state laws,” OpenAI global affairs chief Chris Lehane wrote.
OpenAI’s concerns about state AI bills are not misplaced. The country has so far failed to pass any federal AI legislation, so states have stepped in with numerous efforts of their own. As of last week there were a whopping 741 state AI bills pending, according to monitoring by the R Street Institute, a D.C. think tank.
Many overlap (Texas and California both have outstanding bills on the topic of AI-powered discrimination, for example) and the industry rightly fears an overly complex patchwork of laws that will make compliance difficult.
OpenAI’s idea is for the government to preempt these state laws. Its proposed voluntary framework would not set rules, but rather open “a single, efficient ‘front door’ to the federal government that would coordinate expertise across the entire national security and economic competitiveness communities.” OpenAI said the effort could be overseen by a “reimagined” AI Safety Institute—an organization the Biden administration established but that the Trump administration has gutted through mass firings.
Displaying no small degree of chutzpah, OpenAI proposed that it and its peers could be incentivized to sign up by “creating glide paths for them to contract with the government, including on national security projects.”
Again, OpenAI invoked the specter of China, saying that U.S. government adoption of AI would set “an example of governments using AI to keep their people safe, prosperous, and free” that would present a ready counterpoint to Beijing’s use of AI to maintain state control and enforce Communist party doctrine.
Altman’s outfit cited China yet again when calling for the federal government to create a clear copyright exemption for AI training. Currently, training AI models on copyrighted data is a hotly contested legal question that is the subject of numerous lawsuits by authors, music labels, and other rightsholders. These cases remain mostly unresolved, although Thomson Reuters scored a possibly precedent-setting win against legal AI startup Ross Intelligence last month.
OpenAI argued that China was “unlikely to respect” the intellectual property regimes in the U.S. or the EU—where the new AI Act allows rightsholders to opt out of their property being used for AI training— “but already likely has access to all the same data, putting American AI labs at a comparative disadvantage while gaining little in the way of protections for the original IP creators.”
The company also called for tweaks to the so-called AI diffusion rules that the Biden administration introduced in its dying days, limiting the number of advanced U.S. AI chips that most countries can import. As it stands, only 18 close U.S. allies can import the chips freely, with traditional allies such as Poland and Israel being subject to export caps.
OpenAI, which is reportedly finalizing its own AI chips to reduce its dependence on Nvidia, said it would be a good idea to admit more countries to the uncapped top tier of the U.S. framework, as long as they “commit to democratic AI principles by deploying AI systems in ways that promote more freedoms for their citizens.”