Fortune
Jenn Brice

Landmark bill aimed at reducing AI's potentially catastrophic danger would either save lives or stifle innovation

(Credit: David Paul Morris/Bloomberg via Getty Images)

A bill aimed at curbing the risk of artificial intelligence being used for nefarious purposes, such as launching cyberattacks or perfecting biological weapons, is set to be voted on by California state legislators this week.

California Senate Bill 1047, authored by State Sen. Scott Wiener, would be the first of its kind in the U.S. to require AI companies building large-scale models to test them for safety. 

California lawmakers are considering dozens of AI-related bills this session. But Wiener’s proposal, called the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” has captured national attention due to vocal pushback from Silicon Valley, the hotbed for U.S. AI development. Opponents say creating burdensome technical requirements and potential fines in California would effectively stifle the country’s innovation and global competitiveness.

OpenAI is the latest AI developer to voice opposition, arguing in a letter Wednesday that AI regulation should be left to the federal government and claiming that companies will leave California if the proposed legislation passes.

The state assembly is now set to vote on the bill, which Wiener recently amended in response to the tech industry’s criticism. But he says the new language does not concede to all of the issues the industry raised. 

“This is a reasonable, light-touch bill that's not going to in any way impede innovation, but will help us get ahead of the risks that are present with any powerful technology,” Wiener told reporters during a news conference on Monday.

What would the bill do?

The bill would require companies building large AI models, those that cost more than $100 million to train, to limit any significant risks in the system that they find through safety testing. That would include creating a “full shutdown” capability — or a way to pull the plug on a potentially unsafe model in dire circumstances.

Developers would also be required to create a technical plan to address safety risks and to maintain a copy of that plan for as long as the model is available, plus five years. Firms with big AI operations, like Google, Meta, and OpenAI, have already made voluntary commitments to the Biden administration to manage AI risks, but the California legislation would turn those commitments into legal obligations backed by enforcement.

Every year, a third-party auditor would assess whether the company is complying with the law. Additionally, companies would have to document compliance with the law and report any safety incidents to California's Attorney General. The Attorney General's office could bring civil penalties of up to $50,000 for a first violation and up to an additional $100,000 for subsequent violations.

What are the criticisms? 

A large part of the tech industry has criticized the proposed bill as too burdensome. Anthropic, a buzzy AI firm that markets itself as safety-focused, argued that an earlier version of the legislation would have created complex legal obligations that would stifle AI innovation, such as allowing the California Attorney General to sue for negligence even if no safety disaster had occurred.

OpenAI suggested that companies will leave California if the bill passes to avoid its requirements. It also insisted that AI regulation should be left to Congress to prevent a confusing patchwork of legislation from being enacted across the states.

Wiener dismissed the idea of companies fleeing California as a “tired argument,” noting that the bill’s provisions would still apply to businesses that offer their services to Californians, even if they aren’t headquartered there.

Last week, eight members of U.S. Congress urged Gov. Gavin Newsom to veto SB-1047 due to the obligations it would create for companies that make and use AI. Rep. Nancy Pelosi joined her colleagues in opposition, calling the measure “well-intentioned but ill informed.” (Wiener has been eyeing the Speaker Emerita’s House seat, which could entail a future face-off against her daughter, Christine Pelosi, per Politico.) 

Pelosi and fellow members of Congress side with the “Godmother of AI,” Dr. Fei-Fei Li, a Stanford University computer scientist and former Google researcher. In a recent op-ed, Li said the legislation “will harm our budding AI ecosystem,” specifically smaller developers that are “already at a disadvantage to today’s tech giants.” 

What do supporters say?

The bill has garnered support from various AI startups, Notion co-founder Simon Last, and the “godfathers” of AI, Yoshua Bengio and Geoffrey Hinton. Bengio said the legislation would be “a positive and reasonable step” to make AI safer while encouraging innovation.

The bill’s supporters fear that, without sufficient safety measures, unchecked AI could pose existential threats, such as attacks on critical infrastructure and the creation of nuclear weapons.

Wiener defended his “common-sense, light-touch” legislation, noting that it would only require the biggest AI companies to adopt safety measures. He also touted California’s leadership on U.S. tech policy, casting doubt that Congress would pass any substantive AI legislation in the near future.

“California has repeatedly stepped in to protect our residents and to fill the void left by Congressional inaction,” Wiener responded, noting the lack of federal action on data privacy and social media regulation.

What’s next?

The most recent amendments take into account many of the concerns voiced by the AI industry, Wiener said in his latest statement on the bill. The current version replaces the bill’s original criminal penalty for lying to the government with a civil one. It also drops a proposal for a new state regulatory body that would have overseen AI models.

Anthropic said in a letter to Newsom that the benefits of the legislation as amended likely outweigh possible harms to the AI industry, with the key benefits being transparency with the public about AI safety and a push for companies to invest in risk reduction. But Anthropic is still wary of the potential for overly broad enforcement and expansive reporting requirements.

“We believe it is critical to have some framework for managing frontier AI systems that roughly meets these three requirements,” whether or not that framework is SB-1047, Anthropic CEO Dario Amodei told the governor.

California lawmakers have until August 31, the end of session, to pass the bill. If approved, it would go to Gov. Gavin Newsom for final approval by the end of September. The governor has not indicated whether he plans to sign the legislation.
