Fortune
Sharon Goldman

It's AI's "Sharks vs. Jets"—welcome to the fight over California's AI safety bill

State Senator Scott Wiener (Credit: Bloomberg)

A California state bill has emerged as a flashpoint between those who think AI should be regulated to ensure its safety and those who see regulation as potentially stifling innovation. The bill, which heads to its final vote in August, is sparking fiery debate and frantic pushback among leaders from across the AI industry—even from some companies and AI leaders who had previously called for the sector to be regulated.

The legislation, California Senate Bill 1047, has taken on added significance as efforts to regulate AI at the federal level have proved elusive in a presidential election year. It aims to put guardrails on the development and use of the most powerful AI models by mandating that developers comply with various safety requirements and report safety incidents. 

The amped-up discourse and lobbying over the California bill, which passed the state Senate in May by a vote of 32-1, has reached a crescendo over the past few weeks. The state senator who introduced the bill, Scott Wiener, recently told Fortune that he likens the fight, which has pitted AI safety experts against some of tech's top venture capitalists, to the "Jets vs. Sharks"—Silicon Valley meets West Side Story.

“I did not appreciate how toxic the division is,” he said, a few days after releasing a public letter in response to “inaccurate, inflammatory statements” by startup incubator Y Combinator and venture capital firm a16z about the legislation. The letter came a week after a16z released its own open letter saying the bill would “stifle open-source AI development and have a downstream chilling effect not only on AI investment and expansion, but on the small business entrepreneurship that makes California what it is today.” 

There is certainly plenty of quarreling, arguing, and snarky meme-making on social media about SB-1047, whose full title is the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. And at first glance, the debate might look like a popcorn-eating, GIF-worthy clash between AI ‘doomers’—pessimists pushing for guardrails against AI’s alleged ‘existential’ risk to humanity—and AI ‘accelerationists,’ who favor a no-holds-barred rush to AI development because they believe the technology’s benefits will vastly outweigh any harms it causes. 

But Wiener’s framing—of a gang war between two rival factions jockeying over turf—belies the seriousness of the issues beneath the political posturing of both sides. Many consider AI regulation essential not only to manage known risks associated with AI—from bias and privacy violations to job displacement—but also to promote ethical standards and foster public trust. On the other hand, there are those who worry about regulatory capture—that regulation will end up advancing the interests of a select few AI model developers, like OpenAI, Google, Microsoft, Anthropic, and Meta, at the expense of wider competition or the true interests of the public. Many were suspicious when OpenAI CEO Sam Altman, for example, famously implored Congress for AI regulation at a hearing in May 2023. Yet Congress, which held numerous hearings on AI regulation last year, has mostly given up on taking action until after the 2024 elections. 

SB-1047 has, up until now, moved swiftly towards becoming law. Its crafters focused on what they considered a fairly simple, narrow target for the legislation. Only companies that spend more than $100 million and use a specific, high level of computing power to train the largest and most sophisticated AI models—like OpenAI’s in-progress GPT-5—would be required to comply with safety testing and monitoring to prevent the misuse of ‘dangerous capabilities.’ These capabilities would include creating weapons of mass destruction or using AI to launch cyberattacks on critical infrastructure.

The bill’s supporters include AI luminaries Geoffrey Hinton and Yoshua Bengio and nonprofits like Encode Justice, a youth movement for ‘safe and equitable AI’. Another proponent is X.ai advisor Dan Hendrycks, whose nonprofit, the Center for AI Safety, is funded by Open Philanthropy—well-known for its ties to the controversial Effective Altruism (EA) movement that is strongly focused on the ‘existential risk’ of AI to humanity.

In a recent tweet, Y Combinator CEO Garry Tan took a dismissive swipe at EA. Responding to a list of EA-affiliated organizations supporting SB-1047, including those funded by Skype cofounder Jaan Tallinn, he wrote, “EA just doing EA things.”  

Besides a16z and Y Combinator, SB-1047’s critics include a wide swath of Big Tech, venture capitalists, startups, and open source organizations. AI luminaries including Google Brain founder Andrew Ng, Stanford professor Fei-Fei Li, and Meta chief scientist Yann LeCun have also come out against it, saying the bill is anything but simple or narrow. Instead, opponents insist the bill’s language is too vague. They say that the bill's focus on the AI models themselves, rather than how they are used, has created uncertainty about compliance and left developers fearful of being legally liable for how models are deployed or modified by customers. They also maintain that the bill could consolidate power over AI in the hands of a few deep-pocketed tech behemoths, stifle the efforts of small startups and open source developers, and let China take the lead in AI development. 

“There are sensible proposals for AI regulations,” Ng told Fortune. “Unfortunately SB-1047 is not one of them.” 

Wiener and his fellow bill supporters say that is nonsense: “This is a light touch, basic safety bill,” Wiener said. Sunny Madra, VP of political affairs at Encode Justice, a co-sponsor of the bill, said he didn’t expect the opposition to mount such a massive counter-offensive. “We really try to focus on what we think are super-reasonable issues,” he said. 

That has not reduced the resistance to the bill, which says developers cannot release models covered by the bill if there is an “unreasonable risk” of “critical harm.” It also requires developers to comply with annual model audits and submit a certification of compliance to a new division within the state's Government Operations Agency, “under penalty of perjury.” 

“I don’t think anyone really wants a small unelected board implementing vague safety standards on a whim,” said Daniel Jeffries, CEO of AI startup Kentaurus AI. “To me, the practical legislation is on the use cases, on security,” he explained. “Let’s have the conversation about autonomous weapons, or self-driving cars, or using the tech to clone your mom’s voice to scam you out of five grand.”

But Yacine Jernite, a researcher at open source community platform Hugging Face, took a different tack—pointing out that the intent of SB-1047 to make AI developers more accountable is “definitely in line with the positions we've expressed on regulatory proposals in the past.” However, he added that the way the bill is written misunderstands the technology ecosystem. For example, the models affected by the bill may be trained not only by large companies that want to integrate them into their products, but also by public or philanthropic organizations, or by coalitions of researchers who have come together to train models.

“While those models have less public visibility than the ones supporting household name AI systems, they play an indispensable role in the scientific understanding and informed regulation of the technology,” Jernite said. 

While Hendrycks did not respond to a Fortune request for comment, he recently insisted to Bloomberg that venture capitalists would likely be against the bill “irrespective of its content,” and that the bill focuses on national security, particularly protecting critical infrastructure. Most companies are already doing safety testing, he said, to comply with President Biden’s AI executive order, signed in October 2023. “This is just making it so that it’s law, as opposed to an executive order that’s rescindable by some future administration,” he said. 

Ng maintained that the tech community is “doing a lot of work to try to understand what might be harmful applications” and welcomed government involvement and funding in such efforts. “For example, I'd love to see regulations to put a stop to non-consensual deep fake porn,” he said. 

But Wiener explained that there are other AI bills advancing in the California legislature that focus on short-term, immediate AI risks around issues such as algorithmic discrimination, deepfakes, and AI revenge porn. “You can’t do everything in one bill,” he said, and emphasized his continued willingness to be collaborative. “We've made significant changes in response to constructive feedback, and we continue to invite that feedback,” he said. 

Jeffries, however, said that he has read every version of the bill along the way and that while changes have been made, "the essence of the bill remains the same," including the requirement to sign a certification of compliance under penalty of perjury. "Rules can change overnight,” he said. “They could move the threshold up or down. And the standards are written in a frustratingly vague way.” 

In his letter responding to a16z and Y Combinator, Wiener insisted that California's Attorney General could only file a suit if “a developer of a covered model (more than $100 million to train) fails to perform a safety evaluation or take steps to mitigate catastrophic risk and if a catastrophe then occurs.” 

Governor Newsom has not indicated whether he will sign the bill if it passes the state Assembly. "We are doing outreach to the administration, as we do with any big or complicated bill," Wiener said. "We obviously would love for the governor to provide feedback."

Even some AI companies with a reputation for advocating for AI safety, such as Anthropic, have stepped back from supporting the bill, perhaps fearing that it will constrain their own efforts to develop more advanced AI models. Anthropic CEO Dario Amodei said during an interview on the In Good Company podcast in June that the regulation envisioned in SB-1047 is too early—that industry consensus around a “responsible scaling policy” should come first.

Ng told Fortune that he has spoken to Sen. Wiener and had provided feedback. “He didn't say much during our call and I don't feel the changes in the bill have been responsive to the concerns that many of us in the tech world share,” he said. 

Meanwhile, Wiener insists his door remains open to discuss potential changes to SB-1047. “There are some excellent lobbyists on the bill that folks in the industry have hired...who try to engage constructively,” he said, as well as “industry folks that have not just taken a hard no position and been constructive in suggesting amendments, some of which we incorporated, some of which we don't agree with.” The process, he emphasized, has “been an actually fairly collaborative process, and I very much appreciate that.” His goal? “To get this right.” 
