Fortune
Paolo Confino

Tom Siebel says there's no need for new AI regulatory agency

C3.ai CEO Tom Siebel (Credit: Bloomberg)

The landmark AI safety bill sitting on California Governor Gavin Newsom’s desk has another detractor in longtime Silicon Valley figure Tom Siebel. 

SB 1047, as the bill is known, is among the most comprehensive, and therefore polarizing, pieces of AI legislation. Its main focus is to hold major AI companies accountable in the event their models cause catastrophic harm, such as mass casualties, the shutdown of critical infrastructure, or the creation of biological or chemical weapons. The bill would apply to AI developers that produce so-called “frontier models,” meaning those that cost at least $100 million to develop. 

Another key provision is the establishment of a new regulatory body, the Board of Frontier Models, that would oversee these AI models. Setting up such a group is unnecessary, according to Siebel, who is CEO of C3.ai. 

“This is just whacked,” he told Fortune.

Before C3.ai (which trades under the stock ticker $AI), Siebel founded and helmed Siebel Systems, a pioneer in CRM software, which he eventually sold to Oracle for $5.8 billion in 2005. (Disclosure: Alan Murray, the former CEO of Fortune Media, is on the board of C3.ai.)

Other provisions in the bill would create reporting standards for AI developers, requiring that they demonstrate their models’ safety. Firms would also be legally required to include a “kill switch” in their models.

In the U.S., at least five states have passed AI safety laws. California has passed dozens of AI bills, five of which were signed into law this week alone. Other countries have also raced to regulate AI: last summer China published a series of preliminary regulations for generative AI, and in March the EU, long at the forefront of tech regulation, passed an extensive AI law. 

Siebel, who also criticized the EU’s law, said California’s version risked stifling innovation. “We're going to criminalize science,” he said. 

AI models are too complex for ‘government bureaucrats’

A new regulatory agency would slow down AI research because developers would have to submit their models for review and keep detailed logs of all their training and testing procedures, according to Siebel. 

“How long is it going to take this board of people to evaluate an AI model to determine that it's going to be safe?” Siebel said. “It's going to take approximately forever.”

A spokesperson for California State Senator Scott Wiener, SB 1047’s sponsor, clarified that the bill would not require developers to have their models approved by the board or any other regulatory body.

“It simply requires that developers self-report on their actions to comply with this bill to the Attorney General,” said Erik Mebust, communications director for Wiener. “The role of the Board is to approve guidance, regulations for third party auditors, and changes to the covered model threshold.”

The complexity of AI models, which are not fully understood even by the researchers and scientists who created them, would make overseeing them too tall a task for a newly established regulatory body, Siebel says. 

“The idea that we're going to have these agencies who are going to look at these algorithms and ensure that they're safe, I mean there's no way,” Siebel said. “The reality is, and I know that a lot of people don't want to admit this, but when you get into deep learning, when you get into neural networks, when you get into generative AI, the fact is, we don't know how they work.” 

A number of AI experts in both academia and the business world have acknowledged that certain aspects of AI models remain unknown. In an interview with 60 Minutes last April, Google CEO Sundar Pichai described certain parts of AI models as a “black box” that experts in the field didn’t “fully understand.”

The Board of Frontier Models established in California’s bill would consist of experts in AI and cybersecurity as well as academic researchers. Siebel had little faith that a government agency would be suited to overseeing AI. 

“If the person who developed this thing—experienced PhD-level data scientists out of the finest universities on earth—can not figure out how it could work,” Siebel said of AI models, “how is this government bureaucrat going to figure out how it works? It's impossible. They're inexplicable.”

Laws are enough to regulate AI safety

Instead of establishing the board, or any other dedicated AI regulator, the government should rely on new legislation enforced by the existing court system and the Department of Justice, according to Siebel. The government should pass laws making it illegal to publish AI models that facilitate crimes, cause large-scale human health hazards, interfere in democratic processes, or collect personal information about users, Siebel said. 

“We don't need new agencies,” Siebel said. “We have a system of jurisprudence in the Western world, whether it's based on French law or British law, that is well established. Pass some laws.”

Supporters and critics of SB 1047 don’t fall neatly along political lines. Opponents of the bill include Marc Andreessen and Ben Horowitz, top VCs and avowed supporters of former President Donald Trump, as well as former Speaker of the House Nancy Pelosi, whose congressional district includes parts of Silicon Valley. On the other side of the argument is an equally eclectic group: AI pioneers such as Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, along with Tesla CEO Elon Musk, all of whom have warned of the technology’s great risks. 

“For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public,” Musk wrote on X in August. 

Siebel, too, was not blind to the dangers of AI. It “can be used for enormous deleterious effect. Hard stop,” he said. 

Newsom, the man who will decide the ultimate fate of the bill, has remained rather tight-lipped, breaking his silence only earlier this week, during an appearance at Salesforce’s Dreamforce conference, to say he was concerned about the bill’s possible “chilling effect” on AI research. 

When asked which portions of the bill might have a chilling effect, and for a response to Siebel’s comments, Alex Stack, a spokesperson for Newsom, replied, “This measure will be evaluated on its merits.” Stack did not respond to a follow-up question about which merits were being evaluated. 

Newsom has until Sept. 30 to sign the bill into law.

Updated Sept. 20 to include comments in the 12th and 13th paragraphs from state Sen. Wiener's office.
