Fortune
Jenn Brice

a16z partner says he's tired of driving the AI regulation conversation

(Credit: Stuart Isett/Fortune)

After a successful crusade to defeat a California AI safety bill earlier this year, Andreessen Horowitz partner Martin Casado says he’s tired of venture capitalists leading the conversation on AI regulation.

Casado was a particularly forceful voice among a chorus of VCs who opposed SB 1047, a proposal that would have required the makers of large-scale models to meet safety testing and risk mitigation requirements, with the aim of curbing potentially catastrophic consequences of AI, like the escalation of nuclear war. Gov. Gavin Newsom vetoed the bill after it had passed both legislative chambers.

“I’m a venture capitalist, so clearly I’ve got a bias, right? So I should be a voice, but I should not drive the conversation. But in that case I was actually driving the conversation,” he said. Going forward, he’s hopeful that academics and technologists will be the ones to inform policymakers.

Casado was joined at Fortune’s Brainstorm AI conference by Dawn Song, a computer science professor at UC Berkeley and co-director of the Berkeley Center for Responsible, Decentralized Intelligence.

Song is a coauthor of A Path for Science‑ and Evidence‑Based AI Policy, a proposal that responds to the flurry of federal and state bills seeking to rein in AI risks. Fei-Fei Li, the “godmother of AI” and founder of World Labs, also signed on to the proposal.

“There’s a madness about AI regulation,” Song said.

By her count, American lawmakers have weighed 120 federal AI proposals in the past year, plus 600 more across 45 of the 50 states. Casado called California’s SB 1047 the worst of that lot because of its developer liability requirement, which he feared would amount to policing software developers.

“Maybe it’s something worth doing with AI, maybe, but it’s a paradigmatic shift that is divergent from 30 to 40 years of policy,” he said.

The problem, Song noted, is that many of these proposals are rushed, ad hoc measures that fail to account for unintended consequences and missed opportunities in regulation. Her policy proposal calls for a better understanding of AI risks, greater transparency in design and development, better ways to monitor “post-deployment” risks and harms of AI, and methods to manage those risks.

Song is also concerned about fragmentation within the AI community, like the split that played out in the fight over SB 1047.

But looking ahead, there’s uncertainty about what might happen at the federal level: Donald Trump plans to scale back the Biden administration’s AI executive order, while the incoming president’s tech wingman, Elon Musk, supported the California bill. Still, Casado remains “cautiously optimistic.”

“Anybody that thinks that they know what’s going to happen, I don’t think that they really understand the players,” he said.

Read more coverage from Brainstorm AI:

Amazon’s top AI exec says industry concerns that LLMs have ‘hit a wall’ are overblown. Says Jeff Bezos ‘very involved’ in AI efforts

Pitching investors is like the NFL draft says Colin Kaepernick — ‘it only takes one’

Stability AI’s new CEO, hired six months ago, says business growing by ‘triple digits’ and no more debt
