Fortune
Jeremy Kahn

AI industry holds its breath for Nvidia earnings and a vote on California's AI bill

Nvidia CEO Jensen Huang on stage holding one of the company's forthcoming Blackwell Ultra AI chips. (Credit: Annabelle Chih—Bloomberg via Getty Images)

Hello and welcome to Eye on AI.

Anticipation. That’s the title of a great old Carly Simon song. And that's the vibe of today’s newsletter. There’s a lot of hotly awaited AI news this week.

Nvidia to report earnings

Markets will be watching Nvidia’s earnings announcement tomorrow to gauge the health of the AI boom. According to news reports, the company is expected to say that quarterly sales have more than doubled, even though the year-on-year revenue growth rate has slowed. Investors will be particularly keen to learn if rumors are true that the company’s next-generation Blackwell AI chip will be delayed due to supply chain issues, and if so, how lengthy the delays will be and how that will impact Nvidia’s revenue forecasts.

With Nvidia’s stock having largely recovered from late July’s big sell-off and now trading back close to record highs, there could be serious stock market trouble if Nvidia announces significant lags in Blackwell shipments or if there are other negative surprises in the company’s earnings. Because Nvidia is now seen as a bellwether for the entire AI boom—skeptics would say “bubble”—the fallout could spread well beyond Nvidia to other Magnificent Seven stocks and perhaps the wider S&P 500, of which Nvidia now constitutes 6.5% due to its $3.1 trillion market cap.

California's AI bill comes to a vote

The other eagerly anticipated AI news of the week is the fate of California's proposed AI regulation, SB 1047, which is expected to come to a vote in the State Assembly sometime this week. The bill is designed to head off catastrophic risks from the largest, most powerful AI models—those that would cost more than $100 million to train—but it has proved controversial, as my Fortune colleagues Sharon Goldman chronicled last month and Jenn Brice laid out in a short explainer we published yesterday. AI's biggest names have lined up on opposite sides of the debate. AI godfathers Geoff Hinton and Yoshua Bengio support the bill as "a positive and reasonable step," while fellow Turing Award winner Yann LeCun opposes it as likely to stifle innovation, as do AI pioneers Andrew Ng and Fei-Fei Li. Elon Musk has come out in support, while most of Silicon Valley's leading venture capital firms and top AI companies such as OpenAI and Microsoft, as well as Google and Meta, are against it.

Thanks to lobbying by technology companies, the bill that the California Assembly will vote on has already been watered down significantly from its earlier versions. As originally proposed by State Sen. Scott Wiener, the bill would have created a legal duty of care on the part of AI developers to ensure their models do not result in what the bill calls "critical harms"—a term it defines as causing a chemical, biological, or nuclear attack that resulted in mass casualties, autonomously killing a lot of people in some other way, autonomously committing felonies that resulted in $500 million in damage, or carrying out a cyberattack that caused that amount of damage.

Tech companies building AI systems would have been required to institute safety procedures to prevent their models from causing these harms and to prevent anyone from modifying the models after training in ways that could cause these harms. Model developers would also have had to retain the ability to fully shut down a model if it could cause serious problems. A new state agency would have been set up to ensure compliance with the law, and California's attorney general would have been able to sue companies for negligence if the agency determined the correct protocols were not being followed, even before a model was trained and deployed.

The version coming to a vote this week no longer establishes the new state AI regulator and no longer lets the attorney general act in advance of any actual incident. Instead, the attorney general's office will take on much of the compliance-monitoring role that the AI agency was to have performed in the original version. AI developers will have to hire an outside auditing firm to verify compliance, and that firm will submit annual reports to the attorney general's office. But law enforcement can sue AI developers for liability only after a catastrophic incident has occurred.

Still, if SB 1047 passes it will be a watershed moment for AI regulation in the U.S., which has so far lagged the European Union—as well as China—in passing laws governing the training and use of AI. In the absence of Congress passing any AI laws—something that won't happen until well after the next election—the California law may become a de facto national standard due to the presence of so many tech companies in the state.

If nothing else, the debate over the bill has been clarifying. As an article in the liberal political journal The Nation noted, SB 1047 has been a "mask-off moment" for the AI industry. It is ironic—and telling—to see companies such as OpenAI, whose CEO Sam Altman went before Congress and practically begged for AI regulation, or Microsoft, which has proposed that AI model developers institute extensive know-your-customer requirements not dissimilar to those contained in SB 1047, line up to oppose the bill. If we ever thought these companies were sincere when they said publicly that they wanted to be regulated, now we know the truth. We should never have given them the benefit of the doubt.

Revelations such as this are perhaps part of what has disillusioned many of those working on AI safety inside top AI companies. A large portion of the AI safety researchers at OpenAI have departed the company in recent months, according to reporting from my fellow Fortune AI reporter Sharon Goldman.

Whether or not we believe that AI models powerful enough to cause significant, large-scale harm are close at hand, the departure of these researchers should trouble us because of what it may say about how cautious and safety-minded OpenAI and other companies are being about the models they are releasing currently. To date, some of the best methods for limiting near-term risks from AI models—such as their tendency to spew toxic language or to recommend self-harm to users—have come from AI safety researchers thinking about how to control future superpowerful AI.

As for regulation, I'm generally in favor of steps that would ensure AI doesn’t cause significant harm. But I don’t think state-by-state regulation makes much sense. Instead, we urgently need national rules and, probably, a national AI regulator similar to the state-level one originally proposed in the California bill. But we’ll see if we wind up getting one.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news: If you want to learn more about AI and its likely impacts on our companies, our jobs, our society, and even our own personal lives, please consider picking up a copy of my new book, Mastering AI: A Survival Guide to Our Superpowered Future. It's out now in the U.S. from Simon & Schuster, and you can order a copy today here. In the U.K. and Commonwealth countries, you can buy the British edition from Bedford Square Publishers here.
