Fortune
Jeremy Kahn

OpenAI's Sam Altman got off easy on Capitol Hill

OpenAI CEO Sam Altman testifying before a Senate Judiciary Committee subcommittee. (Credit: Win McNamee—Getty Images)

Hello, everyone. All eyes today were on Capitol Hill, where OpenAI CEO Sam Altman testified before a subcommittee of the Senate Judiciary Committee, which is holding hearings on possible regulation of A.I.

Altman, in his prepared remarks, told the senators that “the regulation of A.I. is essential.” He came out in favor of “appropriate safety requirements, including internal and external testing prior to release,” for A.I. software and some kind of licensing and registration regime for A.I. systems beyond a certain capability. But at the same time, Altman called for a governance framework that is “flexible enough to adapt to new technological developments” and said that regulation should balance “incentivizing safety while ensuring that people are able to access the technology’s benefits.”

Overall, Altman got off easy. Senator Richard Blumenthal (D-Conn.), who called the hearing and chairs the subcommittee, seemed to do so in a spirit of honest inquiry, and Senator Josh Hawley (R-Mo.), the subcommittee’s ranking minority member, said he was there mostly “to try to get my head around what these models can do.” The most pressing questioning of Altman came from Senator Marsha Blackburn (R-Tenn.), who was very concerned about generative A.I.’s copyright implications and its impact on the Nashville-based country music scene. Besides copyright infringement, there are very real harms from generative A.I. already evident—including misinformation and election interference, fraud, bias, defamation, exploitative data-gathering practices, data privacy violations, emerging evidence of wage depression in some fields, and environmental impacts—and it was heartening to see that the senators at least seemed to be aware of many of these issues.

But the spirit of the questioning was, by and large, collegial. And there is plenty the senators could have grilled Altman on. For instance, how does OpenAI justify trying to learn about A.I. safety and risks in large part by releasing products into the world and then seeing how people use and abuse them? We don't let drug companies or car companies do that. Should we let A.I. companies? Also, why has OpenAI said so little about GPT-4, including, critically, how big the model is and what data it was trained on? Would OpenAI be willing to divulge that information to a government agency? And why did OpenAI allow Microsoft to use a version of GPT-4 in the creation of its Bing Chat feature that it knew was not as safe as the version it had used for ChatGPT? The list goes on.

Altman’s advocacy for some rules is not surprising. Technology companies know that regulation is likely coming, and they are trying their best to shape it to their advantage. Altman explicitly called for licensing of generative A.I. models in his testimony, and my suspicion is that the other companies selling access to proprietary A.I. models, such as Anthropic, Microsoft, and Google, will advocate some kind of licensing regime as well.

I think they will also push for a system that holds the companies building generative A.I. responsible for putting reasonable safeguards around the technology and for taking steps to prevent its dangerous uses and misuses. Christina Montgomery, the chief privacy and trust officer at IBM, who also testified at the hearing, said that IBM thought a “reasonable care” standard should apply to the creators of generative A.I. She also advocated a sector-specific, risk-based approach to A.I. regulation that sounded very similar to the way the European Union has framed its new A.I. Act. Gary Marcus, the New York University professor emeritus of cognitive psychology who has emerged as a leading skeptic of deep learning approaches to A.I. and has been sounding the alarm about the dangers of generative A.I., told the senators that he too favored a licensing model.

But, of course, the reason tech companies working on proprietary models want such a system—and it was disappointing not to see more discussion of this in the Senate hearing—is not altruism. Among the biggest competitive threats these companies face is open-source A.I. software. In this rapidly moving field, no one is moving faster than the open-source community. It has proved remarkably innovative and agile at matching the performance and capabilities of the proprietary models, but with A.I. models that are much smaller, easier and less expensive to train, and free to download. Open-source developers would all struggle with a licensing regime because it would be difficult for them to put in place robust limits and controls on how people use the models they've created—and open source, by its very nature, cannot prevent people from modifying code and removing any safeguards that have been put in place.

Altman and the other proprietary model purveyors know it. Altman even said in response to questions from Vermont Democratic Senator Peter Welch that he realized that there was a danger of regulatory capture—that large, wealthy companies would design rules that only they could meet—and said it was not a desirable outcome. Altman also said that maybe not all companies should be subject to the licensing regime he advocates. “We don’t want to stop our open-source community,” he said. But then he drew the line at a set of capabilities—such as a chatbot that can influence or shape someone’s political views—that are already within the reach of open-source alternatives to ChatGPT. He also mentioned the design of novel chemicals or pathogens, although that too is something for which some open-source models exist.

If the U.S. wants to see how difficult it is going to be to balance a desire to avoid the harms of generative A.I. with protecting a vibrant open-source community, it only has to look at Europe. There, the new A.I. Act, which is nearing finalization, has sparked belated alarm in the past two weeks among open-source companies over the law’s provisions that would require those creating foundation A.I. models to monitor and impose controls on their use. LAION (Large-scale Artificial Intelligence Open Network), a Germany-based research organization that has created some of the datasets used to train foundation models, particularly the open-source text-to-image generation models, wrote a letter to the European Parliament, signed by many prominent European A.I. researchers, including Juergen Schmidhuber, that called for the A.I. Act to exempt open-source models and those built for research purposes from the law’s requirements.

In this context, it was intriguing to read a report in tech publication The Information earlier this week that, citing an anonymous source, said OpenAI was preparing to release an open-source generative A.I. model of its own. If that’s true, I’m not quite sure what OpenAI’s strategy is. Right now, its business model is based on selling access to proprietary models through its API. Perhaps Altman is hedging his bets—hoping that most users will prefer accessing its largest models through its API, but wanting to have a hand in the open-source world too in case those models ultimately prove more popular with business customers.

The Judiciary Committee plans more hearings on A.I. in the near future. Let’s hope they start to ask some of the gurus of the open-source world—Clem Delangue from Hugging Face, Emad Mostaque from Stability AI, Harrison Chase from LangChain, and many of the academic researchers working in the area—to testify too. As I said in last week’s newsletter, it will be impossible to regulate A.I. effectively, and to deal with the potential risks from generative A.I., without figuring out what to do about the open-source models.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
