The Street
Ian Krietzberg

AI Companies Beg For Regulation

AI is getting so good that apparently even politicians are worried about their jobs.

Senator Richard Blumenthal, D-Conn., opened Tuesday's Senate hearing on AI oversight by playing an opening statement that was written by ChatGPT and read aloud by voice-cloning AI. The result, according to Sen. Blumenthal, was indistinguishable from what he himself would have said.

The three-hour hearing, which featured testimony from OpenAI CEO Sam Altman, AI expert Professor Gary Marcus and IBM vice president Christina Montgomery, centered on finding a balance between supporting innovation and preventing harm.

While Altman explained how artificial intelligence could help humanity address some of its most significant challenges, such as curing cancer or mitigating climate change, he also openly and repeatedly acknowledged the myriad threats that generative AI poses today, and the greater threats it could pose in the future.

Pleading for Regulation of AI 

"We think that regulatory intervention by governments will be critical to mitigate the risks ofincreasingly powerful models," Altman said. 

A significant portion of the hearing -- driven by senators determined not to repeat the mistakes they made with social media -- focused on what this 'regulatory intervention' would look like.

Altman listed three areas of regulation that he would impose on his own company and industry if given the opportunity. The first -- one he returned to multiple times throughout the hearing -- is the establishment of a new oversight agency that would issue licenses to companies attempting to develop AI models. Crucially, the agency would have the power to revoke those licenses if companies are found to be acting irresponsibly.

Second, in conjunction with this agency, Altman would institute a set of safety standards that a model would have to meet before being released to the public. He cited one such test that OpenAI currently employs: checking whether a model can self-replicate.

Third, Altman would have independent experts audit every AI model to ensure it complies with those safety standards.

Marcus agreed that this combination of licensing, an oversight agency and safety standards could form the basis of effective regulation.

"The big tech companies' preferred plan boils down to ‘trust us,'" Marcus said. "The current systems are not transparent, they do not protect our privacy and they continue to perpetuate bias. Even their makers don't entirely understand how they work. Most of all, we cannot remotely guarantee that they're safe."

An industry coming before the government and voluntarily asking to be regulated is something Sen. Peter Welch, D-Vt., said he had never seen before.

"I can’t remember when we’ve had companies come before us and plead with us to regulate them," he said. "‘Stop me before I innovate again.’”

'History is not a guarantee of the future.'

Beyond regulation, much of the hearing touched on the impact AI has had, and will continue to have, on jobs.

Altman said that the full scope of the impact AI could have on the labor market is difficult to predict.

"GPT-4 will, I think, entirely automate away some jobs, and it will create new ones that we believe will be much better," Altman said. "It’s important to think of GPT as a tool, not a creature. And GPT-4 is good at doing tasks, not jobs."

The unpredictability of these increasingly powerful models, however, has both Altman and Marcus deeply concerned.

"On jobs, history is not a guarantee of the future," Marcus said. 

Acknowledging that every technological revolution in the past -- from cars to assembly lines to the printing press -- has ultimately created new jobs, Marcus said that the AI revolution is going to be different.

"I think in the long run, artificial general intelligence really will replace a large fraction of human jobs. What we have right now is just a small sampling of the AI that we will build," Marcus said. "The real question is what timescale. When we get to AGI, let’s say it's 50 years, that’s going to have profound effects on labor."

Altman agreed that, once AGI is achieved, the jobs landscape will change and shrink. He noted, however, that his greatest fear -- and the very reason he testified -- is the damage this technology could cause to people and to society.

"If this technology goes wrong, it could go quite wrong," he said. 

Altman's biggest concern, though, is not the risk posed by the AI models currently in use. He wants a regulatory system in place to get a handle on AGI -- artificial general intelligence, hypothetical models that match human intelligence, science fiction made real -- before it arrives.

The problem is that no one really knows when AGI might become a genuine possibility. 

"These systems are almost like counterfeit people," Marcus said, "and we don’t really understand what the consequences of that are." 
