The Street
Ian Krietzberg

ChatGPT Creators Propose New Rules to Keep the Tech Safe for Users

Despite the mounting threats it poses, artificial intelligence shows no signs of slowing down. The capabilities of generative AI are increasing almost daily, and as the software becomes more powerful, so do the risks, from simple misinformation to job loss.

In light of this, OpenAI CEO Sam Altman has been talking about regulation for some time. He appeared before a Senate hearing on AI oversight last week, testifying about the risks posed by generative AI and artificial general intelligence (AGI) and offering some tentative proposals for regulation.

In a Monday post co-authored by Altman, the Microsoft-backed software company solidified some of these ideas, laying out an initial groundwork for the "governance of superintelligence." 

DON'T MISS: AI Companies Beg For Regulation

"It’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations," the post reads. "Superintelligence will be more powerful than other technologies humanity has had to contend with in the past."

"Given the possibility of existential risk, we can’t just be reactive."

Altman and OpenAI, echoing a sentiment expressed during last week's hearing, are less concerned about current AI models than about the exponential rate at which more powerful AI is being developed, where that trajectory could lead and what damage it could wreak.

Hence 'superintelligence.'

Governance of Superintelligence

Altman laid out several areas of regulation designed to keep humanity on top of AI, rather than the other way around. 

The first involves coordination among the leading AI developers to ensure the safe and smooth deployment of powerful AI models into society. Altman suggested that this coordination could be achieved through government-led projects or through an oversight body that would allow developers to agree on an annual limit to growth in AI capability.

The second centers on the creation of an international oversight agency that would serve as a final authority for developers building AI above a "certain capability."

For efforts above that capability, this proposed international agency would "inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say."

Companies and projects developing AI below that specified threshold would not be subject to these regulations, which include licensing and audits.

Altman reiterated that the risk posed by the current models feels "commensurate with other Internet technologies," adding that superintelligence systems -- which will come at some indeterminate point in the future -- "will have power beyond any technology yet created."

Why Make AI at All?

Considering the dangers (OpenAI's post repeats the phrase "existential risk" more than once), questioning whether this technology should exist at all is a valid response, though OpenAI seems to have an answer to that as well.

OpenAI reiterated its belief that generative AI can lead to huge improvements across society.

"The economic growth and increase in quality of life will be astonishing," the post reads. 

Its second answer to the question of 'why?' is that it is essentially too late to stop. Even if some developers halted their work, it would be impossible to stop every country from pursuing superintelligence, which could give bad actors a dangerous leg up.

"Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work," the post says. 

"So we have to get it right."

