President Joe Biden met with a group of artificial intelligence experts June 21 to discuss what risk management looks like when it comes to AI. Just a few days before, the European Parliament approved its AI Act, the world's first piece of risk-based AI regulation. And, albeit on a slightly smaller scale, New York City's regulations on the use of AI in making hiring decisions are set to take effect soon.
Despite these early forms of regulation, and despite his regular calls for more oversight, Sam Altman, the CEO of OpenAI, is still gearing up to make ChatGPT more powerful.
Speaking with TIME about what ChatGPT updates may be coming within the next six months to a year, Altman promised a lot of improvements.
"We’ll get images and audio and video in there at some point, and the models will get smarter," he said.
The road to AI with human-adjacent intelligence, as Professor John Licato told TheStreet, is paved with image, video and auditory processing. Human intelligence is multi-faceted; right now, ChatGPT is a large language model (LLM). But as it grows and begins to process video, image and auditory data, it will take on more of the facets that compose human intelligence.
And that, according to AI expert Gary Marcus, is dangerous, though not necessarily because of existential risk. Marcus' fear is less about hypothetical threats and more about current harms, and the ways in which improvements to these models could exacerbate them.
Citing risks of misinformation and enhanced online fraud, in addition to job loss, Marcus said, "We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them."
Proper regulation, though, Marcus has said, could heavily curtail these risks.
Where the EU's AI regulation focuses on risk, splitting up potential AI models into four classifications -- unacceptable risk, high risk, limited risk and minimal risk -- New York City's pending piece of AI legislation is focused on one significant outcome of the technology: jobs.
Last month, the city added specific rules to a 2021 law that, starting in July, will require any company using AI to make hiring decisions to notify candidates that automated tools are being used and to undergo annual bias audits.
Though the law only applies to workers in the city, labor experts, according to the New York Times, anticipate that this will have a broader influence.
Still, the law's definition of these automated hiring tools is narrow, something that could, according to the Times, affect how the law is enforced.
In the midst of these early stages of regulation, Altman -- who has been publicly calling for government oversight for weeks -- privately lobbied the EU to water down its AI Act, according to TIME. OpenAI asked regulators to change the wording of the act so that general-purpose AI systems, like ChatGPT, would not be considered high risk, a classification that would require much more stringent oversight.
"By itself, GPT-3 is not a high-risk system but possesses capabilities that can potentially be employed in high-risk use cases," the document reads.
"The big tech companies' preferred plan boils down to ‘trust us,'" Marcus said at the Senate oversight hearing on AI last month. "The current systems are not transparent, they do not protect our privacy and they continue to perpetuate bias. Even their makers don't entirely understand how they work. Most of all, we cannot remotely guarantee that they're safe."