The team behind the artificial intelligence (AI) bot ChatGPT say an international watchdog and "strong" public oversight are needed to regulate and protect humanity from "superintelligent" AIs.
"We are likely to eventually need something like an IAEA (International Atomic Energy Agency) ... that can inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security, etc," OpenAI CEO Sam Altman, president Greg Brockman and chief scientist Ilya Sutskever wrote on the company’s website.
They say an AI governing body modelled on the IAEA would reduce the "existential risk" such systems could pose and stop humanity from accidentally creating something dangerously powerful.
Initial ideas for governance of superintelligence, including forming an international oversight organization for future AI systems much more capable than any today: https://t.co/9hJ9n2BZo7
— OpenAI (@OpenAI) May 22, 2023
Fast-moving tech
OpenAI’s executives are pushing for coordination and cooperation among major AI developers to ensure superintelligence is integrated safely into society.
"It's conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations," they said.
"In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past.

"We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can't just be reactive."
Individual companies should be held to an extremely high standard of acting responsibly, they added.