Taipei, Taiwan – Artificial intelligence poses a “risk of extinction” that calls for global action, leading computer scientists and technologists have warned.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” a group of AI experts and other high-profile figures said in a brief statement released by the Center for AI Safety, a San Francisco-based research and advocacy group, on Tuesday.
The signatories include technology experts such as Sam Altman, chief executive of OpenAI, Geoffrey Hinton, known as the “godfather of AI”, and Audrey Tang, Taiwan’s digital minister, as well as other notable figures including the neuroscientist Sam Harris and the musician Grimes.
The warning follows an open letter signed by Elon Musk and other high-profile figures in March that called for a six-month pause on the development of AI more advanced than OpenAI’s GPT-4.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
The rapid advancement of AI has raised concerns about potential negative consequences for society, ranging from mass job losses and copyright infringement to the spread of misinformation and political instability. Some experts have raised fears that humanity could one day lose control of the technology.
While current AI has yet to achieve artificial general intelligence (AGI), which would potentially allow it to make independent decisions, researchers at Microsoft said in March that GPT-4 showed “sparks of AGI” and was capable of solving “novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting”.
Since then, warnings about the potential dangers of AI have grown.
Last month, Hinton, a renowned computer scientist, quit his job at Google so he could spend more time speaking publicly about the risks of AI.
In an appearance before the United States Congress earlier this month, Altman called on legislators to quickly develop regulations for AI technology and recommended a licensing-based approach.
The US and other countries are scrambling to come up with legislation that balances the need for oversight against the promise of the technology.
The European Union has said it hopes to pass legislation by the end of the year that would classify AI into four risk-based categories.
China has also taken steps to regulate AI, passing legislation governing deepfakes and requiring companies to register their algorithms with regulators.
Beijing has also proposed strict rules to restrict politically sensitive content and require developers to obtain approval before releasing generative AI-based technology.