When OpenAI chief Sam Altman testified before the Senate in May, he discussed the myriad threats posed by artificial intelligence and stressed the importance of regulating his rapidly growing industry.
But Altman's perspective on AI boils down to a brief line in his opening statement: "We believe that the benefits of the tools we have deployed so far vastly outweigh the risks."
To Altman, the risks that those supposed benefits outweigh are existential in nature.
OpenAI has said explicitly, on multiple occasions, that it is on a mission to create artificial general intelligence (AGI): AI that is generally more intelligent than humans. Despite skepticism from many experts who doubt AGI will ever be possible, OpenAI maintains that its creation could boost the economy and allow humanity to flourish "to a degree that is impossible for any of us to visualize yet."
Altman has previously cited curing cancer and solving climate change as potential results of a generally intelligent model.
But the risks of such a model, the company itself has said, are "existential" and extraordinary: "A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too."
Despite the severity of these risks, however unlikely they might be, OpenAI and its peers are racing to build ever more capable systems.
But according to a new poll by the AI Policy Institute (AIPI), the vast majority of people feel the opposite way about AI.
Some 71% of those surveyed said that the potential risks of AI outweigh its potential benefits and that developers should "err on the side of caution."
A further 66% said that AI companies should be required to have regulatory representation on their boards.
Earlier AIPI polling found that more than 80% of respondents don't trust tech executives to regulate themselves when it comes to AI; 82% want to slow the development of the technology, while 83% believe AI could accidentally cause a catastrophic incident.
These findings highlight the less dramatic, though no less severe, concerns around AI: worker exploitation, enhanced fraud, the dramatic exacerbation of social inequities and potential political instability.
"There's going to be a hell of a lot of abuse of these technologies, and not necessarily always deliberately, but simply out of complacency or out of ignorance," Nell Watson, a prominent AI researcher and tech ethicist told TheStreet. "And that's really the problem with AI. It will give you what you think you want, but it turns out that's not actually what you intended."
On Sept. 13, prominent tech executives were in Washington to discuss the potential regulation of artificial intelligence. The participants included OpenAI CEO Sam Altman, Elon Musk, Meta CEO Mark Zuckerberg, Google CEO Sundar Pichai and Nvidia CEO Jensen Huang, among others.