Since ChatGPT's consequential release last year, much of the world has been locked in a debate about the risks, harms and benefits of artificial intelligence. Open letters pointing to the extinction-level threat of a superintelligent AI model have been signed, Congressional hearings on AI have been held and companies have pledged responsibility, largely without sharing the details of what makes their models tick.
In March, billionaire tech titan Elon Musk added his weighty signature to an open letter calling for an immediate pause on the development of more powerful AI models. Just a few months later, Musk announced a new addition to his resume: xAI, an AI company whose mission is to "understand the true nature of the universe."
Related: The ethics of artificial intelligence: A path toward responsible AI
In a Twitter Spaces session following the company's announcement, Musk explained that, despite his extinction-level fears of an uncontrolled AI, his "view on safety is try to make it maximally truth-seeking, maximally curious."
If AI is designed to be curious, Musk argued, it will find humanity interesting. So long as a superintelligent AI finds humanity interesting, it won't try to engineer some sort of global extinction event.
Experts were critical of this framing.
"I don't think that attributing human attributes to AI models is a good idea, or accurate in any way. Models can't be curious because they're not sentient," AI expert and researcher Dr. Sasha Luccioni told The Street at the time.
Further, such fears of an extinction risk around AI have no basis in reality, according to Dr. Suresh Venkatasubramanian, an AI researcher who in 2021 co-authored the White House's AI Bill of Rights in his then capacity as a White House tech advisor.
"It's a ploy by some. It's an actual belief by others. And it's a cynical tactic by even more," Venkatasubramanian told TheStreet in September.
"I believe that we should address the harms that we are seeing in the world right now that are very concrete," he added. "And I do not believe that these arguments about future risks are either credible or should be prioritized over what we're seeing right now. There's no science in X risk."
The active harms of AI include bias and algorithmic discrimination, as well as threats to workers' rights, artists' rights, cybersecurity, national security and criminal justice.
Related: Biden signs sweeping new executive order on the heels of OpenAI's latest big announcement
The Joe Rogan Experience
Speaking to Joe Rogan on an Oct. 31 episode of his podcast, Musk elaborated on his fears of AI, saying that the creation of a digital superintelligence "seems like it could be dangerous."
The biggest AI danger for Musk seems to be a sort of "Terminator" scenario, despite the lack of scientific evidence to support one. But he thinks such a situation will occur only if a given AI model is programmed with what he refers to as "the woke mind virus," a term he apparently uses to refer to left-wing ideology.
"If AI is implicitly programmed with values that have led to the destruction of downtown San Francisco, then you could implicitly program an AI to believe that extinction of humanity is what it should try to do," Musk said, referring to the philosophy of Les Knight, the founder of the Voluntary Human Extinction movement.
"The AI could conclude, like he did, he literally said: 'There are eight billion people in the world, it would be better if there were none.' And engineer that outcome," Musk said.
He did not elaborate on how an AI model would engineer such an outcome.
Musk added that ChatGPT is "pretty woke." He cited a March post from Jordan Peterson in which ChatGPT, when asked to write a poem about former President Donald Trump and President Joe Biden, tilted negative for Trump and positive for Biden.
"That's a little sketchy," Rogan said.
Related: Artificial Intelligence is a sustainability nightmare - but it doesn't have to be
Musk said that the most likely result of AI development will be a "good outcome," but he added that such an outcome is not guaranteed.
"I think we have to be careful how we program the AI and make sure it is not accidentally anti-human," he said. "The accidentally extinctionist-AI, you wouldn't want that."
Musk has often railed against the dangers of the so-called "woke mind virus." He said in November 2022 that such ideology is "pushing civilization towards suicide."
And in September, Musk wrote: "Woke is fundamentally anti-human."
His approach to AI seems to be to make it pro-human, though he has not explained how he intends to accomplish that.
"I'm just generally concerned about AI safety. But what should we do about it? Have some kind of regulatory oversight," Musk said, comparing AI to nuclear weaponry, saying it is "maybe more dangerous than a nuclear bomb."
Related: Huge new ChatGPT update highlights the dangers of AI hype
"We're on the cusp of an artificial intelligence revolution," Musk said. "For a very long time, we have been the smartest creatures on Earth. That's been our defining characteristic. What happens when there's something way smarter than us? Where does it go?"
Biden signed an executive order on AI on Oct. 31, the first concrete action the U.S. government has taken to address the technology. The order lays out a wide variety of requirements and protections, though it left some experts skeptical, citing loopholes in its transparency requirements and uncertainty around enforcement.
Still, such regulation is an important step on the path toward a positive AI future.
"We need laws, regulations, we need this now. What that will trigger in the medium term is market creation; we're beginning to see companies form that offer responsible AI as a service, auditing as a service," Venkatasubramanian said in September. "The laws and regulations will create a demand for this kind of work."