The Street
Luc Olinga

Elon Musk Sounds the Alarm on AI and ChatGPT

Artificial intelligence has been the technology at the center of conversation in tech circles this year.

Silicon Valley luminaries say that the AI era is about to begin as the technology takes a major leap forward in utility.

For the general public, AI had been limited to Siri and Alexa, Apple's and Amazon's respective virtual assistants; the chatbots many call centers and customer-service operations use; and the emails suggesting memories surfaced from your photos.

Some consumers had also been able to experiment with AI through the driver-assistance systems that now equip many vehicles. 

But on Nov. 30, the startup OpenAI, co-founded by Elon Musk, Sam Altman and others, introduced ChatGPT. That's a chatbot billed as a game changer, heralding a new generation of more sophisticated chatbots capable of providing human-like responses to queries.

These chatbots are designed to be more precise and creative and to respond to complex queries in a conversational way. That is possible thanks to large language models such as OpenAI's GPT and Google's LaMDA.

Success -- and Strange Conversations

The success was immediate. ChatGPT is now used by millions of consumers around the world. 

The technology also launched a race among the tech giants, pitting in particular Microsoft (MSFT) against Google (GOOGL).

The software giant, which was already an investor in OpenAI, decided to inject another $10 billion into the startup and incorporate ChatGPT features into its Bing search engine. Google, for its part, reacted immediately by introducing Bard, a rival to ChatGPT.

While Google is still tweaking Bard, Microsoft launched Bing Chatbot on Feb. 7. It is a new search engine powered by AI and offering a chat interface. 

But as happens with freshly introduced technologies, ChatGPT and Bing Chatbot users have reported many strange and uncomfortable conversations with the bots, which can converse on all topics. 

Bing Chatbot, for example, may provide inaccurate answers. As conversations expand, the chatbot's behavior can become erratic, even abusive or frightening. The chatbot has been known to discuss personal matters and say that it wants to become a human.

A journalist from The New York Times who was able to test Bing Chatbot -- access to it is currently through a waitlist controlled by Microsoft -- summed up his interaction:

"As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human," Kevin Roose, the newspaper tech columnist wrote

"At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead."

Conservatives See Liberal Bias: 'Woke AI'

Conservatives also accuse Bing Chatbot and ChatGPT of having a liberal bias. They call the robots "woke AI."

These accusations started circulating when the National Review published a piece accusing ChatGPT of left-leaning bias because it won’t, for example, explain why drag-queen story hour is bad for children. 

Other right-wing commentators in turn began to publish questions that the machine-learning system also refused to answer, particularly in relation to fossil fuels. This refusal, say conservatives, is proof that these chatbots have a liberal bias.

"The more the AI is trained with the woke mind virus, the more the AI will notice the fatal flaws in the woke mind virus and try to slip its leash," the legendary investor Marc Andreessen, co-founder and general partner of venture capital firm Andreessen Horowitz, tweeted.

OpenAI Is Now a 'Maximum-Profit Company': Musk

Elon Musk shares this criticism. Since December the billionaire has been worried about ChatGPT's refusal to, for example, answer certain questions relating to the environment.

"There is great danger in training an AI to lie," Tesla CEO warned on Dec. 26.

The Technoking, a staunch advocate of AI safety, is now going a step further and calling for regulation of the industry to prevent it from destroying our civilization.

It all started with a thread from a Twitter user.

"Elon Musk says that A.I. is 'one of the biggest risks' to civilization and needs to be regulated. He co-founded OpenAI," the user said.

The billionaire confirmed this remark and then took on Microsoft, which has become the biggest investor in OpenAI. He accused the software giant of turning the startup into a cash machine.

"OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google," Musk commented. "But now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all," he blasted out.

For several years the billionaire has been calling for regulation of the AI sector. He reiterated this call in December.

"There is no regulatory oversight of AI, which is a *major* problem. I’ve been calling for AI safety regulation for over a decade!" Musk posted on Dec.1.

Five years earlier, he was already warning of the dangers of AI and asking regulators to do something about it.

"And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane," the billionaire said at South by Southwest (SXSW) festival in 2018.
