ChatGPT is a star -- but not with everyone.
The conversational chatbot, developed by OpenAI, a startup co-founded by Elon Musk, the venture capitalist Peter Thiel and others, has soared in popularity in the less than three months since it launched.
ChatGPT, which is built on GPT-3.5, a large language model, has revolutionized information gathering on the internet. The chatbot, whose biggest investor is now Microsoft (MSFT), provides personalized answers to the queries it receives.
Its answers read as strikingly human-like, regardless of topic. While many users worldwide have welcomed these qualities, the chatbot has also taken plenty of flak.
These criticisms have propelled it to the center of the culture wars: the clash between traditional and progressive values, between conservatives and liberals. In the U.S. that amounts to the fight between Republicans and Democrats, conservatism against wokeism.
ChatGPT Lists Musk as 'Controversial'
The sharpest charge comes from conservatives who feel that the chatbot is muzzling and censoring them.
This results, they say, in ChatGPT often refusing to answer certain questions related to, for example, environmental problems, gender identity, drag queens, and race.
They charge that ChatGPT outright refuses to answer when confronted with queries about the negative effects of causes central to progressive values, or about the positive impacts of subjects the left dislikes.
In recent days the conservatives have intensified their attacks against ChatGPT. They claim, with supporting screenshots, that the chatbot does not say good things about conservative figures and leaders or people who are perceived as such.
"ChatGPT lists Trump, Elon Musk as controversial and worthy of special treatment, Biden and Bezos as not. I've got more examples. @elonmusk," Isaac Latterell, former Republican member of the South Dakota House of Representatives, tweeted on Feb. 19,.
"ChatGPT 'AI' won't define a woman, praises Democrats but not Republicans, and claims nukes are less dangerous than racism," according to another tweet posted on Feb. 11. with a link to an article from the British tabloid Daily Mail.
"Extremely concerning," said Elon Musk, the CEO of Tesla (TSLA), who bought Twitter to make it a platform for free speech and a bastion of conservatives.
ChatGPT Is 'a Democrat'
These and other examples lead conservatives to conclude that ChatGPT is a Democrat. That, at least, is the assertion of the investor David Sacks, a friend of Musk.
"I think there is now mounting evidence that this safety layer program by OpenAI is very biased in a certain direction," Sacks said in a short video posted on Twitter. There's a very interesting blog post called 'ChatGPT is a Democrat,' basically laying this out. There are many examples."
"The AI will give you a nice poem about Joe Biden. It will not give you a nice poem about Donald Trump. It will give you the boilerplate about how it can take controversial or offensive stances on things. so somebody is programming that and that programming represents their biases.
"And if you thought Trust and Safety was bad under Vijaya Gadde and Yoel Roth, just wait until the AI does it because I don't think you're gonna like it very much."
Gadde and Roth are, respectively, Twitter's former head of legal, policy, and trust and its former head of trust and safety. They were either fired by Musk or left the company after he took over.
The blog post that Sacks refers to can be found here.
Musk hasn't been shy about reacting to his friend's accusations against ChatGPT. He believes the bias is a serious issue.

"Major problem," the billionaire commented.
The serial entrepreneur, who has more than 129.3 million followers on Twitter, also promoted Sacks's remarks by recommending him to users of the platform.
"Very important thread," he said with a link to Sacks's profile.
After initially remaining silent in the face of attacks from all sides, OpenAI published the rules governing ChatGPT's behavior on Feb. 16.
"Many are rightly worried about biases in the design and impact of AI systems. We are committed to robustly addressing this issue and being transparent about both our intentions and our progress," OpenAI wrote in a blog post.
"Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features.
"We’re going to provide clearer instructions to reviewers about potential pitfalls and challenges tied to bias, as well as controversial figures and themes."