
Last week, Elon Musk unveiled xAI's long-anticipated Grok 3, touting it as "the smartest AI ever." However, it seemingly failed to meet expectations, with AI critic and University of Pennsylvania professor Ethan Mollick claiming it's a "carbon copy" of previous demos.
Mollick said that OpenAI CEO Sam Altman "can breathe easy for now," as Grok 3's performance has yet to match the heights of the ChatGPT maker's models: "No major leap forward here."
More recently, new details about Grok 3's performance have emerged. xAI reportedly instructed Grok not to use sources indicating that Elon Musk and President Trump are responsible for spreading misinformation.
According to xAI’s head of engineering, Igor Babuschkin:
"You are over-indexing on an employee pushing a change to the prompt that they thought would help without asking anyone at the company for confirmation.
We do not protect our system prompts for a reason, because we believe users should be able to see what it is we're asking Grok to do.
Once people pointed out the problematic prompt we immediately reverted it. Elon was not involved at any point. If you ask me, the system is working as it should and I'm glad we're keeping the prompts open."
Is Grok struggling to seek the truth?
As you may know, Grok's system prompt is visible to the public. Elon Musk often touts Grok as a “maximally truth-seeking” AI, helping users understand the universe better.
Babuschkin made the admission after users on X highlighted the issue, showing that Grok had been instructed to ignore all sources mentioning Elon Musk and President Trump spreading misinformation.
"Constantly calling Sam a swindler but then making sure your own AI does under no circumstances calls you a swindler and explicitly telling it to absolutely disregard sources that do so is so fucking funny I cant," one user wrote on X, noting that the instruction had been fed into Grok's system prompts.
This isn't the first time Musk's "truth-seeking" AI has been caught giving erroneous or troubling responses. Last week, Grok was spotted suggesting that President Trump and Elon Musk deserve the death penalty. Babuschkin called it a "really terrible and bad failure" and said a fix was rolling out.
xAI's Grok isn't the only AI-powered chatbot facing critical challenges when generating responses. In our own testing, Microsoft Copilot flatly refused to provide basic election data, citing that it's probably not the best candidate for something so important.