Fortune
David Meyer

What Elon Musk is really building inside his ChatGPT competitor xAI

(Credit: Tolga Akmen—EPA/Bloomberg/Getty Images)

In 2018, Elon Musk walked away from OpenAI, which he had cofounded three years earlier as a research outfit dedicated to building safe AI. He said it was due to a conflict of interest with Tesla’s rival AI efforts; subsequent reports say he lost a power struggle. Either way, Musk departed with his billions, and OpenAI fell into Microsoft’s lifesaving embrace.

Five years on, Musk is back in the AI game. In July, he announced the formation of xAI—which, despite its use of Musk’s favorite letter, is a separate company from X Corp. “The goal of xAI is to understand the true nature of the universe,” the announcement began. The numbers in the date of the announcement (7/12/23), Musk tweeted, totaled 42: the figure which, in the comic science fiction classic The Hitchhiker’s Guide to the Galaxy, represents the frustrating answer to “the ultimate question of life, the universe, and everything.” 

These lofty goals are deeply intertwined with Musk’s idiosyncratic vision of AI safety, in which an inquisitive superintelligence will hopefully decide people are so interesting that it must keep them around. Competitors like OpenAI and Google DeepMind are trying to achieve AI safety by promoting “alignment” with human goals and principles, but Musk believes that trying to instill certain values in an AI increases the odds of the AI adopting the opposite values, with the risk of disastrous results.

“I think the safest way to build an AI is actually to make one that is maximally curious and truth-seeking,” he said two days after xAI’s announcement, at a Twitter Spaces event alongside the 11 all-star (and all-male) AI engineers he hired as the company’s starting team—each reportedly received 1% equity in the Musk-funded venture. Rather than let other companies define the future, Musk vowed to “create a competitive alternative that is hopefully better than Google DeepMind or OpenAI-Microsoft.”

The world got its first glimpse of this alternative in early November, with the unveiling of an AI chatbot called Grok. Early demonstrations showed a chatbot defined less by a connection to cosmic truths than by a willingness to engage in the snark and vulgarity that rival products try to avoid. Musk also revealed that Grok would be a subscription driver for X (formerly Twitter), serving as a feature of the social network’s Premium+ tier while using X’s tweets as an information source. Missing was any indication of how the wisecracking AI bot fits into the broader Musk portfolio of Tesla autonomous driving technology, humanoid Optimus robots, and Neuralink human-machine brain interfaces, raising questions about the seriousness, and significance, of xAI. 

The Grok website on a smartphone in New York on Nov. 8, 2023. Musk claims the prototype is already superior to ChatGPT-3.5 across several benchmarks. (Credit: Gabby Jones—Bloomberg/Getty Images)

Rivalry (or perhaps revenge) is a clear motivation for Musk. Poaching half a dozen Google DeepMind luminaries is just the latest chapter in a feud that goes back to his discussions with Larry Page in 2015, shortly after Google acquired DeepMind. Musk has claimed the Google cofounder accused him of being “speciesist” for thinking of potential silicon-based life forms as inferior to humans. Alarmed by Page’s comments and what Musk considered Google’s lax approach to AI safety, Musk cofounded OpenAI as a safety-driven counterweight to Google—only to see OpenAI become “frankly voracious for profit” after partnering with Microsoft.

“My sense,” said Steve Omohundro, a veteran computer scientist who just coauthored a paper on using mathematical proofs to mitigate AI’s negative effects, “is the reason he [founded xAI] is he’s pissed at OpenAI.” 

Ultimate truth-teller, protector of humankind, tool of revenge: These are the roles that Musk likely wants his new AI to play. But the team he’s assembled may have very different—though no less dramatic—aims.

Mathematical reasoning

At that inaugural Spaces event in July, all of Musk’s new hires spoke, but none echoed or even addressed their new boss’s theory about a maximally curious superintelligence preserving humanity. What they did talk about was math. Two common strands run through xAI’s initial staffing lineup: Most of the team comes from Google DeepMind, and most have a hard-core mathematics and/or physics background.

Therein may lie the truth—a concept not always associated with today’s “hallucinating” generative AI models. “Mathematics is the language underlying all of our reality,” said Greg Yang, a team member who came over from Microsoft, during the session. “I’m also very excited about creating an AI that is as good as myself or even better at creating new mathematics and new science that helps all achieve and see further into our fundamental reality.”

Being given an opportunity to unlock new mathematical and physical truths—in a lean, bureaucracy-free company bankrolled by the world’s richest man—was a clear draw for xAI’s team. But the means of achieving that goal is an attractive end in itself, as team member Christian Szegedy (an ex-Googler) explained at the event: “Mathematics is basically the language of pure logic, and I think that mathematics and logical reasoning at the high level will demonstrate that the AI is really understanding things, not just emulating humans.”

Solve problems and you get both the answers and confirmation that your AI can think for itself, unlike models such as OpenAI’s GPT-4 that essentially regurgitate their training material. Or so goes the theory—there are various opinions regarding the threshold an AI must cross to become a general-purpose, thinking AGI, or artificial general intelligence.

“My sense is the reason he founded xAI is he’s pissed at OpenAI.”

Steve Omohundro, longtime computer scientist

“I would say the holy grail of AI systems is reasoning, and probably the place where reasoning is most evident is in mathematical inquiry,” said University of Sydney mathematics professor Geordie Williamson, who recently collaborated with Google DeepMind on refining old mathematical conjectures.

AI’s ability to make physics breakthroughs is “already happening to some extent,” Williamson added, with neural networks helping to figure out things like the precise boundaries between water’s rarer states. “We have this new hammer called a neural net, and we mathematicians and physicists are going around banging it,” he said. “It’s a new tool that we’re using, but we’re still in the driver’s seat.”

But while Williamson says he has seen “glimpses” of reasoning in DeepMind’s biology-focused systems, he says we are “miles away” from definitively reaching that milestone. “Generative AI is amazing, but I haven’t seen any evidence that we’re [seeing] reasoning,” he said.

Musk himself has given mixed signals regarding the AGI inflection point. At the xAI launch, he said he agreed with futurist Ray Kurzweil that AGI would likely emerge around 2029, “give or take a year.” But in an October Tesla earnings call, he described his cars’ AI system as “basically baby AGI—it has to understand reality in order to drive.”

Will ‘truth’ really make AI safer?

While the pursuit of validating AGI via math is a concept that has wider momentum in the industry, Musk’s theory about the inherent safety of a truth-seeking AI draws skeptical responses from many experts.

“It’s wishful thinking in a way. We don’t have proof of that,” said José Hernández-Orallo, an AI professor at the Valencian Research Institute for Artificial Intelligence, pointing out that there isn’t even strong evidence for smart humans being less likely to commit crimes. “Exploring the idea is fine, but how are you going to test that idea?”

“I’ve heard [Musk] say stuff like that, and it doesn’t really make much sense,” said Omohundro, the computer scientist. “Truth-seeking is very valuable for becoming more intelligent, and it would probably be helpful if you wanted to help humanity ... to know what the truth is, but I don’t see a reason why knowing truth is necessarily going to dispose you to be pro-human.”

As xAI has designs on eventually establishing “truth” beyond the mathematical and physical arenas, some may be concerned by its links to X, formerly Twitter. Musk says public tweets constitute part of Grok’s training dataset and also feed it up-to-date information—but X is a notorious haven for disinformation, to the extent that it’s being investigated by the EU for breaking new rules around online content. It’s also not clear how this data sharing will stay on the right side of the EU’s data-protection rules, as X’s privacy policy makes no mention of passing user data to another company for AI-training purposes.

At the launch event, Musk said tweets provided “a good dataset for text training and also for image and video training.” He also railed against what he characterized as the illegal scraping of that dataset by other AI companies, saying: “I guess we will use the public tweets for training as well, just like everybody else has.”

Musk has also repeatedly attacked rivals such as OpenAI and Google for creating guardrails that stop their AI models from emitting offensive responses. To Musk, doing so is essentially teaching an AI that it’s okay to lie—a lesson he says is inherently dangerous.

Because xAI is not beholden to public shareholders or “non-market-based ESG incentives,” Musk said its AI has more freedom to give answers that are accurate but controversial. “They won’t be politically correct at times, and probably a lot of people will be offended by some of the answers, but as we try and optimize for truth with the least amount of error, we’re doing the right thing,” he said.

A road to Tesla?

With the recent unveiling of Grok—which, notably, occurred just days ahead of OpenAI’s big developer conference—xAI’s emphasis has been squarely on the model’s “rebellious” nature and its predilection for Musk’s snarky brand of humor. There’s been no attempt yet to pitch it as a tool for businesses and little to suggest applications beyond bantering chatbots. 

Asked in July whether he intended xAI to make products for the general public or for business customers, Musk vaguely answered that his team’s goal was “to make useful AI, I guess,” with likely users including “people and consumers and businesses or whoever.” 

Taking on Google and OpenAI (and Amazon-backed Anthropic, and Meta) will require a lot of costly computing resources; OpenAI’s GPT-4 large language model cost over $100 million to train. Earlier this year, the Wall Street Journal reported Musk had “snapped up much of Oracle’s spare server space” for his AI project. Oracle cofounder Larry Ellison confirmed the arrangement in September. Insider also reported in April that Musk had bought 10,000 high-end graphics processors for an AI effort within Twitter.

Tesla can also offer a ton of compute in the form of its Dojo supercomputer, though there are issues with the idea of xAI tapping in. For one, the first iteration of Dojo—which is designed to process and recognize images for the benefit of Tesla drivers—has memory-bandwidth limitations that make it unsuitable for running GPT-style large language models. Musk claims Dojo 2 will do a better job on that front.

The second problem is that, as Tesla is a publicly traded company, “any relationship with Tesla has to be an arm’s-length transaction,” Musk noted at the launch event. To date, there is no evidence in Tesla’s filings to suggest xAI is using its resources.

However, Musk added: “Obviously it would be a natural thing to work in cooperation with Tesla, and I think it would be of mutual benefit to Tesla as well in accelerating Tesla’s self-driving capabilities, which is really about solving real-world AI. I am feeling very optimistic about Tesla’s progress on the real-world AI front, but obviously, the more smart humans that help make that happen, the better.”

Even if xAI doesn’t end up protecting humankind while answering the ultimate question of life, the universe, and everything, perhaps it can at least help make a safer ride.
