A major breakthrough on the path to superintelligence is rumored to be behind the HBO-esque drama that nearly led to OpenAI's collapse last week. The AI lab's board is said to have been spooked by the new AI, fired the CEO, and triggered nearly a week of very public airing of dirty laundry over the future direction of the company.
One week is a long time in tech, but for OpenAI's out-again, in-again CEO Sam Altman it must feel like a lifetime. First he was fired by the board, then hired by Microsoft to head up a new AI division. Not long after that, almost all of OpenAI's workforce threatened to quit, and Altman now finds himself back in charge of the company he co-founded in 2015.
So what exactly led to such an extreme reaction from the board? Speculation abounds, ranging from claims that Altman was not sharing information with directors to the most recent theory: a major AI breakthrough that could allegedly destroy humanity, and a desire to pause development.
Project Q* and AGI
The primary goal of OpenAI since its inception has been the development of artificial general intelligence (AGI), often referred to as superintelligence: mental capacity that exceeds that of humans. Just this month, Altman said in an interview that GPT-5, the AI lab's next-generation large language model, could be a form of superintelligence.
A report by Reuters suggests that a new breakthrough, spearheaded by OpenAI Chief Scientist Ilya Sutskever (who initially supported Altman's firing before expressing regret over his actions and calling for his return), allows the company to use synthetic data to solve problems the AI has never previously encountered.
Named Project Q*, this new way of training and working with artificial intelligence was first disclosed to the board shortly before it took action against Altman. Q* is thought to improve AI's ability to handle math problems, seen as a frontier in generative AI development.
Solving the math problem
While generative AI is good at reproducing written language and even visuals, solving problems with only a single right answer is harder. AGI will need to generalize, learn, and comprehend problems, and do so both faster and better than humans. That will require a deep understanding of mathematics, and apparently that is where OpenAI's team made its major breakthrough.
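To make the "single right answer" point concrete, here is a minimal, purely illustrative Python sketch. It is not OpenAI's actual method, whose details remain unreported, and every function name in it is hypothetical. The idea it demonstrates: because a math problem can be checked mechanically, a model's correct answers can be filtered automatically and recycled as synthetic training data.

```python
import random

def make_problem():
    """Generate a toy arithmetic problem with a single, verifiable answer."""
    a, b = random.randint(2, 99), random.randint(2, 99)
    return f"What is {a} * {b}?", a * b

def model_answer(question: str) -> int:
    """Stand-in for a language model's answer (hypothetical, imperfect)."""
    a, b = [int(tok) for tok in question.replace("?", "").split() if tok.isdigit()]
    # Answer correctly most of the time, but occasionally slip by one,
    # mimicking a model that is often but not always right.
    return a * b + random.choice([0, 0, 0, 1])

# Because math has exactly one right answer, a simple verifier can filter
# the model's outputs; the verified pairs form a synthetic dataset that
# could, in principle, feed further training.
dataset = []
for _ in range(1000):
    question, truth = make_problem()
    if model_answer(question) == truth:
        dataset.append((question, truth))

print(f"Kept {len(dataset)} verified synthetic examples out of 1000")
```

Open-ended prose has no such automatic verifier, which is why exact-answer math is seen as both a hard frontier and a convenient training ground.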
According to the Reuters report, a number of staff researchers wrote to the board warning that the discovery could threaten humanity if left unchecked. The authors of the letter remain anonymous, but it is thought to have pushed the board into action against Altman.
The letter also raised concerns about the speed at which OpenAI was commercializing its advances before understanding the consequences, and suggested a need to take stock of developments and pause efforts to make money from the AI tools being built.
A company of two halves
OpenAI has long been a company of two halves, torn between a slow and steady drive for safe AGI on one side and a desire to capitalize on and release AI advances on the other.
The success of ChatGPT, ever-increasing investment from Microsoft, and the impact this has had on the entire tech sector have given Altman and the fast-to-release side of the business more power in recent months.
One assumption is that the board's firing of Altman was an attempt to steer things back toward a safer approach. However, the change in the board's makeup, Altman's return, and greater involvement from Microsoft are likely to lead to an even faster drive to commercialize.
What happens next?
One day there will be a Netflix documentary on the entire saga, possibly even written by this superintelligence, but until then we are left with speculation and rumor over exactly what led to the near collapse of one of the largest and most prominent players in the AI space.
I for one will just enjoy using ChatGPT and making the most of my freedom before Skynet takes over and I'm forced to write for the good of our AI overlords every day.