Why was Sam Altman fired and then rehired? What is Q*, and how will it influence the future of AI?

OpenAI, the company behind ChatGPT, recently announced that Sam Altman would return as CEO, following an agreement to replace almost all members of the board, the same body that had fired him only days earlier. Altman, a prominent figure in AI, is widely credited with the success of OpenAI and ChatGPT. His abrupt dismissal, delivered with minimal explanation during a video conference with the board, caught him by surprise. Microsoft, which owns 49% of OpenAI's commercial arm after a $13 billion investment, learned of the decision minutes before the public announcement.

Founded in 2015 by Altman, Ilya Sutskever, Greg Brockman, and Elon Musk, OpenAI began as a non-profit initiative for AI research. In 2019, Altman added a commercial entity, backed by Microsoft. Altman's firing was initiated by the non-profit board, which responded with limited transparency to queries from shareholders, who deal mainly with the corporate side.

After the firing, investors and employees pressured the board to rehire Altman, with most employees signing an open letter threatening mass resignations. Microsoft swiftly hired Altman and another ousted executive, Greg Brockman, and offered to take on the approximately 700 employees who had signed the letter.

Was Altman's dismissal related to Q*?

Officially, the board attributed Altman's removal to a lack of transparency in certain decisions, which hindered its ability to exercise proper oversight. However, Reuters reported that, days before the firing, several researchers had warned the board about a specific project, Q* (pronounced Q-Star), a potential breakthrough toward artificial general intelligence (AGI). AGI, a system with cognitive abilities resembling a human's, is a long-sought milestone in AI, but its realization remains distant. ChatGPT, despite its success at producing human-like responses, is considered "narrow" AI: effective at a single task. Reuters also reported concerns within OpenAI about the implications of increasingly sophisticated, more "general" AI systems such as Q*. OpenAI neither confirmed nor denied the existence of the project, but acknowledged an internal communication about it.

Does AI's future require more regulation?

Altman had previously faced accusations within the company of leveraging OpenAI's visibility to pursue expansion without adequately evaluating the risks that increasingly capable AI systems could pose to people. Beyond the concerns about Q*, Altman's removal may also have stemmed from differing views on the future of AI: the non-profit side of OpenAI appears cautious about the pace of AI development, in contrast to Altman's more aggressive approach. With the technologies available today, genuine AI threats seem unlikely, but calls for stronger controls are likely to intensify as progress accelerates, especially if powerful systems like Q* are deployed without adequate checks.