OpenAI Researchers Warn of Potentially Dangerous AI Discovery, Prompting CEO’s Firing and Employee Backlash

According to sources cited by Reuters, a group of OpenAI researchers wrote a letter to the company’s board of directors warning of a “powerful discovery” in artificial intelligence (AI) that they believed could pose a threat to humanity. The warning came just days before the triumphant return of OpenAI’s CEO, Sam Altman, who had been abruptly fired; the letter and the AI algorithm it described were reportedly among the key developments that preceded his ouster. News of Altman’s dismissal sparked an outcry, with more than 700 OpenAI employees threatening to resign and move to Microsoft in solidarity with their ousted leader.

Recognizing the gravity of the situation, major technology partners moved quickly to protect their investments and head off any reputational fallout from the turmoil inside OpenAI. Altman’s termination is believed to have been influenced, at least in part, by the contents of the letter, which reportedly raised concerns about commercializing AI advances before their consequences are adequately understood. Reuters, however, was not able to review a copy of the letter, and the staff who wrote it did not respond to requests for comment.

Upon being contacted by Reuters, OpenAI initially declined to comment but later acknowledged, in an internal message to employees, both a project dubbed “Q*” and a letter that had been sent to the board. A spokesperson for OpenAI said the message, sent by long-time executive Mira Murati, simply alerted staff to certain media reports without further comment. Some OpenAI employees speculated that the “Q*” project, pronounced “Q-Star,” may represent a significant advance in the company’s pursuit of artificial general intelligence (AGI).

OpenAI defines AGI as systems capable of outperforming humans in most economically valuable tasks. According to insiders, the new model being developed under the “Q*” project, given access to vast computing resources, has been able to solve certain mathematical problems. Although the model so far handles math only at the level of grade-school problems, acing such tests has made the researchers involved optimistic about its potential.

Researchers regard mathematics as a crucial frontier in the development of generative AI. Current generative models excel at writing and translating between languages by statistically predicting the next word, but their answers to the same question can vary substantially. If an AI model can reliably do mathematics, where there is only one correct answer, it would suggest the system has stronger reasoning abilities, closer to human intelligence. Researchers believe such a capability could be applied to novel scientific research.
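
To illustrate the distinction the researchers are drawing, here is a minimal, hypothetical Python sketch (not OpenAI’s code; the word probabilities and the arithmetic check are invented purely for illustration): sampling the next word from a probability distribution can yield different continuations for the same prompt, whereas a grade-school math answer can be graded against a single correct result.

    # Hypothetical illustration only: toy next-word probabilities, not a real model.
    import random

    # Made-up distribution over possible next words for the prompt
    # "The capital of France is".
    next_word_probs = {"Paris": 0.85, "beautiful": 0.10, "large": 0.05}

    def sample_next_word(probs, temperature=1.0):
        # Higher temperature flattens the distribution, making output more varied.
        words = list(probs)
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        return random.choices(words, weights=weights, k=1)[0]

    # The same prompt can produce different continuations on different runs.
    print([sample_next_word(next_word_probs, temperature=1.5) for _ in range(5)])

    # A grade-school arithmetic question, by contrast, has exactly one right
    # answer, so a proposed solution can be checked deterministically.
    def is_correct(proposed_answer):
        return proposed_answer == 7 + 5  # the single correct result is 12

    print(is_correct(12), is_correct(11))  # True False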

Artificial general intelligence (AGI) distinguishes itself by its capacity to generalize, learn, and comprehend, in contrast to a calculator or similar tools that can only perform a limited set of operations. The precise safety concerns outlined in the researchers’ letter to the board have not been disclosed. Computer scientists have long debated the danger posed by superintelligent machines and whether they might choose to harm humanity if given the opportunity.

Moreover, multiple sources have confirmed the existence of an “AI scientist” team at OpenAI. Formed by merging the previously separate “Code Gen” and “Math Gen” teams, the group is working to optimize existing AI models to improve their reasoning and ultimately enable scientific breakthroughs, according to one person familiar with the work.

Altman played a key role in turning ChatGPT into one of the fastest-growing applications ever, attracting the Microsoft investment and computing resources needed to push toward AGI. In addition to unveiling an array of new tools at a recent demonstration, Altman told a gathering of world leaders in San Francisco that he believed major leaps in AI were within sight. He described pushing back the veil of ignorance and advancing the frontier of discovery as the “professional honor of a lifetime.”

However, the following day, OpenAI’s board unexpectedly fired Altman, leaving many perplexed about the company’s direction and its future work in AI.
