Ilya Sutskever, a co-founder of OpenAI, has launched a new AI company named Safe Superintelligence Inc., focused solely on the safe development of superintelligent AI. Sutskever, who recently left OpenAI in the aftermath of the board's failed attempt to oust CEO Sam Altman, intends for his new venture to prioritize safety over commercial pressures. The company, which operates out of Palo Alto and Tel Aviv, promises to remain free of the distractions of management overhead and product cycles, concentrating exclusively on the safe advancement of AI technology.
Safe Superintelligence Inc. is dedicated to developing AI systems that surpass human intelligence while keeping safety and security at the forefront. This mission reflects Sutskever's concern that the balance between AI capability and safety was not adequately maintained during his tenure at OpenAI. He and his co-founders, Daniel Gross and Daniel Levy, emphasize that their work will remain insulated from short-term commercial interests.
Sutskever's departure from OpenAI followed a controversial boardroom attempt to remove Altman, a move Sutskever later said he regretted. The internal conflict highlighted concerns about whether OpenAI's leadership was prioritizing business opportunities over AI safety. His decision to establish Safe Superintelligence underscores his commitment to ensuring that safety takes precedence in the development of advanced AI systems.
Following Sutskever's exit, OpenAI saw additional departures, including that of Jan Leike, who publicly criticized the organization's approach to AI safety. Although OpenAI subsequently established a safety and security committee, the body has been criticized for being composed mainly of company insiders. This backdrop helps explain the premise of Sutskever's new venture: addressing those concerns by making the safe development of superintelligent AI its sole focus.