OpenAI announces the formation of a safety and security committee and the start of training for a new AI model intended to succeed GPT-4, the foundation of its ChatGPT chatbot. The committee is tasked with advising the board on critical safety and security decisions across the company’s projects and operations. The move follows heightened scrutiny of AI safety at OpenAI after researcher Jan Leike resigned, criticizing the company for prioritizing product appeal over safety; his departure came alongside other high-profile exits, including that of co-founder Ilya Sutskever.
Despite the controversy, OpenAI asserts its commitment to advancing both AI capability and safety, claiming industry leadership in each. The company has begun training its next frontier AI model, underscoring its intent to keep pushing the boundaries of the technology. OpenAI also acknowledges the importance of robust debate about AI safety at this moment, signaling openness to discussion and feedback.
Frontier AI models, the most advanced systems in the field, sit at the center of OpenAI’s research and development efforts. The safety committee comprises key figures within the company, including CEO Sam Altman and board chair Bret Taylor, alongside technical and policy experts. Its immediate focus is to evaluate and strengthen OpenAI’s safety processes and safeguards, with recommendations expected within 90 days for board consideration and subsequent public disclosure.