Silicon Valley was still reeling from the sudden and unexpected firing of Sam Altman, co-founder and CEO of OpenAI, by the company's board of directors when, in a surprising turn of events, president and board chairman Greg Brockman also resigned.
This led to the appointment of Mira Murati, OpenAI's chief technology officer, as interim CEO. Murati, who retained her role in the company, was the only executive told the night before about Altman's termination and Brockman's removal from the board; the rest of the management team was informed shortly after the dismissal.
Following the news, Murati sent a staff note encouraging employees to stay focused on their work. She said she was "honoured and humbled" to assume the leadership position and emphasised the importance of maintaining focus, determination, and adherence to core values, as reported by Bloomberg.
With Albanian roots and a San Francisco upbringing, 35-year-old Murati completed her bachelor's degree in mechanical engineering at Dartmouth. She has held important roles at Goldman Sachs, French aerospace company Zodiac Aerospace, and Elon Musk's Tesla, where she was a senior product manager for the Model X vehicle.
After a stint as the VP of product and engineering at Leap Motion, she joined OpenAI in 2018 as the VP of applied AI and partnerships, eventually rising to CTO and now the interim CEO position.
Murati played a pivotal role in the release of groundbreaking AI projects like DALL-E 2, DALL-E 3, ChatGPT, GPT-4, and more. Although she has generally kept a low public profile, Murati has started making public appearances, discussing the implications of AI tools and advocating for responsible AI regulation, emphasising the need for broader input beyond tech companies in shaping ethical policies.
Murati had previously said in an interview that during her time at Leap Motion she came to see AGI as the ultimate and most significant technological milestone, and that she believed OpenAI was then the only organisation committed to advancing AI capabilities while also ensuring responsible development, which is why she wanted to join.
Difference in Approach
Murati, who has been less visible in interviews than her colleagues, has recently begun making more public appearances, offering insight into how she thinks about navigating the AI landscape.
In a conversation with Microsoft CTO Kevin Scott in July, Murati was vocal about the prevailing uncertainty surrounding LLMs and the need for clear guidance and decision-making processes in the field, asking how one should determine which aspects of AI to prioritise, work on, release, or position effectively.
“When we began building GPT more than five years ago, our primary focus was the safety of AI systems,” said Murati.
Murati emphasised the risks of letting humans directly define the goals or objectives for AI systems: that approach can rely on complex, opaque processes for critical functions, potentially leading to serious errors or unintended consequences. She and the team therefore shifted their focus to reinforcement learning from human feedback (RLHF) to ensure the safe and effective development of AI.
After developing GPT-3 and releasing it via an API, OpenAI was able to integrate AI safety into real-world systems for the first time. The team took prompts from customers, had the model generate completions, and collected human feedback on those completions for the model to learn from. By fine-tuning the model on this data, they built instruction-following models that were much more likely to follow the intent of the user and do what was actually wanted.
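The data-collection step behind this kind of instruction tuning can be sketched in a few lines. The sketch below is illustrative only, not OpenAI's actual pipeline: it shows how a labeler's ranking of completions for a prompt is turned into pairwise preference records, the kind of data a reward model is then trained on, with a simple win-count standing in for that model. All names and data here are hypothetical.

```python
from collections import defaultdict

def build_comparison_pairs(prompt, ranked_completions):
    """Turn a labeler's ranking (best first) into pairwise (preferred, rejected) records."""
    pairs = []
    for i, preferred in enumerate(ranked_completions):
        for rejected in ranked_completions[i + 1:]:
            pairs.append({"prompt": prompt,
                          "preferred": preferred,
                          "rejected": rejected})
    return pairs

def preference_counts(pairs):
    """A toy stand-in for a reward model: count how often each completion is preferred."""
    wins = defaultdict(int)
    for p in pairs:
        wins[p["preferred"]] += 1
    return dict(wins)

pairs = build_comparison_pairs(
    "Explain RLHF in one sentence.",
    ["clear and accurate", "accurate but rambling", "off-topic"],
)
print(len(pairs))                                      # 3 pairwise comparisons
print(preference_counts(pairs)["clear and accurate"])  # top-ranked answer wins 2
```

In a real RLHF setup, the win counts would be replaced by a learned reward model, whose scores then guide fine-tuning of the language model itself.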
For Murati, this was a significant step forward because AI safety was no longer just a theoretical concept but became practical in the real world.
Karma Hits Back
Following Altman's removal, Ilya Sutskever, co-founder and chief scientist at OpenAI, widely seen as the driving force behind the ouster, addressed employees. Rejecting the notion that Altman's removal was a "coup" or "hostile takeover", he acknowledged genuine concerns within the organisation that the commercialisation of AI technology was being prioritised, potentially at the expense of safety precautions, as reported by The Information.
The ongoing debate at OpenAI revolves around balancing the pursuit of AGI with safety concerns and avoiding a sole focus on business interests, with Altman and Brockman on one end and Sutskever on the other.
For Sutskever, AGI's significance lies in its role as a practical benchmark for AI capabilities, and he appears mindful of its potential impact on society. Murati echoes this view: she strongly favours AGI development, with a focus on ensuring it benefits humanity.
Murati had earlier mentioned that even when GPT-4 was being built, there was a strategic decision to refocus on improving ChatGPT’s alignment and safety. The aim was to actively involve researchers and gather their feedback to enhance the reliability, robustness, and alignment of ChatGPT.
Altman's announcements at DevDay, especially those involving Microsoft, were followed by a ChatGPT outage, possibly due to a DDoS attack. OpenAI also temporarily halted new ChatGPT Plus sign-ups after a post-DevDay surge in usage.
In essence, Sutskever acknowledged that some employees at OpenAI were worried that under Altman’s leadership, there might have been a strong emphasis on rapidly turning AI developments into profitable ventures. This intense focus on commercialisation could have potentially sidelined or compromised the rigorous safety measures and precautions that are essential in the development of AI systems.
Altman initially founded OpenAI as a non-profit but later introduced a for-profit entity to fund AI research, a move seen as conflicting with the company's commitment to safety. His shift from open to closed source likewise defied the company's original vision, and on this both he and Brockman appear to be at odds with Sutskever.
Elon Musk, who co-founded OpenAI, left when Altman focused on commercialisation rather than open-sourcing AI, abandoning the company's founding ideology of openness. A firm believer in open-source AI who plans to open-source xAI's chatbot Grok, Musk had earlier said that the closed-source policy would bring "bad karma" for OpenAI.
On the other hand, Musk praised Sutskever as a “brilliant, good human, and a linchpin of OpenAI”.
At a time when three more senior OpenAI researchers, Jakub Pachocki, Aleksander Madry and Szymon Sidor, have resigned in response to Altman's termination and Brockman's departure, it becomes all the more important for Mira Murati, as interim CEO, to shape OpenAI's future trajectory, drawing on her extensive experience in AI, her advocacy for responsible AI, and her emphasis on developing AGI for the benefit of humanity.
Except for the internal memo, Murati has yet to make any public statements on her new role.
Read more: Is this the end of OpenAI as we know it?