For anyone following the OpenAI train, last week the spotlight shifted to the CTO of the AI startup, Mira Murati. Murati’s name seems to have popped out of obscurity, with several Indian media outlets running headlines claiming her origin as Indian. Murati, however, is of Albanian descent; born to high-school teachers from Vlore, she was raised in San Francisco.
Education and career
The 35-year-old went to Dartmouth College to finish her B.E. in Mechanical Engineering, after which she interned as a summer analyst at Goldman Sachs. Murati then went on to work as an advanced concepts engineer with the French aerospace company ‘Zodiac Aerospace’, following which she was appointed senior product manager at Tesla for their ‘Model X’ vehicle. Murati’s experience in both business and engineering had clearly come in handy.
After this, Murati had a two-year stint as the VP of Product and Engineering at Leap Motion, a software and hardware company that made controllers enabling users to manipulate digital objects with hand motions when connected to a PC or a Mac. Early in Murati’s tenure at Leap Motion, the company launched new software designed for hand tracking in VR.
In 2018, Murati joined OpenAI as the VP of Applied AI and Partnerships. By this time, OpenAI was already ramping up research work and running up bigger bills than it could handle. In 2019, OpenAI restructured from a non-profit organisation into a for-profit entity, albeit with a cap on returns.
Murati then moved up the ladder to become senior vice president of Product and Partnerships before being made Chief Technology Officer ten months ago. Since Murati’s appointment as CTO, OpenAI has released some of its buzziest AI playthings, like DALL·E 2 and ChatGPT, which have found their way to the public.
There’s little to no information about Murati’s personal life aside from the fact that she is a fan of the sci-fi classic ‘2001: A Space Odyssey’, the poet Rainer Maria Rilke and the rock band Radiohead.
On tougher questions around AI
While most of her work has been behind closed doors, with the growing hype around generative AI tools and OpenAI’s entry into the cultural zeitgeist, Murati has recently begun appearing more in the public eye. Four months back, she appeared on ‘The Daily Show’ with Trevor Noah to discuss the implications of these powerful AI tools.
In 2021, OpenAI hired Jan Leike, an ML researcher who now leads its alignment team, signalling an increased focus on reinforcement learning from human feedback (RLHF) training.
Murati has been working closely with Leike, as is evident on Twitter, and has also discussed the importance of incorporating feedback while training AI models. In a discussion with CogX eight months back, she explained, “We want these models to follow our explicit instructions but we also want them to follow our implicit instructions, so we don’t want them to produce any stereotypes or generate harmful responses. We want them mainly to align with what we mean or want, which can be quite fuzzy. In order to get there, we need human feedback in the loop for the model to make the intent crisper.”
In the same panel discussion, while hailing AI’s overall impact, she frankly admitted that she did not know whether AI tools would eventually end up replacing human job roles. “We’re not sure if AGI will replace us completely or augment what we’re doing. It’s unclear what will happen with jobs, what governance or economic systems will look like. But I think it can have a very positive impact overall in the sense that it can solve very difficult real-world problems like climate change or in the medical treatment of diseases.”
In the interview on ‘The Daily Show’, however, Murati took the same noncommittal approach that CEO and co-founder Sam Altman has seemingly taken when it comes to heavy ethics-related concerns around their tools. Murati called OpenAI’s applications ‘human helpers’ and said they would turn out just ‘how society shaped them’.
OpenAI has always been frank about admitting what its tools can or can’t do, an approach Murati has clearly adopted as well. However, she has also accepted that some of these questions, like the exact implications of AGI on humans, may be too big or too open-ended to answer now.
In a recent interview with Time magazine, Murati called for regulation in AI, saying that it wasn’t ‘too early’ for policymakers to get involved. “It’s important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible. But we’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies—definitely regulators and governments and everyone else,” she explained.