OpenAI Kills Arrakis

The reasons for dropping Arrakis could be many. Let's look at a few here.

OpenAI is getting ready for its first-ever DevDay conference. It might even have cracked the AGI code, if the recent rumours about Jarvis are anything to go by. On the other hand, it is reportedly killing one of its other dear projects, codenamed Arrakis, as the model did not live up to the company's expectations during training.

Undoubtedly, OpenAI's models have to live up to the standard of ChatGPT, which might be a little too high even for the company itself. Unlike GPT-4, which is far larger and more powerful than its predecessor GPT-3.5, Arrakis was expected to be smaller, allowing chatbots to run more efficiently and at lower cost.

This was probably in line with the release of Meta's LLaMA and Llama 2, which even Microsoft, OpenAI's biggest backer, had started using in many of its products and services. But news of OpenAI building Arrakis surfaced long before Llama was in the picture, around the time the company started training GPT-4. Now, The Information reports that the model has been scrapped.


Why, though?

What if Arrakis actually has a lot of flaws? Gary Marcus points out after the report that Sam Altman might actually be right. “They might have decided that GPT-5, if it was simply a bigger version of GPT-4, would not meet expectations, and that it wasn’t worth spending the hundreds of millions of dollars required, if the outcome would only turn out to be disappointing, or even embarrassing.”

Arrakis, though, should not be compared with GPT-5. OpenAI has not made an official announcement about the release of GPT-5, and Altman has said the company is not training it, though it has filed a trademark for the name. Arrakis, in contrast, was always expected to be a smaller model. One possible reason for the shelving is that the company is shifting its focus to building AI models for wearables and smart devices, instead of just chatbots.

The reasons for dropping Arrakis can be plenty. One is the cost of training such AI models, on which OpenAI has reportedly been spending almost a million dollars a day. Another is that the model is simply not good enough. Or the company may want to keep it in-house and use it in its upcoming products.

If the company decides to shift its focus to smaller models again, it has to weigh the cost of training against the benefit it would bring. At the same time, the loss of time and resources has disappointed some Microsoft employees, according to the report, as Microsoft has been paying OpenAI to develop smaller models for a long time.

Nonetheless, the company is generating revenue and is back on track, according to Altman. It is expected to bring in annual revenue of $1.3 billion this year, compared to $28 million last year. Instead of scrapping Arrakis altogether, the company could integrate it into Gobi, which is expected to be something similar to the GPT-4 Vision model the company has already released.

But, there are other concerns

This can also be a major setback for the company when it comes to adoption of its products. Though enterprises are heavily using GPT-4, the need for smaller models is on the rise, and some of these models are even outperforming OpenAI's on various fronts. Even Microsoft has worked on smaller LLMs such as Orca, which run comparatively cheaper for the company.

Along similar lines, recent Microsoft research also highlights trust issues with GPT-3.5 and GPT-4. Researchers say that GPT-4 can be easily jailbroken with prompts and misused in the wrong hands. Interestingly, this could also apply to Arrakis, as the research was conducted around the same time.

But according to the researchers, the bugs found in the models were reportedly fixed before release. It is possible that the models OpenAI is dropping now are riddled with such bugs, given their smaller size.

It seems that even though OpenAI is riding high on the revenue wave at the moment, there is a real chance the company will have to drop a smaller model soon. Otherwise, Microsoft might have to steer a different route and find another island to land on.

Mohit Pandey
Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.
