
Soon, UAE will Dethrone OpenAI

Interestingly, Falcon 180B ranks just behind OpenAI’s latest GPT-4 and is on par with the performance of Google’s PaLM 2


The UAE’s Technology Innovation Institute (TII) yesterday released Falcon 180B, a heavily scaled-up version of Falcon 40B. According to the official blog post, this is the largest open-source language model to date, boasting a staggering 180 billion parameters.

According to TII, Falcon 180B was trained on 3.5 trillion tokens across 4,096 GPUs simultaneously, using Amazon SageMaker, for a total of roughly 7,000,000 GPU hours. To put that into perspective, Falcon 180B is 2.5 times bigger than Llama 2 and required four times more compute for its training. How TII manages to obtain such substantial computing power is certainly intriguing.
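A quick back-of-the-envelope check puts those compute figures in perspective: dividing the total GPU hours by the number of GPUs gives the approximate wall-clock duration of the training run (a rough sketch that assumes all 4,096 GPUs ran for the whole job, which the article implies but does not state).

```python
# Rough wall-clock estimate for Falcon 180B's training run,
# using the figures quoted in the article.
TOTAL_GPU_HOURS = 7_000_000  # ~7M GPU hours, per TII
NUM_GPUS = 4096              # GPUs used simultaneously

wall_clock_hours = TOTAL_GPU_HOURS / NUM_GPUS
wall_clock_days = wall_clock_hours / 24

print(f"~{wall_clock_hours:,.0f} hours, i.e. ~{wall_clock_days:.0f} days")
# → ~1,709 hours, i.e. ~71 days
```

In other words, even with thousands of GPUs running in parallel, a model of this scale takes on the order of two and a half months to train.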

UAE has Oil Money 

The UAE, as an oil-rich nation, has ample financial resources at its disposal. According to a report, hydrocarbons continue to play a critical role in the UAE economy, with the oil and gas industry directly accounting for 30% of the UAE’s GDP and 13% of its exports.

The UAE is channelling its oil revenue into AI projects. Six years ago, it launched the National Strategy for Artificial Intelligence 2031, which aims to make AI a significant contributor to the economy, targeting up to 13.6% of GDP by 2030.

In 2020, the UAE government established the Advanced Technology Research Council (ATRC) to promote scientific research and innovation in AI. A few months later, ATRC established TII, which today is behind the creation of Falcon 180B. There is no doubt that the UAE is bullish on AI investment. In June, when OpenAI CEO Sam Altman visited Abu Dhabi, he praised the nation’s foresight in recognising the potential of AI, saying the city “has been talking about AI since before it was cool”.

While the rest of the world has struggled to procure NVIDIA GPUs, the UAE secured access to thousands of NVIDIA chips, which it used to build the Falcon models as early as May. Moreover, the report added that the UAE wants to own and control its computational power and talent without depending on China or the US. There is no doubt that it has the capital, energy resources and talent to do so.

Similarly, Saudi Arabia has purchased no fewer than 3,000 H100 chips, valued at $40,000 each. The acquisition was facilitated through the public research institution King Abdullah University of Science and Technology (KAUST). A little number-crunching shows that Saudi Arabia invested a staggering $120 million to secure this impressive array of GPUs.
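The number-crunching behind that figure is straightforward, using only the two numbers quoted above:

```python
# Saudi Arabia's reported GPU bill: 3,000 H100s at $40,000 apiece.
NUM_GPUS = 3_000
PRICE_PER_GPU = 40_000  # USD, per the article

total_cost = NUM_GPUS * PRICE_PER_GPU
print(f"${total_cost:,}")  # → $120,000,000
```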

This is why, when the US tried to ban the export of AI chips to Middle Eastern nations, both AMD and NVIDIA raised eyebrows. All of the world’s major economies are now engaged in the LLM race, which has led to a cold war of sorts, with the US trying its best to forbid domestic AI chip manufacturers from supplying its competitors.

Not only this, the UAE’s G42 recently launched Jais, an Arabic-language AI model with 13 billion parameters. Jais was built on supercomputers from the Silicon Valley-based Cerebras Systems, with which G42 had signed a $100 million deal. With NVIDIA’s chips in short supply, the UAE was smart enough to seek alternatives.

Moreover, in 2021, G42 raised $800 million from the US tech investment firm Silver Lake, which has backing from Mubadala, the UAE’s sovereign wealth fund.

What about OpenAI?

Coming to OpenAI, the company’s progress is largely dependent on the multi-billion dollar investment it received from Microsoft at the beginning of the year. Recent developments, however, suggest that it has exhausted that investment. Altman recently posted on X that the company is not releasing GPT-5 or GPT-4.5 in the near future, asking people to calm down.

According to a report by The Information, OpenAI’s losses roughly doubled to around $540 million last year as it developed ChatGPT and GPT-4. According to reports, training GPT-3, with its 175 billion parameters, cost the company more than $4 million. Now, with GPT-4 rumoured to have about 1.76 trillion parameters, roughly ten times as many, a linear increase in cost per parameter would put the cost of building the model at around $46 million. Again, this is a simplified estimate, and the actual cost may vary based on various factors, including research and development, talent, hardware improvements, and more.
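The linear-scaling extrapolation can be sketched in a few lines. Note the assumptions: a $4.6 million base figure for GPT-3 (a commonly reported estimate consistent with the “more than $4 million” above) and the rumoured, unconfirmed 1.76 trillion parameter count for GPT-4.

```python
# Back-of-the-envelope cost extrapolation, assuming cost scales
# linearly with parameter count (a simplification, as the article notes).
GPT3_PARAMS = 175e9    # 175 billion parameters
GPT3_COST = 4.6e6      # USD -- assumed base figure from press reports
GPT4_PARAMS = 1.76e12  # 1.76 trillion parameters (rumoured)

cost_per_param = GPT3_COST / GPT3_PARAMS
gpt4_estimate = cost_per_param * GPT4_PARAMS

print(f"~${gpt4_estimate / 1e6:.1f} million")  # → ~$46.3 million
```

The real cost is almost certainly higher, since per-parameter cost tends to grow with scale rather than stay flat, but the exercise shows the order of magnitude implied by the assumption.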

This explains why OpenAI has been shying away from releasing GPT-4’s multimodal capabilities to the public, or disclosing its parameter count, which the team appears to be withholding deliberately to avoid unwanted attention. Who knows, maybe OpenAI fooled us all and we never actually got GPT-4.

Altman had previously suggested OpenAI may try to raise as much as $100 billion in the coming years to achieve its aim of developing AGI. Maybe OpenAI should also attract some oil money, or even expand to the Middle East. Interestingly, Microsoft is already planning to do just that.

For now, OpenAI is trying to attract enterprises in order to stay in business. It has announced its inaugural developer conference, to take place in San Francisco on November 6, 2023, where it hopes developers from around the world will come up with new ideas and tools for ChatGPT and its APIs.


Siddharth Jindal

Siddharth is a media graduate who loves to explore tech through journalism and putting forward ideas worth pondering about in the era of artificial intelligence.