Cerebras Wants What NVIDIA Has

While OpenAI apparently utilised 10,000 NVIDIA GPUs to train ChatGPT, Cerebras claims to have trained their models to the highest accuracy for a given compute budget.

Recently, Cerebras Systems released a series of open-source GPT-based large language models (LLMs) for the research community. The Silicon Valley-based firm trained models with 111 million, 256 million, 590 million, 1.3 billion, 2.7 billion, 6.7 billion, and 13 billion parameters, all on 16 CS-2 systems in its Andromeda AI supercomputer.

What’s interesting is that Cerebras boasts of being the first company to train LLMs of up to 13 billion parameters on AI systems that do not rely on GPUs. The company is also sharing the models, weights, and training recipe under the industry-standard Apache 2.0 licence.

This is also the first time the company is branching out into the generative AI space, looking to claim a piece of the pie. More than anything, the chip startup is trying to follow in NVIDIA’s footsteps as it explores AI, uncharted territory for the firm, which was established in 2015.

But Cerebras has not only taken a page out of NVIDIA’s book; one can argue that it is refining it, and that its timing is mere coincidence.

Cerebras started making waves as they foresaw the generative AI boom driven by large language models such as Microsoft NLG, OpenAI’s GPT-4, NVIDIA’s Megatron, and BAAI’s Wu Dao 2.0, among others. In 2021, they unveiled the world’s first multi-million-core AI cluster architecture, which could handle neural networks with up to 120 trillion parameters.

Later, in November of 2022, Cerebras introduced one of the biggest AI supercomputers—Andromeda.

NVIDIA, really? 

Let’s take a look back. NVIDIA started as a manufacturer of GPUs for gaming and professional graphics applications. However, over the years, they evolved and expanded their product offerings, particularly in the field of AI and machine learning.

One of the key factors that enabled NVIDIA’s shift from a chip manufacturer to a foundational model provider was the development of their CUDA platform, a parallel computing platform and programming model that allows software developers to use NVIDIA GPUs for general-purpose computing tasks, such as scientific computing, machine learning, and AI. This enabled NVIDIA to tap into new markets beyond gaming and graphics and establish themselves as a key player in the AI and ML space.

NVIDIA also invested heavily in developing hardware for deep learning, such as their Tesla GPUs and Tensor Cores, which enabled more efficient and faster processing of deep learning algorithms, making it easier and more accessible for researchers and developers to create AI and ML models.

As models such as ChatGPT and DALL-E 2 launched generative AI into the public consciousness, the buzz around the technology soared to unparalleled levels in 2023. As a result, chips that support AI at scale have become more crucial than ever, and research indicates NVIDIA has taken over 88% of the GPU market.

Consequently, many consider NVIDIA the primary beneficiary of the flourishing generative AI domain.

However, Cerebras is directly taking on NVIDIA:

“While many companies have promised alternatives to NVIDIA GPUs, none have demonstrated both the ability to train large-scale models and the willingness to open source the results with permissive licenses,” the release read.

While OpenAI apparently utilised 10,000 NVIDIA GPUs to train ChatGPT, Cerebras claims to have trained their models to the highest accuracy for a given compute budget.

“These models are trained to the highest accuracy for a given compute budget (i.e., training efficient using the Chinchilla recipe) so they have lower training time, lower training cost, and use less energy than any existing public models.”
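For context, the ‘Chinchilla recipe’ refers to DeepMind’s finding that, for a fixed training compute budget, model loss is roughly minimised by training on about 20 tokens per model parameter. The short sketch below shows what that rule of thumb would imply for the released model sizes; the 20-tokens-per-parameter ratio and the common 6 × parameters × tokens FLOPs approximation are illustrative assumptions, not figures taken from Cerebras’ announcement.

```python
# Illustrative sketch of the Chinchilla rule of thumb applied to the
# Cerebras-GPT model sizes. The ratios below are assumptions for
# illustration, not Cerebras' published training configurations.
CHINCHILLA_TOKENS_PER_PARAM = 20   # DeepMind's compute-optimal ratio
FLOPS_PER_PARAM_PER_TOKEN = 6      # common approximation: C ~ 6 * N * D

model_sizes = [111e6, 256e6, 590e6, 1.3e9, 2.7e9, 6.7e9, 13e9]

for n_params in model_sizes:
    n_tokens = CHINCHILLA_TOKENS_PER_PARAM * n_params
    train_flops = FLOPS_PER_PARAM_PER_TOKEN * n_params * n_tokens
    print(f"{n_params / 1e9:5.2f}B params -> {n_tokens / 1e9:6.0f}B tokens, "
          f"~{train_flops:.1e} training FLOPs")
```

The point of the recipe is that a model trained this way gets the best quality available for the compute spent, rather than the best quality achievable for its parameter count.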

Cerebras also maintains that it completed the training in record time, cutting what typically takes multiple months down to a few weeks. The team credited this speed to the CS-2 systems that make up Andromeda and to their unique weight-streaming architecture, which makes it easy to distribute tasks over a large amount of compute.
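Weight streaming, as Cerebras has described it publicly, keeps the model’s weights in external memory and streams them onto the wafer one layer at a time, so on-chip memory use does not grow with model size and a model can be spread across more CS-2 systems simply by adding them. The snippet below is a purely conceptual sketch of that execution pattern; the class and function names are hypothetical and do not reflect Cerebras’ actual software stack.

```python
# Conceptual sketch only: weights live in an external store and are
# streamed to the accelerator one layer at a time, while activations
# stay on-device. Not Cerebras' API; names are hypothetical.
from typing import List


class ExternalWeightStore:
    """Stand-in for an off-wafer weight store (a MemoryX-like appliance)."""

    def __init__(self, layer_weights: List[List[float]]):
        self.layer_weights = layer_weights

    def stream_layer(self, idx: int) -> List[float]:
        # Weights are fetched on demand, used, then discarded on-device.
        return self.layer_weights[idx]


def forward_pass(store: ExternalWeightStore, activations: List[float]) -> List[float]:
    # On-device memory holds only one layer's weights at a time, so the
    # footprint stays flat regardless of total model size.
    for idx in range(len(store.layer_weights)):
        weights = store.stream_layer(idx)
        activations = [a * w for a, w in zip(activations, weights)]
    return activations


store = ExternalWeightStore([[1.0, 0.5], [2.0, 0.1], [0.3, 4.0]])
print(forward_pass(store, [1.0, 1.0]))
```

In Cerebras’ description, gradients flow back out to the external store in the same streaming fashion, which is what allows cluster size and model size to scale independently.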

Cerebras founder and CEO Andrew Feldman also talked about the firm’s open-source efforts and how they have been welcomed by the community at large. He added, “There is a big movement to close what has been open sourced in AI… It’s not surprising as there’s now huge money in it.”

“The excitement in the community, the progress we’ve made, has been in large part because it’s been so open,” Feldman said.

Shyam Nandan Upadhyay

Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.