
Huawei Researchers Develop LLM With 1.085 Trillion Parameters

It was trained on 329 billion tokens spanning more than 40 natural and programming languages.



A group of Huawei researchers has developed a system that trained PanGu-Σ, a language model with 1.085 trillion parameters, under the MindSpore 5 framework on a cluster of Ascend 910 AI processors. The model was trained on 329 billion tokens over 100 days and was released in the second half of March.

PanGu-Σ inherits the Transformer decoder architecture of PanGu-α and expands its built-in parameters using a Random Routed Experts (RRE) design.
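To make the idea concrete, here is a minimal sketch of an RRE-style layer, written in PyTorch for familiarity rather than Huawei's actual MindSpore code. It illustrates the concept the paper describes: where a standard MoE layer routes tokens through a learned gating network, RRE assigns each domain to experts via a fixed random mapping chosen before training. All names and sizes below (RRELayer, num_experts, d_model, and so on) are hypothetical.

    import torch
    import torch.nn as nn

    class RRELayer(nn.Module):
        """Sketch of a Random Routed Experts feed-forward layer."""

        def __init__(self, num_experts: int, num_domains: int,
                     d_model: int, seed: int = 0):
            super().__init__()
            # A pool of independent feed-forward experts.
            self.experts = nn.ModuleList([
                nn.Sequential(
                    nn.Linear(d_model, 4 * d_model),
                    nn.GELU(),
                    nn.Linear(4 * d_model, d_model),
                )
                for _ in range(num_experts)
            ])
            # Fixed random domain-to-expert mapping: there is no learned
            # router, so the routing never changes during training.
            g = torch.Generator().manual_seed(seed)
            self.register_buffer(
                "domain_to_expert",
                torch.randint(0, num_experts, (num_domains,), generator=g),
            )

        def forward(self, x: torch.Tensor, domain_id: int) -> torch.Tensor:
            # Every token from the same domain is handled by that
            # domain's pre-assigned expert.
            expert = self.experts[int(self.domain_to_expert[domain_id])]
            return expert(x)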

The RRE design makes it simple to extract sub-models from PanGu-Σ for a variety of downstream applications, including conversation, translation, code generation, and general natural language understanding.
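Continuing the hypothetical sketch above, extraction becomes a simple lookup: because the domain-to-expert mapping is fixed, the sub-model for one task needs only the shared weights plus the experts that task routes to, and the rest can be dropped for deployment.

    # Hypothetical helper continuing the sketch above: keep only the
    # expert a given domain routes to; the remaining experts can be
    # discarded when deploying for that domain.
    def extract_expert(layer: RRELayer, domain_id: int) -> nn.Module:
        return layer.experts[int(layer.domain_to_expert[domain_id])]

    # Example: pull out the feed-forward expert for a hypothetical
    # "dialogue" domain from a 64-expert, 40-domain layer.
    layer = RRELayer(num_experts=64, num_domains=40, d_model=512)
    dialogue_ffn = extract_expert(layer, domain_id=3)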

According to the research paper, overall training throughput is 6.3 times higher than that of a model with the same hyper-parameters but a standard MoE architecture. The Chinese-domain sub-model of PanGu-Σ significantly outperforms previous SOTA models, including the 13-billion-parameter PanGu-α and the 260-billion-parameter ERNIE 3.0 Titan, on 16 downstream tasks across six categories in the zero-shot setting, without any multitask finetuning or instruction tuning.

To further illustrate the model's ability to learn effectively and independently from many domains, Huawei gathered datasets in 40 domains, with a significant amount of data in four key ones: Chinese, English, bilingual (Chinese and English), and code.

The research paper asserts that, by expanding PanGu-α and continuously training it on 329 billion tokens, PanGu-Σ achieves state-of-the-art results in a variety of downstream tasks such as few-shot NLU, open-domain dialogue, question answering, machine translation, and code generation.


Shyam Nandan Upadhyay

Shyam is a tech journalist with expertise in policy and politics, and exhibits a fervent interest in scrutinising the convergence of AI and analytics in society. In his leisure time, he indulges in anime binges and mountain hikes.