
NVIDIA’s Nemotron-4 15B Beats Mistral, Gemma, and Llama 2 in Reasoning

Qwen seems to be missing.


NVIDIA recently released a new language model, Nemotron-4 15B, trained on a staggering 8 trillion text tokens. The model comprises 15 billion parameters and can perform a variety of tasks across English, coding, and multiple natural languages.

The researchers noted that Nemotron-4 15B outperforms other similarly sized, decoder-only transformer models in four of seven evaluation areas and competes closely with the leading models in the remaining three. 

Nemotron-4 15B matches Qwen-14B on the MMLU and code benchmarks while outperforming Gemma 7B, Mistral 7B and LLaMA-2 34B. It beats every other model in reasoning, though it falls short of Qwen in maths. Interestingly, Qwen is missing from the reasoning comparison.

Nemotron-4 15B outperforms mGPT 13B and XGLM 7.5B in multilingual classification, and does better than PaLM-62B and Mistral 7B at generating multilingual text. 

NVIDIA’s Latest

Nemotron-4 15B uses a standard decoder-only transformer architecture, generating text one token at a time based on the tokens that came before. It has 32 transformer layers, a hidden size of 6,144, and 48 attention heads for capturing context. Its training data mixed English, multilingual, and source-code text, with an emphasis on quality and diversity to improve performance across natural and programming languages.
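Those numbers map onto a familiar decoder-only configuration. Below is a minimal Python sketch, not NVIDIA's actual code, showing how the reported dimensions fit together; the field names and the vocabulary size are illustrative assumptions.

```python
# A minimal sketch (not NVIDIA's code) of how the reported Nemotron-4 15B
# dimensions fit a decoder-only transformer config.
from dataclasses import dataclass

@dataclass
class DecoderOnlyConfig:
    num_layers: int = 32           # 32 transformer layers (per the article)
    hidden_size: int = 6144        # model width
    num_attention_heads: int = 48  # attention heads (128 dims per head)
    vocab_size: int = 256_000      # assumption: not stated in the article

cfg = DecoderOnlyConfig()

# Sanity check: a common rough estimate of transformer parameter count is
# 12 * layers * hidden_size^2 (attention + MLP blocks, embeddings ignored).
approx = 12 * cfg.num_layers * cfg.hidden_size ** 2
print(f"~{approx / 1e9:.1f}B parameters")  # ~14.5B, close to the quoted 15B
```

The back-of-the-envelope estimate lands near the quoted 15 billion once embedding parameters are added back in, which suggests the article's figures describe a fairly conventional dense transformer.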

The model was trained on 384 DGX H100 nodes. This extensive compute budget allowed Nemotron-4 15B to achieve high accuracy across a broad range of tasks, including commonsense reasoning, maths, code, and multilingual evaluations, demonstrating its versatility and efficiency.

Nemotron-4 15B’s achievements are part of NVIDIA’s ongoing efforts in AI and model development.

Key offerings include the Megatron-LM series, optimised for tasks like text summarisation and question answering, and BERT-based models for understanding sentence context.

Although Nemotron-4 15B is not open source, its predecessor, Nemotron-3, a family of 8-billion-parameter models, is available on GitHub. Nemotron-3 Chat is fine-tuned with supervised fine-tuning (SFT) to produce accurate and informative responses to prompts. 
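For readers unfamiliar with the term, supervised fine-tuning trains a base model on prompt-response pairs, computing the next-token loss only on the response tokens. The sketch below is a generic illustration of that objective, not NVIDIA's pipeline; the function name, tensor shapes, and toy inputs are assumptions for the example.

```python
# A generic sketch of the SFT objective: next-token cross-entropy computed
# only on response tokens, with prompt tokens masked via the conventional
# ignore index (-100). Not NVIDIA's actual training code.
import torch
import torch.nn.functional as F

def sft_loss(logits: torch.Tensor, input_ids: torch.Tensor,
             prompt_len: int) -> torch.Tensor:
    """logits: (seq_len, vocab_size); input_ids: (seq_len,)."""
    labels = input_ids.clone()
    labels[:prompt_len] = -100  # don't train on the prompt tokens
    # Shift by one so position t predicts token t+1.
    return F.cross_entropy(logits[:-1], labels[1:], ignore_index=-100)

# Toy usage with random tensors standing in for a real model's output.
vocab_size, seq_len = 100, 12
loss = sft_loss(torch.randn(seq_len, vocab_size),
                torch.randint(0, vocab_size, (seq_len,)),
                prompt_len=5)
print(loss.item())
```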

K L Krithika

K L Krithika is a tech journalist at AIM. Apart from writing tech news, she enjoys reading sci-fi and pondering impossible technologies, trying not to confuse them with reality.