Indian Developers Top Hugging Face Leaderboard with GenZ 70B

The GenZ 70B open source model ranks No.6 among all open LLMs and sits atop the leaderboard for instruction-tuned LLMs on Hugging Face

GenZ 70B, an instruction fine-tuned model that comes with a commercial licensing option, holds the top spot on Hugging Face's leaderboard of instruction-tuned LLMs. It also ranks No.6 among open LLMs across all categories. This is the first time we are seeing such a development from India. 

Accubits Technologies, a full-service software development and technology consulting company, is an Indian company with a corporate office in the US. The company, in collaboration with Bud Ecosystem, has open-sourced their fifth large language model – GenZ 70B.

GenZ, an advanced LLM, is fine-tuned on Meta’s open-source Llama-2 70B parameter model. The fine-tuning primarily aimed to improve its reasoning, role-playing, and writing abilities. The company chose Llama-2 because it is a SOTA pretrained model among open-source LLMs that permit commercial use. 

“GenZ 70B model uses RoPE positional embedding, which allows for context interpolation, implying that the model’s context length can be extended later, if required. It also comes with attention mechanisms like Ghost Attention that provide better memory, computing, and alignment. Moreover, it is already pre-trained on 2 trillion tokens,” said Charush S Nair, CTO of Accubits Technologies in an exclusive interaction with AIM.  
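The context-extension property Nair mentions follows from how rotary position embeddings (RoPE) work: queries and keys are rotated by an angle proportional to their position, so attention scores depend only on the *relative* offset between tokens, and positions can later be rescaled ("interpolated") to squeeze a longer context into the trained range. A minimal numpy sketch of the idea (not GenZ's actual implementation):

```python
import numpy as np

def rope(x, pos, base=10000.0, scale=1.0):
    """Apply rotary position embedding to vector x at position pos.
    scale < 1 implements position interpolation: squeezing longer
    contexts into the position range the model was trained on."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)   # per-pair rotation frequencies
    angles = (pos * scale) * freqs
    x1, x2 = x[:half], x[half:]
    return np.concatenate([x1 * np.cos(angles) - x2 * np.sin(angles),
                           x1 * np.sin(angles) + x2 * np.cos(angles)])

# The query-key score depends only on relative offset: shifting both
# positions by the same amount leaves the dot product unchanged.
rng = np.random.default_rng(0)
q, k = rng.standard_normal(8), rng.standard_normal(8)
s1 = rope(q, 5) @ rope(k, 2)       # positions 5 and 2 (offset 3)
s2 = rope(q, 105) @ rope(k, 102)   # positions 105 and 102 (offset 3)
print(np.isclose(s1, s2))          # True: relative-position property
```

This relative-position property is what makes interpolation viable: halving `scale` lets positions up to twice the trained length map back into familiar angles.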

Surpassing Other LLMs

In initial assessments, the model showcased superior performance. It achieved a score of 70.32 on the MMLU benchmark (Measuring Massive Multitask Language Understanding), surpassing Llama-2 70B’s score of 69.83. Furthermore, GenZ 70B achieved an outstanding score of 7.34 on the MT-Bench (multi-turn) benchmark. 

“Even though numerous fine-tuned models are out there, most do not offer commercial licences. GenZ stands out mainly for two reasons: one, it offers a commercial licence, and two, it offers good performance,” said Nair. 

The models have been refined through supervised fine-tuning (SFT), a choice arrived at after multiple experiments showed SFT to be the best option.  “Generally, PEFT (Parameter Efficient Fine-tuning) methods are used for fine-tuning LLMs. However, it does not work well for long-term multistage fine-tuning because the accuracy of the results usually drops with the number of stages and eventually leads to catastrophic forgetting & model drift. We have also noticed that PEFT methods impact the model’s generalisation capability more than supervised fine-tuning,” said Nair. 
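For context on the trade-off Nair describes: full SFT updates every weight of the model, while a typical PEFT method such as LoRA freezes the base weights and learns only a low-rank update W' = W + BA. A toy numpy illustration of the parameter counts involved (dimensions are illustrative, not GenZ's):

```python
import numpy as np

# One hypothetical 4096x4096 attention projection matrix.
d_in, d_out, rank = 4096, 4096, 8

full_params = d_in * d_out            # weights updated by full SFT
lora_params = rank * (d_in + d_out)   # weights updated by a rank-8 LoRA adapter
print(full_params, lora_params)       # LoRA trains ~0.4% as many parameters

# The merged weight a LoRA adapter produces: W' = W + B @ A.
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)) * 0.02   # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.02
B = np.zeros((d_out, rank))                     # zero-init: training starts at W
W_merged = W + B @ A                            # equals W until B is updated
```

The efficiency comes at a cost: because each PEFT stage only nudges a small subspace, stacking many such stages can compound approximation error, which is consistent with the drift Nair reports for multistage fine-tuning.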

GenZ models’ capability comparison with GPT-3.5. Source: Accubits

“As robust reasoning capabilities are very important for an LLM model to be used for commercial applications, we primarily instruct-tuned the model for better reasoning, roleplay, and writing capabilities. Some of the primary use cases and business applications include business analysis, risk analysis,  project scoping, and conversational tools,” said Nair. He also believes that organisations can use GenZ 70B to address niche challenges and develop innovative solutions.  

The smaller quantized versions of the GenZ models make them accessible, enabling their use even on personal computers. Three parameter counts (7B, 13B and 70B) are available to the open-source community, in 32-bit and 4-bit quantizations. 
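The reason 4-bit versions fit on a personal computer is simple arithmetic: each weight is stored in 4 bits instead of 32, roughly an 8x memory reduction. A toy sketch of symmetric 4-bit quantization (real schemes such as GPTQ or NF4 are blockwise and more sophisticated; this only shows the storage/accuracy trade-off):

```python
import numpy as np

def quantize_4bit(w):
    """Toy symmetric 4-bit quantization: map floats to 16 integer
    levels (-8..7) plus one float32 scale per tensor."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32) * 0.05  # fake weight tensor
q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)

bits_fp32 = w.size * 32
bits_int4 = w.size * 4 + 32          # 4 bits per weight + one fp32 scale
print(bits_fp32 / bits_int4)         # close to 8x compression
print(np.abs(w - w_hat).max() <= scale / 2)  # error bounded by half a step
```

Scaled to a 70B-parameter model, the same ratio takes ~280 GB of 32-bit weights down to roughly 35 GB, which is what brings such models within reach of high-end consumer hardware.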

Limitations Remain

While the model offers versatility and higher capability compared to other models, it comes with inherent limitations in practical application. The model-maker has advised caution when considering deployment for production purposes, and since GenZ 70B is trained on extensive web data, like other LLMs it may exhibit online biases and stereotypes. 

“We recommend users of GenZ to consider fine-tuning it for the specific set of tasks of interest,” said the CTO. The company blog likewise reiterates the need for precautions and guardrails when using the model in production. 

Finding exact use cases for the open-sourced GenZ 70B model might still be a challenge. Considering how a number of big tech companies and research groups are releasing superior open-source models such as Meta’s Llama-2, Falcon, Vicuna and others, a model such as GenZ can face hurdles when it comes to adoption in the highly competitive market. 

With GenZ, the company is on a mission to build open-source foundational models that approach the knowledge and reasoning capabilities of GPT-4, protect privacy, and can even be hosted on a laptop. “The power of LLMs should not be exclusive but should be leveraged for the collective advancement of society. After all, technological progress reaches its full potential when it can be harnessed by all, not just by a privileged few,” said Nair.  


Vandana Nair
As a rare breed of engineering, MBA, and journalism graduate, I bring a unique combination of technical know-how, business acumen, and storytelling skills to the table. My insatiable curiosity for all things startups, businesses, and AI technologies ensure that I'll always bring a fresh and insightful perspective to my reporting.
