An Open-Source RLHF Implementation of LLaMA

The library supports all LLaMA model architectures ranging from 7 billion to 65 billion parameters.

Within a week of the release of Meta’s open-source LLM, LLaMA, we have an implementation of it based on Reinforcement Learning from Human Feedback (RLHF). ChatLLaMA, developed by Nebuly, claims a training process up to 15 times faster than ChatGPT’s, which makes it well suited for developers who want to fine-tune and personalise ChatLLaMA-based assistant services.

Since the implementation is built on LLaMA, which is significantly smaller and faster than GPT-3, it enables faster inference and cost-effective ChatGPT-like assistants. This is helped by ChatLLaMA’s built-in support for DeepSpeed ZeRO.
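To give a sense of what DeepSpeed ZeRO support looks like in practice, here is a minimal sketch of a standard DeepSpeed ZeRO stage-2 configuration of the kind a training library could pass to `deepspeed.initialize()`. The keys shown are standard DeepSpeed config fields; the exact configuration ChatLLaMA uses internally is an assumption, not documented here.

```python
# Sketch of a DeepSpeed ZeRO stage-2 config (standard DeepSpeed fields;
# whether ChatLLaMA exposes exactly these knobs is an assumption).
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},          # train in half precision to cut memory
    "zero_optimization": {
        "stage": 2,                     # partition optimizer states and gradients across GPUs
        "offload_optimizer": {"device": "cpu"},  # spill optimizer states to CPU RAM
    },
}

print(ds_config["zero_optimization"]["stage"])
```

Stage 2 partitions optimizer states and gradients across data-parallel workers, which is what lets comparatively modest hardware fine-tune multi-billion-parameter models.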


ChatGPT, the chatbot by OpenAI, is likewise built by applying RLHF to GPT-3.5. Since Meta’s LLaMA was not fine-tuned for instruction-following tasks, ChatLLaMA fills that gap by adding an RLHF training pipeline on top of it.
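The core RLHF idea can be illustrated with a toy policy-gradient loop: a reward model scores candidate responses, the policy samples one, and a REINFORCE-style update shifts probability toward responses the reward model prefers. This is a deliberately tiny sketch of the concept, not ChatLLaMA’s actual code; the responses, rewards, and hyperparameters are all illustrative.

```python
import math
import random

random.seed(0)

# Stand-in reward model: in a real RLHF pipeline this is a learned network
# trained on human preference data, not a lookup table.
responses = ["helpful answer", "unhelpful answer"]
reward_model = {"helpful answer": 1.0, "unhelpful answer": -1.0}

# Stand-in policy: logits over candidate responses (an LLM in practice).
logits = [0.0, 0.0]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

lr = 0.5
for _ in range(200):
    probs = softmax(logits)
    i = random.choices(range(len(responses)), weights=probs)[0]
    reward = reward_model[responses[i]]
    # REINFORCE update: gradient of log-prob of the sampled response,
    # scaled by the reward it earned.
    for j in range(len(logits)):
        indicator = 1.0 if j == i else 0.0
        logits[j] += lr * reward * (indicator - probs[j])

probs = softmax(logits)
# After training, the policy strongly prefers the higher-reward response.
print(probs[0] > 0.9)
```

Production RLHF (as in ChatGPT and ChatLLaMA-style libraries) replaces this with PPO over token sequences plus a KL penalty against the base model, but the feedback loop is the same.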

The library supports all LLaMA model architectures, ranging from 7 billion to 65 billion parameters, giving developers flexibility to trade off training time against inference performance. It also supports fine-tuning on custom datasets alongside Meta’s original weights, and includes built-in support for generating your own dataset.
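For illustration, a custom fine-tuning dataset for an RLHF pipeline typically takes the general shape below: prompt/response pairs, optionally with a human preference score for reward-model training. The field names and JSONL format here are an assumed, generic schema, not ChatLLaMA’s documented one.

```python
import json

# Hypothetical dataset schema (field names are illustrative, not ChatLLaMA's API).
examples = [
    {"prompt": "Explain RLHF in one sentence.",
     "response": "RLHF fine-tunes a model with a reward signal learned from human preferences.",
     "score": 1.0},   # preferred response
    {"prompt": "Explain RLHF in one sentence.",
     "response": "I don't know.",
     "score": 0.0},   # dispreferred response
]

# Write one JSON object per line (JSONL), a common format for such datasets.
with open("custom_dataset.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Reload to verify the round trip.
with open("custom_dataset.jsonl") as f:
    loaded = [json.loads(line) for line in f]
print(len(loaded))  # → 2
```

Pairs of scored responses to the same prompt are exactly what a reward model needs to learn which outputs humans prefer.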

The developers are also calling for open-source contributions from the community, since the library is in its very early stages.

Nebuly-AI has been releasing open-source plug-and-play modules focused on optimisation, assisting breakthroughs in the AI community.

The developers have previously released Speedster, for maximising inference speed on your hardware; Nos, for maximising the performance of GPU resources in a Kubernetes cluster; OpenAlphaTensor, for generating custom matrix-multiplication algorithms; and Forward-Forward, for testing the Forward-Forward algorithm in PyTorch. They have several other projects in the pipeline, including a GPT Optimiser.

Recently, Colossal-AI also released an open-source, PyTorch-based implementation of ChatGPT that requires fewer computing resources and covers all the stages of building a chatbot.

Mohit Pandey
Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.
