Within a week of the release of Meta's open-source LLM, LLaMA, an implementation based on Reinforcement Learning from Human Feedback (RLHF) has already arrived. ChatLLaMA, developed by Nebuly, claims a training process up to 15 times faster than ChatGPT's, making it well suited for developers looking to fine-tune and personalise ChatLLaMA-based assistant services.
Since the implementation is built on LLaMA, which is significantly smaller and faster than GPT-3, it enables faster inference and cost-effective ChatGPT-like assistants. On the training side, ChatLLaMA ships with built-in support for DeepSpeed ZeRO to speed up fine-tuning and reduce its memory footprint.
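DeepSpeed ZeRO reduces per-GPU memory by partitioning optimiser state, gradients, and parameters across devices. As a rough illustration only, and not ChatLLaMA's actual setup, here is a minimal sketch of wrapping a causal language model in a DeepSpeed engine with a ZeRO stage-3 config; the checkpoint path, hyperparameters, and config values are placeholder assumptions.

```python
import deepspeed
from transformers import AutoModelForCausalLM

# Illustrative ZeRO stage-3 config: optimiser state, gradients, and parameters
# are partitioned across GPUs, with CPU offload to fit large models in memory.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 1,
    "fp16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
}

# "path/to/llama-7b" is a placeholder; point it at a local LLaMA checkpoint.
model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")

# Run under the DeepSpeed launcher, e.g. `deepspeed train.py`.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```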
The ChatLLaMA repository is available on GitHub.
ChatGPT, the chatbot by OpenAI, is likewise built by fine-tuning GPT-3.5 with RLHF. Since Meta's LLaMA was not fine-tuned for instruction-following tasks, ChatLLaMA makes that possible by bringing in an RLHF training pipeline.
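For context on what RLHF involves: a reward model is first trained on human preference comparisons, and the language model is then optimised against that reward, typically with PPO. The sketch below shows the standard pairwise reward-model objective from the InstructGPT line of work; it is a generic illustration, not ChatLLaMA's own code.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(chosen_scores: torch.Tensor,
                      rejected_scores: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss: train the reward model to score the
    human-preferred response above the rejected one."""
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()

# Toy example: scalar reward-model scores for two (chosen, rejected) pairs.
chosen = torch.tensor([1.2, 0.7])
rejected = torch.tensor([0.3, 0.9])
print(reward_model_loss(chosen, rejected))  # low only when chosen > rejected
```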

The library supports all LLaMA model architectures, ranging from 7 billion to 65 billion parameters, giving developers flexibility to trade off training time against inference performance. It also supports adding custom datasets for the fine-tuning process alongside Meta's original weights, and includes built-in support for generating your own dataset.
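As an illustration of what a custom dataset could look like, here is a hypothetical prompt/completion format serialised to JSON; the field names and file layout are assumptions for the sketch, not ChatLLaMA's confirmed schema.

```python
import json

# Hypothetical conversational fine-tuning examples; field names are
# illustrative assumptions, not ChatLLaMA's documented format.
examples = [
    {
        "user_input": "Explain RLHF in one sentence.",
        "completion": "RLHF fine-tunes a language model against a reward "
                      "model trained on human preference comparisons.",
    },
]

with open("custom_dataset.json", "w") as f:
    json.dump(examples, f, indent=2)
```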
The developers are also calling for open-source contributions from the community, since the library is still at a very early stage.
Nebuly-AI has been releasing open-source plug-and-play modules for unlocking optimisation gains and assisting breakthroughs in the AI community.
The developers have previously released Speedster, for maximising inference speed on your hardware; Nos, for maximising the performance of GPU resources in a Kubernetes cluster; OpenAlphaTensor, for generating custom matrix-multiplication algorithms; and Forward-Forward, for testing the Forward-Forward algorithm in PyTorch. They have several other projects in the pipeline, including a GPT optimiser.
Recently, Colossal-AI also released an open-source, PyTorch-based implementation of ChatGPT that requires fewer computing resources and covers all the stages of building such a chatbot.