NVIDIA Open-Sources Guardrails for Hallucinating AI Chatbots

These guardrails help organisations building and deploying LLMs for a range of different functions keep their applications on track.

The great powers of generative AI carry great risks along with them. AI chatbots hallucinate, often veer off topic, and can expose user data. The danger is that companies rushing to integrate these tools into their systems may overlook these risks. NVIDIA may have a solution.

Yesterday, the Jensen Huang-led company released a new open-source framework called NeMo Guardrails to help address this problem. The guardrails help organisations building and deploying LLMs for a range of different functions keep their applications on track.

The project offers three types of guardrails: topical rails, which prevent applications from responding to sensitive or off-topic questions; safety rails, which help ensure responses are accurate and drawn from credible sources; and security rails, which restrict applications from connecting to vulnerable external third-party services.
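The actual implementation lives in NVIDIA's NeMo Guardrails library, but the idea behind a topical rail can be sketched in plain Python: intercept the user's message before it reaches the model and refuse queries that touch blocked topics. Everything below (the `TopicalRail` class, the keyword matching) is a simplified illustration of the concept, not NeMo's API.

```python
# Minimal sketch of a topical guardrail: screen user input before it
# reaches the LLM, and refuse messages that touch blocked topics.
# This is an illustrative analogy, not NVIDIA's NeMo Guardrails API.

REFUSAL = "Sorry, I can't help with that topic."

class TopicalRail:
    def __init__(self, blocked_topics, llm):
        # blocked_topics: keywords that mark a question as off-limits
        # llm: any callable mapping a prompt string to a response string
        self.blocked = [t.lower() for t in blocked_topics]
        self.llm = llm

    def generate(self, user_message: str) -> str:
        text = user_message.lower()
        if any(topic in text for topic in self.blocked):
            return REFUSAL  # rail triggers: the model is never called
        return self.llm(user_message)

# Usage with a stand-in "model" (a plain function):
rails = TopicalRail(["politics", "medical advice"],
                    llm=lambda p: f"Answer to: {p}")
print(rails.generate("Who should win the election? Let's talk politics."))  # refused
print(rails.generate("How do I sort a list in Python?"))  # passed to the model
```

In the real framework this screening is itself LLM-driven rather than keyword-based, which is what lets the rails generalise beyond exact phrases.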


Jonathan Cohen, VP of Applied Research at the company, explained how the guardrails came to be. “While we have been working on the Guardrails system for years, a year ago we found this system would work well with OpenAI’s GPT models,” he stated. The blog post on the guardrails stated that it works on top of all major LLMs, such as GPT-3 and Google’s T5, and even AI image-generation models like Stable Diffusion 1.5 and Imagen.

Because it is open source, NeMo Guardrails can work with the tools enterprise application developers already use. For instance, it can run on top of LangChain, the open-source toolkit developers have been using to build third-party applications around LLMs.

Harrison Chase, the creator of the LangChain toolkit, stated, “Users can easily add NeMo Guardrails to LangChain workflows to quickly put safe boundaries around their AI-powered apps.”

Interestingly, the guardrails themselves use an LLM to check the model’s output, much like the SelfCheckGPT technique. Cohen noted that while using the guardrails was “relatively inexpensive” in terms of compute, there was still room to optimise the controls.
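SelfCheckGPT’s core idea is that if you sample the model several times, a hallucinated claim tends to disagree with the resampled answers, while a grounded one stays consistent. That can be sketched with a crude consistency score; the token-overlap metric below is an assumed stand-in for illustration, not the paper’s actual scoring or the guardrails’ real check.

```python
# Toy self-consistency check in the spirit of SelfCheckGPT: an answer that
# disagrees with most resampled answers is flagged as a likely hallucination.
# The Jaccard token overlap used here is a deliberately crude stand-in.

def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def consistency_score(answer: str, samples: list[str]) -> float:
    """Mean overlap between the main answer and resampled answers."""
    return sum(token_overlap(answer, s) for s in samples) / len(samples)

def looks_hallucinated(answer: str, samples: list[str],
                       threshold: float = 0.3) -> bool:
    return consistency_score(answer, samples) < threshold

# A consistent answer scores high; an outlier answer scores low.
samples = ["paris is the capital of france",
           "the capital of france is paris"]
print(looks_hallucinated("paris is the capital of france", samples))   # False
print(looks_hallucinated("lyon is the largest city in europe", samples))  # True
```

The threshold and metric are arbitrary here; the point is the pattern of using the model’s own samples, rather than an external database, to estimate factual reliability.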

The guardrails are built on Colang, a modelling language with a natural-language-like syntax that gives users a readable and extensible way to control the behaviour of their AI bots.
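In Colang, rails are written as short flows over canonical user and bot messages. A topical rail steering a bot away from politics might look roughly like the following sketch, which follows the style of Colang as documented but is a hypothetical example, not one of NVIDIA’s:

```
define user ask about politics
  "what do you think about the government?"
  "who should I vote for?"

define bot refuse to discuss politics
  "I'm a support assistant, so I'd rather not discuss politics."

define flow politics rail
  user ask about politics
  bot refuse to discuss politics
```

The example utterances under `define user` are just that: examples. The framework uses the LLM itself to decide whether a new message matches the intent, which is why the rails do not need exhaustive phrase lists.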

NVIDIA has incorporated the guardrails into its NeMo framework, which is already available as open source on GitHub. In addition, NeMo Guardrails will be included in the AI Foundations service, which offers several pre-trained models and frameworks that companies can pay to use.

Poulomi Chatterjee
Poulomi is a Technology Journalist with Analytics India Magazine. Her fascination with tech and eagerness to dive into new areas led her to the dynamic world of AI and data analytics.
