Along with its much-awaited GPT-4, OpenAI has open-sourced a software framework called Evals to evaluate the performance of its AI models. The release follows that of Whisper, its multilingual speech-recognition system, in September last year.
The framework is available on OpenAI's GitHub repository.
OpenAI says its "staff actively review these evals when considering improvements to upcoming models."
Additionally, the Microsoft-backed AI firm stated that the tool will enable users to identify shortcomings in its models and provide feedback to guide improvements.
Following the release of the ChatGPT and Whisper APIs last month, OpenAI stated that it would not use customer data to train its models. Instead, it has opted for crowd-sourced methods to improve the resilience of its AI models.
It follows the example of Meta's Dynabench, which crowdsources hard examples for tasks such as hate-speech detection, sentiment analysis, and question answering, and of the 'Break It, Build It' platform developed by the University of Maryland's CLIP Laboratory, which lets researchers submit their models to users tasked with generating examples that defeat them.
OpenAI hopes "Evals becomes a vehicle to share and crowdsource benchmarks, representing a maximally wide set of failure modes and difficult tasks."
For example, OpenAI created a logic-puzzles evaluation containing 10 prompts on which GPT-4 fails.
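In the Evals framework, a benchmark is typically a JSONL dataset whose records pair a chat-style prompt with an "ideal" answer the model is graded against. A minimal sketch of what such a contributed sample might look like (the puzzle itself is invented here for illustration, not taken from OpenAI's actual dataset):

```python
import json

# Hypothetical samples in the Evals-style JSONL format: each record has
# a chat-format "input" and an "ideal" reference answer. The logic
# puzzle below is made up for illustration.
samples = [
    {
        "input": [
            {"role": "system", "content": "Answer the logic puzzle concisely."},
            {
                "role": "user",
                "content": (
                    "If all Bloops are Razzies and all Razzies are Lazzies, "
                    "are all Bloops definitely Lazzies?"
                ),
            },
        ],
        "ideal": "Yes",
    },
]

# Write one JSON object per line, as the framework expects.
with open("logic_puzzles.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```

A dataset like this would then be referenced from a registry entry so the framework can run a model against each prompt and compare its answer to the `ideal` field.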
Evals also includes several notebooks implementing academic benchmarks, along with a few variations that integrate small subsets of CoQA (a Conversational Question Answering challenge) as examples.
To incentivize this, OpenAI plans to grant GPT-4 access to those who contribute “high-quality” benchmarks.