Meta’s Breakthrough Language Model on Par with GPT-4 and Bard in Performance

One of the notable aspects of LIMA’s performance is its ability to generalise well to unseen tasks that were not part of its training data.

Researchers from Meta AI, alongside Carnegie Mellon University, the University of Southern California and Tel Aviv University, today unveiled LIMA, a 65-billion-parameter LLaMA language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modelling.
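The recipe here is plain supervised fine-tuning: the pretrained model is trained with ordinary next-token cross-entropy on the curated prompt–response pairs, with no reward model or RLHF stage. Below is a minimal sketch of that setup using PyTorch and Hugging Face Transformers; the model name, example data, and hyperparameters are illustrative assumptions, not Meta's actual training code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical stand-in checkpoint; LIMA fine-tunes LLaMA-65B.
model_name = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# ~1,000 curated (prompt, response) pairs in LIMA's case; this pair is made up.
examples = [
    ("Plan a weekend itinerary for Rome.", "Day 1: Start at the Colosseum..."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for prompt, response in examples:
    # Standard supervised loss: labels are the input ids themselves, so the
    # model is trained with next-token cross-entropy. (Many SFT pipelines
    # additionally mask the prompt tokens out of the loss.)
    enc = tokenizer(prompt + "\n" + response, return_tensors="pt")
    loss = model(**enc, labels=enc["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```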

Check out the research paper: LIMA: Less Is More for Alignment

Meta’s AI chief Yann LeCun said that LIMA is on par with GPT-4 and Bard in terms of performance.

The researchers said that LIMA has shown strong capabilities in learning specific response formats with minimal training examples. They said that it can effectively handle complex queries, ranging from planning travel itineraries to speculating about alternate history. 

According to the researchers, one of the notable aspects of the new model’s performance is its ability to generalise well to unseen tasks that were not part of its training data. In other words, LIMA can apply its learned knowledge to new and unfamiliar tasks, demonstrating a degree of flexibility and adaptability.

In a controlled human study comparing LIMA against GPT-4, Bard, and DaVinci-003 (a model trained with human feedback), the responses generated by LIMA were quite impressive. In 43% of cases, LIMA’s responses were either equivalent to or preferred over GPT-4’s. Against Bard, that figure was 58%, and against DaVinci-003 it rose to 65%.
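These percentages come from pairwise human preference judgments: for each prompt, annotators see a LIMA response next to a baseline’s response and pick a winner or call it a tie, and the reported figure pools wins and ties. A toy illustration of that arithmetic, with entirely made-up judgment labels:

```python
from collections import Counter

# Hypothetical annotator judgments, one per prompt; not the study's data.
# Each entry is "lima" (LIMA preferred), "baseline", or "tie".
judgments = ["lima", "tie", "baseline", "lima", "tie"]

counts = Counter(judgments)
# "Equivalent or preferred" pools LIMA wins and ties, as in the 43% GPT-4 figure.
win_or_tie = (counts["lima"] + counts["tie"]) / len(judgments)
print(f"LIMA equivalent or preferred: {win_or_tie:.0%}")
```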

It is notable that with only limited instruction-tuning data, models like LIMA can generate high-quality output, supporting the researchers’ hypothesis that almost all of a model’s knowledge is learned during pretraining, and that alignment mainly teaches it the style and format in which to respond.

A few weeks ago, Meta also released MEGABYTE, a scalable architecture for modelling long sequences. This new technique has outperformed existing byte-level models across a range of tasks and modalities, enabling models of sequences of over 1 million bytes.
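MEGABYTE’s core idea is to split a byte sequence into fixed-size patches, run a large “global” model over patch embeddings for long-range context, and a small “local” model over the bytes within each patch. The sketch below captures that decomposition in simplified PyTorch; the sizes and modules are illustrative assumptions, and it omits the causal masking and patch offsets of the published architecture.

```python
import torch
import torch.nn as nn

PATCH = 8                    # bytes per patch (illustrative)
D_LOCAL = 64                 # per-byte embedding size (illustrative)
D_GLOBAL = PATCH * D_LOCAL   # a patch embedding concatenates its byte embeddings

byte_embed = nn.Embedding(256, D_LOCAL)
global_layer = nn.TransformerEncoderLayer(D_GLOBAL, nhead=8, batch_first=True)
local_layer = nn.TransformerEncoderLayer(D_LOCAL, nhead=4, batch_first=True)
to_logits = nn.Linear(D_LOCAL, 256)

def forward(byte_ids):
    """byte_ids: (batch, seq_len), with seq_len divisible by PATCH."""
    b, n = byte_ids.shape
    x = byte_embed(byte_ids)                         # (b, n, D_LOCAL)
    patches = x.reshape(b, n // PATCH, D_GLOBAL)     # group bytes into patches
    ctx = global_layer(patches)                      # long-range model over patches
    # Broadcast each patch's global context back onto its bytes, then run the
    # cheap local model within each patch to produce per-byte predictions.
    ctx = ctx.reshape(b * (n // PATCH), PATCH, D_LOCAL)
    local = local_layer(ctx + x.reshape(b * (n // PATCH), PATCH, D_LOCAL))
    return to_logits(local).reshape(b, n, 256)       # next-byte logits

print(forward(torch.randint(0, 256, (2, 32))).shape)  # torch.Size([2, 32, 256])
```

The efficiency win is that the expensive full attention runs over seq_len / PATCH patch positions rather than seq_len byte positions, which is what makes million-byte contexts tractable.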

Mohit Pandey

Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.