Meta plans to release Llama 3 in July, The Information reported, citing sources familiar with the matter. The largest version of Llama 3 could surpass 140 billion parameters, roughly double the size of its predecessor Llama 2, whose largest version has 70 billion.
Meta aspires for Llama 3 to match GPT-4’s capabilities, including the ability to answer image-based questions. However, the decision on whether to make Llama 3 multimodal, handling both text and images, is still pending and awaits the fine-tuning process. OpenAI, meanwhile, recently introduced Sora, its text-to-video generation model.
One of Meta’s key objectives is to make Llama 3 more responsive to challenging queries, striking a delicate balance between building engaging products and mitigating the risk of inappropriate or inaccurate responses. Google has lately found itself entangled in a series of such challenges, with its AI model Gemini being labelled excessively woke.
To achieve this, Meta plans to appoint an internal figure in the coming weeks to oversee tone and safety training, aiming to make the model’s output more nuanced. Meta’s generative AI group, distinct from its Fundamental AI Research team, is driving the development of Llama.
According to insiders at Meta, researchers are tweaking Llama 3 to make it more interactive when users pose tricky questions, focusing on offering context rather than outright dismissing challenging queries. The upcoming model also aims to better understand words with multiple meanings.
For instance, Llama 3 might understand that a question about how to kill a vehicle’s engine means asking how to shut it off rather than end its life.
Llama holds a crucial place in Meta’s AI strategy, intended to enhance the company’s advertising tools and make its social media apps more appealing. In recent investor discussions, Meta chief Mark Zuckerberg highlighted the launch of Llama 3 and ongoing efforts to improve the Meta AI assistant as key priorities for the year.