Why Meta Took Down Its ‘Hallucinating’ AI Model Galactica

“The reality is that large language models like GPT-3 and Galactica are like bulls in a china shop, powerful but reckless”
On Wednesday, Meta AI and Papers with Code announced the release of Galactica, an open-source large language model with 120 billion parameters trained on scientific knowledge. However, just days after its launch, Meta took Galactica down. Notably, every result generated by Galactica came with the warning: “Outputs may be unreliable. Language Models are prone to hallucinate text.”

“Galactica is trained on a large and curated corpus of humanity’s scientific knowledge. This includes over 48 million papers, textbooks and lecture notes, millions of compounds and proteins, scientific websites, encyclopedias and more,” the paper said.

Galactica was designed to tackle the information overload that comes with accessing scientific information through search engines, where scientific knowledge is not properly organised. However, when members of the community started using Meta’s all-new AI model, many of them found the results to be suspicious. In fact, many took

Pritam Bordoloi
I have a keen interest in creative writing and artificial intelligence. As a journalist, I deep dive into the world of technology and analyse how it’s restructuring business models and reshaping society.