
What Makes Neural Networks Hallucinate?

Researchers at the University of California, Berkeley and Boston University have dug deep into how deep neural networks generate ‘hallucinations’ while creating captions for images. Here, hallucination means that the network produces a caption describing objects or details that are not actually present in the image. The researchers are now working on preventing this kind of hallucination, which would pave the way to building better and more robust artificial intelligence systems. Image-processing performance and image-captioning methods have improved greatly, but the main drawback of these techniques is that they only measure similarity with the given training data, with no mechanism to insert more context. That is why the researchers have proposed a new image
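To make the idea of hallucination concrete, here is a minimal sketch, not the researchers' actual method: it treats a caption as hallucinating whenever it mentions an object that is absent from the image's ground-truth annotations. The function name and the toy vocabulary are illustrative assumptions, not part of the paper.

```python
def hallucinated_objects(caption_words, ground_truth_objects, vocabulary):
    """Return object words the caption mentions that are absent from
    the image's annotated object set (illustrative sketch only)."""
    # Keep only words that the object vocabulary recognises as objects.
    mentioned = {w for w in caption_words if w in vocabulary}
    # Anything mentioned but not annotated counts as hallucinated.
    return mentioned - ground_truth_objects

# Toy example: the caption mentions a "dog" that the image does not contain.
caption = "a dog sitting on a bench in a park".split()
objects_in_image = {"bench", "park", "person"}
object_vocabulary = {"dog", "bench", "park", "person", "cat"}

print(hallucinated_objects(caption, objects_in_image, object_vocabulary))
# A caption hallucinates when this set is non-empty.
```

A per-caption rate like this can be averaged over a dataset to compare how often different captioning models invent objects, which is the failure mode the research targets.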




Abhijeet Katte
As a thorough data geek, Abhijeet spends most of his day building and writing about intelligent systems. He also has deep interests in philosophy, economics and literature.