Google AI Overview’s Suggestion of Adding Glue to Pizza Says a Lot About Human Flaws

Not AI.

Illustration by Nikhil Kumar

Google AI Overview’s latest reply to a query about cheese not sticking to pizza, suggesting the user add glue, has taken the internet by storm, making users question the potential of its generative AI search experience.

This is not the first time. During its initial lab testing phase, when the feature was known as the Search Generative Experience (SGE), it advised users to ‘drink a couple of litres of light-coloured urine in order to pass kidney stones.’

Google also upset Indian IT Minister Rajeev Chandrasekhar after Gemini expressed a biased opinion against India’s Prime Minister Narendra Modi. 

The trend continues. A few months ago, the tech giant had to temporarily suspend Gemini’s image-generation feature after it depicted people of colour in Nazi-era uniforms, producing historically inaccurate and insensitive images.

Who better to explain this than German cognitive scientist and AI researcher Joscha Bach?

Last month, at the AICamp Boston meetup, he discussed the implications of Google Gemini and how the societal biases it reflects push the system towards inaccurate results or outputs.

He said Gemini, despite not being explicitly programmed with certain opinions, ended up exhibiting biased behaviour, such as altering images to promote diversity but, as mentioned earlier, inadvertently depicting Nazi soldiers as people of colour. 

He believes that this bias wasn’t hardcoded but inferred through the system’s interactions and prompts.

Bach said that Gemini’s behaviour reflects the social processes and prompts fed into it rather than being solely algorithmic. He said that the model developed opinions and biases based on the input it received, even generating arguments to support its stance on various issues like meat-eating or reproduction.

He highlighted the potential of such models for sociological study, as they possess a vast understanding of internet opinions. Instead of focusing solely on cultural conflicts, he suggested viewing these AI behaviours as mirrors of society, urging a deeper understanding of our societal condition.

Similarly, in ‘You Are to be Blamed for ChatGPT’s Flaws’, AIM stressed that the cycle of misinformation is mostly driven by human inputs and interactions, not just AI capabilities.

Simply put, the responsibility for misinformation generated by AI largely falls on content creators, media platforms, and users rather than the technology itself. Essentially, human actors play a significant role in perpetuating misinformation, whether through its creation, dissemination, or failure to verify its accuracy. 

That explains why OpenAI has been busy partnering with media agencies. 

AI Hallucination as a Feature 

These types of AI behaviours lead to what you may call ‘AI hallucinations’, which many AI experts, including Yann LeCun, have said are an inherent feature of auto-regressive large language models (LLMs).

“That’s not a major problem if you use them as writing aids or for entertainment purposes. Making them factual and controllable will require a major redesign,” shared LeCun in February last year. Nothing much has changed since.  

Some, like OpenAI chief Sam Altman and Elon Musk, consider AI hallucinations a form of creativity, while others believe hallucinations might help make new scientific discoveries. However, in most cases where a correct response matters, they are a bug, not a feature.

During a chat at Dreamforce 2023 in San Francisco, Altman said AI hallucinations are a fundamental part of the “magic” of systems like ChatGPT, which users have grown fond of. 

Altman highlighted the value OpenAI sees in addressing the technical complexities of hallucinations, noting that they offer unique insights. “A crucial aspect often overlooked is the inherent creativity these systems possess through hallucinations,” he explained. 

Further, he said that while databases suffice for factual queries, AI’s capacity to generate novel ideas truly empowers users. “Our focus is on balancing this creativity with factual accuracy,” he added. 

He emphasised the importance of not restricting AI platforms to only producing content when entirely certain, arguing that such an approach would be shortsighted. 

“Opting for absolute certainty at all times would undermine the very essence of these systems,” Altman asserted. “While it’s tempting to impose strict guidelines, doing so would strip away the enchanting unpredictability that users find so captivating.”

The same goes for Grok, for which Elon Musk is looking to include a ‘Fun Mode’ that gives users a humorous take on the news.

Solving AI Hallucinations 

This aside, however, hallucinations still pose a major problem, and research is going into addressing them. A few experts believe architectural solutions such as vector databases, and particularly vector SQL, can reduce hallucinations.

LLMs often generate misinformation due to their statistical nature, but vector databases offer a way for LLMs to query human-written content for accurate responses. 

Vector SQL involves the LLM querying a database instead of relying solely on its training data, thereby reducing hallucinations. The LLM translates a user’s natural-language question into an SQL query, which a vector SQL engine then executes against the database so the model can answer from the retrieved, human-written content. This method improves the efficiency, flexibility, and reliability of AI-generated content.
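
To make the pattern concrete, here is a minimal sketch of the vector SQL flow described above, assuming a PostgreSQL table docs(id, body, embedding) with a pgvector column; the embed(), llm(), and run_sql() helpers are hypothetical placeholders for an embedding model, a language model, and a database client, not any specific product’s API.

```python
# Sketch of the vector SQL pattern: the SQL query carries a vector-similarity
# clause, the database returns human-written rows, and the LLM answers only
# from those rows instead of from its parametric memory.
from typing import Callable

def answer_with_vector_sql(
    question: str,
    run_sql: Callable[[str, tuple], list],  # hypothetical: executes SQL, returns rows
    embed: Callable[[str], list],           # hypothetical: text -> embedding vector
    llm: Callable[[str], str],              # hypothetical: prompt -> completion
    k: int = 5,
) -> str:
    # In a full system the LLM would generate this SQL from the schema and the
    # user's question; the template is pinned here to keep the sketch simple.
    sql = (
        "SELECT body FROM docs "
        "ORDER BY embedding <-> %s::vector "  # pgvector distance operator
        "LIMIT %s"
    )
    rows = run_sql(sql, (str(embed(question)), k))
    context = "\n\n".join(body for (body,) in rows)

    # Ground the answer strictly in the retrieved content.
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)
```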

While similar methods already exist, vector SQL presents a fresh approach to mitigating hallucinations. Microsoft’s Bing Chat, for instance, uses a system called Prometheus, which combines Bing’s search engine with OpenAI’s GPT models to reduce inaccuracies and provide linked citations that let users check the responses.
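
Prometheus itself is proprietary, but the general search-grounding pattern it illustrates, feeding numbered search results into the prompt and asking the model to cite them, can be sketched as follows; search() and llm() are assumed placeholder helpers, not Bing’s or OpenAI’s actual APIs.

```python
# Generic search-grounded answering with inline citations: retrieve results,
# number them, and instruct the model to cite the source of each claim.
def grounded_answer(question: str, search, llm, top_k: int = 3) -> str:
    # `search` is assumed to return dicts with "title", "url" and "snippet".
    results = search(question)[:top_k]

    numbered_sources = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(results)
    )

    prompt = (
        "Answer the question using only the sources below and cite each claim "
        "with its source number, e.g. [1]. If the sources do not cover the "
        "question, say you are not sure.\n\n"
        f"Sources:\n{numbered_sources}\n\nQuestion: {question}"
    )
    return llm(prompt)
```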

With advancements like vector SQL, the era of hallucination-free LLMs might be on the horizon, offering more reliable and accurate AI-generated content.

Further, advanced techniques like CoVe (Chain-of-Verification) by Meta AI, knowledge graph integration, RAPTOR, conformal abstention, and reducing hallucination in structured outputs via RAG, as well as simply using more detailed prompts, are emerging to mitigate LLM hallucinations.
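
As an illustration of one of these techniques, below is a rough sketch of a CoVe-style loop, following the paper’s high-level recipe of drafting, planning verification questions, answering them independently, and revising; the llm() helper is again a placeholder for any model call, and the prompts are illustrative rather than the ones Meta AI used.

```python
# Chain-of-Verification (CoVe)-style loop: draft an answer, generate
# fact-checking questions about it, answer those questions independently
# (without the draft in view), then rewrite the draft against the checks.
def chain_of_verification(question: str, llm) -> str:
    # 1. Baseline draft, which may contain hallucinations.
    draft = llm(f"Answer concisely: {question}")

    # 2. Plan verification questions targeting the draft's factual claims.
    plan = llm(
        "List three short fact-checking questions, one per line, that would "
        f"verify the claims in this answer:\n{draft}"
    )
    checks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 3. Answer each check independently so errors in the draft cannot leak
    #    into the verification step.
    verified = "\n".join(f"Q: {q}\nA: {llm(q)}" for q in checks)

    # 4. Revise the draft so it only states claims supported by the checks.
    return llm(
        f"Original question: {question}\n"
        f"Draft answer: {draft}\n"
        f"Verified facts:\n{verified}\n"
        "Rewrite the draft so it only makes claims supported by the verified facts."
    )
```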

Vidyashree Srinivas

Vidyashree is enthusiastic about investigative journalism. Now trying to explore how AI solves for all.