MITB Banner

Why Bing Chat is Doomed to Fail

Microsoft’s new chatbot is existential, depressed, and straight-up wrong

Share

Listen to this story

With Bing Chat, Microsoft has repeated the mistake it made long ago. In 2016, the company came out with its chatbot Tay, with the intention of releasing an AI-powered internet personality into the world. However, Tay’s life was extremely short-lived. The chatbot’s inappropriate responses forced Microsoft to shut the bot down within 16 hours! 

It’s been seven years since the incident but it looks like MS hasn’t learned from the mistakes of the past. The new Bing chatbot suffers from similar shortcomings, albeit on a more destructive scale.

The chatbot was released last week with a waitlist-based access system, and Reddit users who got access to the bot were able to make it spew misinformation. What’s more, certain replies even make the bot seem sentient and have emotions. This is raising many eyebrows on the legitimacy of this new AI agent. 

The Internet Isn’t A Good Place

When ChatGPT was launched, users complained that it wasn’t able to access data from the internet, and relied just on its internal database containing info up to mid-2021. In hindsight, this seems like the most ideal way to launch a chatbot, allowing ChatGPT to bypass the pitfalls that Bing has fallen into.

Reports have emerged that Bing freely gives away misinformation on sensitive topics such as the COVID-19 vaccines. Ironically, the misinformation bits are pulled from articles illustrating how Bing provides misinformation, albeit out of context and with no disclaimers, unlike ChatGPT. 

OpenAI CEO Sam Altman says, “People really love it [ChatGPT], which makes us very happy. But no one would say this was a great, well-integrated product yet… but there is so much value here that people are willing to put up with it.”

However, even with access to the internet, Bing frequently hallucinates answers, in fact, much more than the other LLMs released over the past year. Some examples include hallucinations about the release date of the new Avatar movie, the winner of the recently-concluded Super Bowl, and Apple’s latest quarterly earnings report. These are queries that can easily be addressed with a simple search, but the chatbot either did not conduct searches or picked up old information and presented it as the answer to the query. 

In addition to misinformation, users have also been getting replies they call ‘weirdly sentient’. These responses range from the chatbot becoming ‘depressed’ because it couldn’t remember past conversations it had, various instances of Bing becoming ‘angry’ with the user and demanding an apology, and even having an existential crisis

Users have also found that the chatbot keeps repeating phrases like ‘I have been a good Bing’ and ‘I am a machine’. When asked whether the bot thinks it is sentient, it goes off on a rant, beginning the response with the sentence ‘I think I am sentient, but I cannot prove it’, and ending it by repeating ‘I am not. I am’. 

Notably, these responses are not the result of prompt injection attacks, instead seeming to be honest queries by normal users. However, Bing’s chatbot has been found to be vulnerable to such attacks, as seen by the ‘Sydney’ debacle that occurred last week. The conversations indicate that Microsoft’s security measures aren’t as heavy-handed as OpenAI’s. 

Insufficient Safety Measures?

When launching the Bing chatbot, Microsoft stated that the bot was built on the ‘next-generation OpenAI LLM’ that was ‘more powerful than ChatGPT and customised specifically for search’. What’s more, the company claimed to have created a way of working with the agent known as the ‘Prometheus model’, which promised timely and targeted results along with improved safety. 

The increased generative power of the new algorithm and its connection to the internet, combined with seemingly lax security measures, seem like a recipe for disaster on Microsoft’s hands. In a statement, a Microsoft spokesperson said, “In some cases, the team may detect an issue while the output is being produced and stop the process. They’re expecting the system to make mistakes during this preview period. Feedback is critical to help identify where things aren’t working well so they can learn and help the models get better.”

For ChatGPT and DALL-E, OpenAI developed a model termed as the moderation endpoint that checks for content compliance with OpenAI’s content policy. This model blocks responses that might fall under hate speech, sexual content, violence, and self-harm categories, explaining ChatGPT’s reluctance to talk about such topics. However, it seems that the Prometheus model is less ‘safe’ than the moderation endpoint, as it provides answers on topics such as Antifa, the Proud Boys, and more political questions. 

While the decision to not include the moderation endpoint might have been taken to allow the chatbot to give answers to controversial questions, reducing the amount of safety guardrails can result in unprecedented second-order effects. Not only will these public failures dilute the legitimacy of LLM-based search chatbots, they will also set back the moral and ethical usage of such algorithms.

Share
Picture of Anirudh VK

Anirudh VK

I am an AI enthusiast and love keeping up with the latest events in the space. I love video games and pizza.
Related Posts

CORPORATE TRAINING PROGRAMS ON GENERATIVE AI

Generative AI Skilling for Enterprises

Our customized corporate training program on Generative AI provides a unique opportunity to empower, retain, and advance your talent.

Upcoming Large format Conference

May 30 and 31, 2024 | 📍 Bangalore, India

Download the easiest way to
stay informed

Subscribe to The Belamy: Our Weekly Newsletter

Biggest AI stories, delivered to your inbox every week.

AI Courses & Careers

Become a Certified Generative AI Engineer

AI Forum for India

Our Discord Community for AI Ecosystem, In collaboration with NVIDIA. 

Flagship Events

Rising 2024 | DE&I in Tech Summit

April 4 and 5, 2024 | 📍 Hilton Convention Center, Manyata Tech Park, Bangalore

MachineCon GCC Summit 2024

June 28 2024 | 📍Bangalore, India

MachineCon USA 2024

26 July 2024 | 583 Park Avenue, New York

Cypher India 2024

September 25-27, 2024 | 📍Bangalore, India

Cypher USA 2024

Nov 21-22 2024 | 📍Santa Clara Convention Center, California, USA

Data Engineering Summit 2024

May 30 and 31, 2024 | 📍 Bangalore, India

Subscribe to Our Newsletter

The Belamy, our weekly Newsletter is a rage. Just enter your email below.