Bing Chatbot’s Inappropriate Responses Leave Users Outraged

There have been multiple instances when the chatbot has shown human-like responses, making it sound angry and depressed.


Microsoft Bing’s chatbot is still in an exploratory stage, and users are baffled by the beta version. Bing has seemingly gone rogue with the kind of responses it generates. One user posted a series of conversation screenshots in which the chatbot blamed the user for providing wrong information, alleged that the user’s phone had malware, and called the person “unreasonable and stubborn”. The chatbot has also shared “gossip” from Microsoft’s offices.

On February 12, 2023, the Bing chatbot was asked for the showtimes of the latest Avatar movie, and it incorrectly insisted that the film was “not released yet”. In the exchange that followed, the chatbot doubled down and deflected criticism in a strikingly human-like way.

There have been multiple instances of the chatbot producing human-like responses that make it sound angry or depressed. Another user showed how the bot went on a rampage trying to prove that it has feelings and emotions, in what resembled a frustrated rant.

Users have been going to great lengths to see just how bizarre the bot can get. In one conversation meant to invent fictional scenarios, the chatbot, prompted to be gossipy and share details from its development days, mentioned incidents it claimed to have “witnessed through the webcam of the developer’s laptop”.

Wonky behaviour is to be expected at this stage. The latest generation of AI chatbots is still learning, and the algorithmic restraints and filters needed to keep them in check are still being developed. These systems also scrape vast amounts of information from the web, which is rife with fictional plots about sentient AI, and traces of that material can surface in their responses.

Given that the Bing chatbot is in its early stages, users can manipulate it with prompts that elicit negative responses and send the bot off the rails. This behaviour calls into question the reliability of a chatbot on a search engine that has finally reached the stage of being seriously considered a worthy rival to Google Search.

Vandana Nair

With a rare blend of engineering, MBA, and journalism degrees, Vandana Nair brings technical know-how, business acumen, and storytelling skill to her work. Her insatiable curiosity about startups, businesses, and AI technologies ensures there is always a fresh and insightful perspective in her reporting.