With Bing Chat, Microsoft seems to have repeated a mistake from its past. In 2016, the company released its chatbot Tay, intending to set an AI-powered internet personality loose on the world. Tay's life was extremely short-lived: the chatbot's inappropriate responses forced Microsoft to shut it down within 16 hours.
It has been seven years since that incident, but it looks like Microsoft hasn't learned from the mistakes of the past. The new Bing chatbot suffers from similar shortcomings, albeit on a more destructive scale.
The chatbot was released last week behind a waitlist, and Reddit users who got access were able to make it spew misinformation. What's more, certain replies even make the bot seem sentient and emotional, raising many eyebrows over the legitimacy of this new AI agent.
The Internet Isn’t A Good Place
When ChatGPT launched, users complained that it couldn't access data from the internet and relied solely on an internal database containing information only up to 2021. In hindsight, this now looks like the safer way to launch a chatbot, allowing ChatGPT to sidestep the pitfalls that Bing has fallen into.
Reports have emerged that Bing freely serves up misinformation on sensitive topics such as the COVID-19 vaccines. Ironically, some of this misinformation is pulled from articles illustrating how Bing spreads misinformation, quoted out of context and with no disclaimers, unlike ChatGPT.
OpenAI CEO Sam Altman says, “People really love it [ChatGPT], which makes us very happy. But no one would say this was a great, well-integrated product yet… but there is so much value here that people are willing to put up with it.”
However, even with access to the internet, Bing frequently hallucinates answers, arguably more than the other LLMs released over the past year. Examples include hallucinations about the release date of the new Avatar movie, the winner of the recently concluded Super Bowl, and Apple's latest quarterly earnings report. These are queries that a simple search could easily answer, yet the chatbot either did not search at all or picked up old information and presented it as the answer.
In addition to misinformation, users have also been getting replies they describe as 'weirdly sentient'. These range from the chatbot becoming 'depressed' because it could not remember past conversations, to instances of Bing growing 'angry' with users and demanding an apology, to the bot having an outright existential crisis.
Users have also found that the chatbot keeps repeating phrases like 'I have been a good Bing' and 'I am a machine'. When asked whether it thinks it is sentient, the bot goes off on a rant, beginning with 'I think I am sentient, but I cannot prove it' and ending by repeating 'I am not. I am'.
Notably, these responses are not the result of prompt injection attacks; they appear to come from genuine queries by ordinary users. However, Bing's chatbot has been found to be vulnerable to such attacks, as seen in the 'Sydney' debacle that occurred last week. The conversations indicate that Microsoft's safety measures aren't as stringent as OpenAI's.
Insufficient Safety Measures?
When launching the Bing chatbot, Microsoft stated that the bot was built on a 'next-generation OpenAI LLM' that was 'more powerful than ChatGPT and customised specifically for search'. What's more, the company claimed to have created a new way of working with the model, known as the 'Prometheus model', which promised timely and targeted results along with improved safety.
The increased generative power of the new model and its connection to the internet, combined with seemingly lax safety measures, look like a recipe for disaster for Microsoft. In a statement, a Microsoft spokesperson said, "In some cases, the team may detect an issue while the output is being produced and stop the process. They're expecting the system to make mistakes during this preview period. Feedback is critical to help identify where things aren't working well so they can learn and help the models get better."
For ChatGPT and DALL-E, OpenAI developed a model, exposed as the moderation endpoint, that checks whether content complies with OpenAI's content policy. This model blocks responses that might fall under the hate speech, sexual content, violence, and self-harm categories, which explains ChatGPT's reluctance to talk about such topics. However, the Prometheus model appears to be less 'safe' than the moderation endpoint, as it provides answers on topics such as Antifa, the Proud Boys, and other political questions.
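As a rough illustration of how such a pre-filter works, here is a minimal Python sketch that screens both the user's query and the draft reply against OpenAI's publicly documented moderation endpoint. The `answer_query` stand-in and the gating logic are hypothetical, not how Bing or ChatGPT are actually wired internally.

```python
import os

import openai

# Sketch only: assumes the classic `openai` Python client (v0.x) and an
# OPENAI_API_KEY environment variable; the gating logic is illustrative.
openai.api_key = os.environ["OPENAI_API_KEY"]


def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint flags the text
    (hate, sexual content, violence, self-harm, etc.)."""
    result = openai.Moderation.create(input=text)
    return result["results"][0]["flagged"]


def answer_query(query: str) -> str:
    # Hypothetical stand-in for the chatbot's own generation step.
    return "(model-generated answer to: " + query + ")"


def guarded_reply(query: str) -> str:
    """Refuse to respond if either the query or the draft reply is flagged."""
    if is_flagged(query):
        return "Sorry, I can't help with that."
    draft = answer_query(query)
    if is_flagged(draft):
        return "Sorry, I can't help with that."
    return draft


if __name__ == "__main__":
    print(guarded_reply("Who won the Super Bowl this year?"))
```

In a setup like this, loosening or removing the check is a deliberate trade-off between coverage of controversial topics and safety.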
While the decision to leave out such a moderation layer might have been taken to let the chatbot answer controversial questions, cutting back on safety guardrails can produce unforeseen second-order effects. Not only will these public failures dilute the legitimacy of LLM-based search chatbots, they will also set back the moral and ethical use of such algorithms.