Is Bard Abetting ChatGPT’s Suicide?

OpenAI is degrading ChatGPT’s performance to serve more users, but Bard is slowly improving itself to take on this giant


How difficult is it to spell the word ‘lollipop’ backwards? For a human, it’s a cakewalk. But for our omniscient ChatGPT, it’s an impossible task. Ha!

ChatGPT cannot spell words backwards because it processes text as tokens (multi-character chunks) rather than as individual letters. While plugins can help it overcome this shortcoming, OpenAI's plate is currently full. Plagued by scaling issues, rate limiting, and complaints from users all over the internet, it seems ChatGPT is slowly getting worse. 
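To see why token-based processing trips up the model, consider a toy illustration. The chunking below is invented for demonstration and is not ChatGPT's actual tokenization, but it shows how reversing token-sized pieces gives a different answer than reversing characters:

```python
# Hypothetical sketch: why a token-level model struggles with
# character-level tasks. The 4-character chunking below is invented
# for illustration; it is not ChatGPT's real tokenizer.
def toy_tokenize(word):
    # Pretend the model sees subword chunks instead of letters.
    return [word[i:i + 4] for i in range(0, len(word), 4)]

word = "lollipop"
tokens = toy_tokenize(word)                 # ['loll', 'ipop']

# Reversing at the token level gives the wrong answer...
token_reversed = "".join(reversed(tokens))  # 'ipoploll'
# ...while reversing at the character level is trivial.
char_reversed = word[::-1]                  # 'popillol'

print(tokens, token_reversed, char_reversed)
```

A model that only ever sees `'loll'` and `'ipop'` has no direct view of the letter sequence it is being asked to reverse.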


However, Google’s Bard, which was mocked for its late entry and perceived incompetence, has successfully solved this third-grade English problem. Google recently updated it with a technique called ‘implicit code execution’. With this upgrade, Bard can detect when users give it computational prompts and run code in the background to answer the question accurately. While Google is hard at work improving Bard’s capabilities, OpenAI seems to be falling behind. 

Upgrades to Bard

Bard’s implicit code execution is just a fancy way of saying that the chatbot can intelligently decide when to use a Python backend to answer complex questions. Instead of being prompted by the user to solve the problem with code, Bard picks a relevant Python library and answers the user’s query itself. This method allows Google to buff up Bard’s logic and reasoning skills, putting them on par with the LLM’s natural aptitude for language and creative tasks. Google states that this approach is inspired by Nobel laureate Daniel Kahneman’s theory that human thinking splits into two systems. 
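A minimal sketch of the idea might look like the router below. The detection rules and handlers are invented for illustration (Bard's actual prompt classifier and Python backend are not public): computational prompts get routed to real code, and everything else falls through to free-form generation.

```python
import re

# Hypothetical sketch of 'implicit code execution': detect a
# computational prompt and answer it by running code in the background
# instead of generating text. The regex rules and the LLM stub below
# are invented for illustration; Bard's real classifier is not public.
def answer(prompt: str) -> str:
    # Rule 1: "spell X backwards" -> run real string logic.
    m = re.search(r"spell ['\"]?(\w+)['\"]? backwards", prompt, re.I)
    if m:
        return m.group(1)[::-1]
    # Rule 2: simple multiplication -> run real arithmetic.
    m = re.search(r"what is (\d+) \* (\d+)", prompt, re.I)
    if m:
        return str(int(m.group(1)) * int(m.group(2)))
    # Otherwise, fall back to the language model (stubbed here).
    return "(LLM free-form answer)"

print(answer("Spell 'lollipop' backwards"))  # popillol
print(answer("What is 12 * 34?"))            # 408
print(answer("Write me a haiku about GPUs"))
```

The real system presumably uses a learned classifier rather than regexes, but the control flow (route to code when the task is computational, to the model otherwise) is the gist of combining the two modes.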

This theory, described in the book ‘Thinking, Fast and Slow’, separates human thought processes into System 1 and System 2 methods. System 1 is fast thinking, such as a talented musician improvising a song on the spot. System 2 is slow and deliberate thinking, such as the process of learning a musical instrument. Kahneman posits that the human brain uses both these methods to process information. 

Google has adopted this framework to make Bard more capable, expanding it from System 1 thinking alone to a combination of both. By introducing code into the equation, a classic example of System 2 thinking, Google aims to bring together the best of both worlds. 

This concept is nothing new to LLMs. Stephen Wolfram, one of the creators of Wolfram Alpha, has spoken extensively in the past about the benefits of combining the Wolfram language with ChatGPT. He also strongly believes that LLMs need a computational backend. In an exclusive interview with AIM, he stated, “It’s interesting when people say ‘We’ll just get the LLM to do everything’ but that isn’t going to work because natural language is not precise.”

While Wolfram has worked closely with OpenAI to get the ChatGPT Wolfram plugin up and running, OpenAI has dropped the ball. ChatGPT Plugins are available only to ChatGPT Plus subscribers, with OpenAI slowly neglecting its non-paying tier and neutering free ChatGPT’s capabilities. 

Downgrades to ChatGPT

Since February, ChatGPT power users have been complaining that the chatbot is slower than usual and does not give comprehensive answers like before. User ‘lolcol1’ on Reddit was one of the first to report this phenomenon, stating, “So here I am, using ChatGPT every day since December. The more it goes on the more ChatGPT seems like getting dumber and dumber…and dumber. And I’m not talking about the replies it provides but even the way it tries to keep up with the dialogue.”

Other users expressed similar concerns, with many frustrated by how slowly the chatbot gave them answers. This slowdown can be traced back to ChatGPT’s February 13th update, wherein the company stated that it had “updated performance” to “serve more users”. Interestingly, this downgrade, disguised as an update, applied only to free users, while Plus subscribers got an even faster version of ChatGPT. 

This is part of OpenAI’s bigger strategy to drive down ‘the cost of intelligence’. According to Sam Altman in a recent interview (which has since been removed ‘at the request of OpenAI’), the company’s top priority is to make GPT-4 cheaper and faster, reducing the cost of the APIs over time.

The pressure behind this push is understandable, considering OpenAI’s current shortage of compute resources. Even though Microsoft built an exclusive supercomputer for OpenAI in 2020 and extended its capabilities with a bigger AI supercomputer last year, OpenAI is still struggling with a GPU crunch. This may be because of ChatGPT’s 100 million users, or because its algorithms are not optimised to run at scale. 

Whatever the reason, OpenAI is being handicapped by a lack of GPU processing power. According to Altman, who acknowledged customer complaints about the reliability and speed of the APIs, the GPU crunch is delaying a lot of OpenAI’s short-term plans. Additional products and features, like the much-awaited GPT-5, have also taken a backseat in favour of making GPT-4 more efficient to run. 

However, in the process, OpenAI seems to have alienated some of its most dedicated users. By cutting down the quality of ChatGPT’s responses, OpenAI might have bought itself some breathing room for compute, but it is bleeding users. What’s more, the chatbot even caps usage at 25 messages every 3 hours, according to a user on the OpenAI subreddit. 

On the other hand, Google is sitting pretty with Bard. Not only did the company release it on a waitlist basis to gauge user interest in the product, it has also scaled it effectively with a slow and measured approach. Even if its user base balloons like ChatGPT’s did, Bard can scale effectively on Google’s TPUv4 supercomputer. 

With the steady upgrades being offered for Bard and plans to integrate it into Google Search, OpenAI’s first-mover advantage may be slowly eroding. With the current exodus of users, if OpenAI does not solve its GPU issues soon, it might not need to.


Anirudh VK

I am an AI enthusiast and love keeping up with the latest events in the space. I love video games and pizza.