ChatGPT & Bing Chat Are Having a Conversation. Should You Be Worried?

If chatbots can simulate conversations, will they generate languages that humans cannot understand?

The long-standing human urge to have a conversational robot has led us to models like ChatGPT and Bard. These LLM-powered chatbots might be the closest yet to what humans have dreamed of. What’s more, someone recently made ChatGPT and Bing AI have a conversation. Prompt after prompt, the chatbots learnt about each other, and are best pals now! 

A similar incident occurred back in 2017, when two of Facebook’s chatty AI bots started talking to each other in their own language and had to be shut down. The same year, Google claimed that its Translate tool had developed the capability to generate its own language. OpenAI too attests to this, saying that AI can indeed be encouraged to create its own language. 

All of these instances lead us to the question: could chatbots really generate their own language, something humans would be too dumb to understand? 


History of chatbots 

Chatbots have evolved a lot over the years, and it might come as a surprise to many, but they have been around since the 1960s. Developed between 1964 and 1966 by Joseph Weizenbaum at the MIT Computer Science & Artificial Intelligence Laboratory, ELIZA was the first chatbot of its kind. It was designed specifically to highlight how superficial a conversation between a human and a machine can be. 

The chatbot simulated a conversation between a patient and a psychotherapist by identifying keywords and applying “pattern matching”. It was one of the earliest examples of a natural language processing (NLP) computer program. 
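The keyword-and-pattern-matching idea behind ELIZA can be sketched in a few lines. The rules below are illustrative inventions, not Weizenbaum’s original script: each one looks for a keyword pattern in the user’s input and reflects the matched fragment back as a question.

```python
import re

# A few illustrative ELIZA-style rules (not Weizenbaum's actual script):
# each pairs a keyword pattern with a reflection template.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(user_input: str) -> str:
    """Return the first rule-based reflection that matches, else a default."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel anxious about work"))  # Why do you feel anxious about work?
print(respond("Nice weather today"))         # Please go on.
```

The superficiality Weizenbaum wanted to expose is visible here: the program understands nothing, it merely echoes fragments of the input back in a template.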


By the 1990s, with the rise of the internet, chatbots became more sophisticated, incorporating NLP techniques that let them generate more human-like responses to user inputs. For example, the ELIZA-inspired ALICE (Artificial Linguistic Internet Computer Entity), created in 1995 by Richard Wallace, extended keyword matching into a large library of heuristic pattern-matching rules, later codified as AIML.

Then came messaging apps, resulting in the widespread use of chatbots for customer service and support. The most easily identifiable chatbot for our generation would be Apple’s Siri, launched in 2011. Though not a conversational bot per se, it is mostly understood as a chatbot that worked on a rule-based system to reply to users’ inputs. These assistants, like Cortana, Google Assistant, and Alexa, were developed for personal assistance or for controlling connected systems through simple voice commands. Though there is a commonality between these and modern models like ChatGPT, they differ hugely in terms of technologies and algorithms. 

Broadly, there are two main categories of chatbots: syntactic-based and semantic-based. Bots before Siri, which relied only on the structure and grammar of the input sentence to generate a result, were single-turn: they retained no context and had no real conversational ability. These fall under syntactic-based chatbots. Semantic chatbots like Siri and ChatGPT understand context and are multi-turn, giving the impression of human-like interaction. 
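The single-turn vs multi-turn distinction can be sketched as follows. The class names and the toy pronoun trick below are illustrative, not from any real system: the point is only that a multi-turn bot carries a growing history into each reply, while a single-turn bot sees each message in isolation.

```python
class SingleTurnBot:
    """Answers each message in isolation: no memory of prior turns."""
    def reply(self, message: str) -> str:
        return f"Processing: {message}"

class MultiTurnBot:
    """Keeps a conversation history, so later turns can use earlier context."""
    def __init__(self) -> None:
        self.history: list[str] = []

    def reply(self, message: str) -> str:
        # Toy context use: resolve a bare "it" to the previous turn's topic.
        if "it" in message.split() and self.history:
            message = message.replace("it", f"'{self.history[-1]}'")
        self.history.append(message)
        return f"Processing: {message}"

bot = MultiTurnBot()
bot.reply("the weather in Paris")
print(bot.reply("what about it tomorrow"))
# Processing: what about 'the weather in Paris' tomorrow
```

A single-turn bot given “what about it tomorrow” has no way to resolve “it”; the multi-turn bot can, because the earlier turn is still in its history.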

However, the point where Siri and ChatGPT differ is the algorithms. While both use AI/ML and NLP, ChatGPT is built on the generative pre-trained transformer (GPT) architecture, whereas Siri uses recurrent neural networks, specifically long short-term memory (LSTM) networks, whose recurrent layers mimic a form of memory. 

Simply put, assistants like Siri are meant to understand a human’s input, process it, and reply in a conversational manner. This is natural language understanding (NLU), a subset of NLP, combined with machine learning algorithms. ChatGPT, on the other hand, cannot be called an assistant. It is trained on a large volume of text data and mimics human capabilities to generate text. As of now, LLMs like ChatGPT cannot act on the world, but they can be applied to a wide variety of tasks, whereas personal assistants like Siri are limited to a narrow, predefined set. 

The other subset of NLP is natural language generation (NLG), which, as the name suggests, covers models that turn structured or unstructured data into comprehensible text. These are built to be much more flexible and capable of producing a wider range of outputs. 
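In its simplest form, turning structured data into text can be done with templates; modern NLG replaces the templates with learned models. The record fields and function below are made up for illustration:

```python
# A minimal template-based NLG sketch: rendering a structured record
# as a readable sentence. (Modern NLG uses learned models, not templates.)
record = {"city": "Bangalore", "temp_c": 24, "condition": "partly cloudy"}

def describe_weather(r: dict) -> str:
    return f"It is {r['condition']} in {r['city']} at {r['temp_c']}°C."

print(describe_weather(record))  # It is partly cloudy in Bangalore at 24°C.
```

The template approach is rigid: every output shape must be written by hand, which is exactly the limitation that learned generation models remove.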

Now, with ChatGPT and Google Bard, LLMs have made the generation part even more advanced. Their language processing capabilities allow them to understand a much wider range of inputs and generate far more diverse and sophisticated outputs; these models are considered the epitome of AI right now. Recent developments, most importantly transformer-based architectures, have made it possible for machines not just to interpret language, but also to generate human-like text. 

Cut to present 

In 2023, we have conversational models trying to be smarter than ever. Trained on huge datasets, these LLMs may be far more knowledgeable than any single human being. For now, the only things they lack are common sense and rationality. 

OpenAI’s ChatGPT began a revolution, which Google then joined. And now Amazon has released an AI model that reportedly outperforms GPT-3.5 on human-like reasoning tasks. Maybe LLMs will lead us to that long-standing AGI dream. After all, if chatbots can talk to each other and become friends, it’s only a matter of time before they start their own “language generation”, something that has already happened before. 

But even then, LLM chatbots reaching human-level AI remains a far-fetched idea. Yann LeCun, chief AI scientist at Meta, suggests that auto-regressive LLMs, which merely predict the next token reactively, are an off-ramp on the road to intelligence. He proposes self-supervised learning as a solution instead, within an AI system that can reason, plan, and learn models of the underlying reality. 


Mohit Pandey
Mohit dives deep into the AI world to bring out information in simple, explainable, and sometimes funny words. He also holds a keen interest in photography, filmmaking, and the gaming industry.


