
GPT-3 Is Quietly Damaging Google Search

While Google lets users find answers for themselves, GPT-3 draws information from multiple sources to answer questions in natural language

Machine learning systems now excel at the tasks they are trained for, using a combination of large datasets and high-capacity models. They are capable of performing a variety of functions, from completing code to generating recipes. Perhaps the most popular is the generation of novel text – a looming content apocalypse – that reads no differently from what a human would write.

In 2018, the BERT model (Bidirectional Encoder Representations from Transformers) sparked discussion around how ML models were learning to read and speak. Today, large language models (LLMs) are developing rapidly and mastering a wide range of applications.

In an era of text-to-anything generation with impressive AI-ML models, it is important to remember that these systems do not so much understand language as they are fine-tuned to make it appear like they do. Within the language domain, the correlation between the number of parameters and sophistication has held up remarkably well.

The race for parameters

Parameters are crucial to machine learning algorithms – the part of the model that is learned from historical training data. OpenAI’s GPT-3 (Generative Pre-trained Transformer 3), trained with 175 billion parameters, is an autoregressive language model that uses deep learning to produce human-like text. According to OpenAI, the model can be applied “to any language task, including semantic search, summarization, sentiment analysis, content generation, translation, with only a few examples or by specifying your task in English.”
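To make “specifying your task in English” concrete, here is a minimal sketch of a few-shot sentiment-classification call, assuming the early openai Python client (pre-1.0); the engine name, prompt and key handling are illustrative rather than prescriptive:

```python
# A minimal few-shot prompt against GPT-3, assuming the pre-1.0 openai client.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# The task is described in plain English, followed by two worked examples.
prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: Positive

Review: It broke within a week and support never replied.
Sentiment: Negative

Review: Setup was painless and it just works.
Sentiment:"""

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base engine at the time
    prompt=prompt,
    max_tokens=1,       # a single label token is enough here
    temperature=0,      # deterministic output suits classification
)
print(response.choices[0].text.strip())  # expected: "Positive"
```

No fine-tuning is involved: the two examples in the prompt are all the “training” the model sees, which is what OpenAI means by solving a task “with only a few examples”.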

To counter this, a trio of researchers from Google Brain unveiled the next big thing in AI language models – a massive one-trillion-parameter system built on a “sparsely activated” architecture called the Switch Transformer. Google says, “Switch Transformers are scalable and effective natural language learners. We find that these models excel across a diverse set of natural language tasks and in different training regimes, including pre-training, fine-tuning and multi-task training.”
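To see what “sparsely activated” means in practice, here is a toy PyTorch sketch of the top-1 routing idea behind the Switch Transformer; it illustrates the principle only, not Google’s implementation, which adds expert capacity limits and a load-balancing loss:

```python
# Toy switch routing: a router sends each token to exactly one expert,
# so only a small fraction of the total parameters runs per input.
import torch
import torch.nn as nn

class SwitchFFN(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # per-token gate
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                      # x: (tokens, d_model)
        gates = torch.softmax(self.router(x), dim=-1)
        chosen = gates.argmax(dim=-1)          # top-1 routing: one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = chosen == i
            if mask.any():                     # only the selected expert computes
                out[mask] = gates[mask, i:i+1] * expert(x[mask])
        return out

layer = SwitchFFN(d_model=64, d_ff=256, num_experts=8)
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Because each token activates a single expert, adding experts grows the parameter count without growing the compute spent per token – which is how a trillion parameters stays trainable.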

At this point, it’s unclear what the company intends to do with the techniques described in the paper. There’s indeed more to this than just one-upping OpenAI, but the exact use of the new system is a bit muddy.

Big leap in AI

The way we search online hasn’t changed in decades, but researchers now want to make the experience closer to a conversation with a human expert. In 2021, Google researchers published a proposal for a radical redesign that throws out the ranking approach and replaces it with a single large AI language model – a future version of BERT or GPT-3.

The idea was that instead of searching for information from a variety of web pages, users would ask questions and have a language model trained on those pages answer them directly. This approach not only changes how search engines work, but also how we interact with them. 

While Google lets users find answers for themselves, GPT-3 draws information from multiple sources to answer questions in natural language. The issue is that it keeps no track of those sources and provides no evidence for its answers.

One of the researchers, Donald Metzler, says that even the best search engines today still respond with documents that include the information asked for, not with the information itself. “It’s as if you asked your doctor for advice and received a list of articles to read instead of a straight answer.” The proposed system would instead respond to queries with answers synthesised from multiple sources at once.

The researchers argue that the solution is to build and train future GPT-3s and BERTs to retain records of where their words come from. No model can do this yet, but it is possible in principle, and there is early work in that direction. “If it works, it would transform our search experience,” the researchers said.
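As a rough illustration of what retaining such records could mean, here is a hypothetical retrieve-then-answer sketch that keeps a source URL attached to every piece of evidence it uses; the corpus, the keyword retrieval and the answer assembly are all simplistic stand-ins for learned components:

```python
# Toy provenance tracking: answer from multiple sources while recording
# exactly which sources contributed to the answer.
CORPUS = {
    "sleepfoundation.org": "Sleep allows the brain to consolidate memories.",
    "nih.gov": "Dreams occur mostly during REM sleep.",
}

def answer_with_sources(question: str) -> dict:
    # Naive keyword overlap stands in for a learned retriever.
    keywords = set(question.lower().replace("?", "").split())
    supporting = {
        url: text for url, text in CORPUS.items()
        if keywords & set(text.lower().replace(".", "").split())
    }
    # A real system would generate fluent text here; we simply join the
    # evidence, keeping the URL of every sentence that was used.
    return {
        "answer": " ".join(supporting.values()),
        "sources": sorted(supporting),
    }

print(answer_with_sources("Why do we sleep and dream?"))
# {'answer': '...', 'sources': ['nih.gov', 'sleepfoundation.org']}
```

The point is the shape of the output: an answer plus an evidence trail, which is exactly what a GPT-3-style generator discards today.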

Who can answer better?

It is evident that OpenAI’s GPT-3 challenges Google’s strength in natural language processing (NLP) and the massive computing power behind ML. Much as cloud services like AWS and Microsoft Azure give us computing power on demand, Google’s BERT too is available for anyone to build on. While the two NLP models share a similar Transformer architecture, GPT-3’s far larger parameter count makes it roughly 470 times the size of BERT.

Experts like David Weekly have remarked on the fluency with which GPT-3 generates answers, which is noticeably better than Google’s results.


Weekly said that he is fond of GPT-3’s ability to tackle multiple questions at once in ways that search engines don’t yet handle, pointing to the query ‘Why do we sleep? Why do we dream?’ on Google.


Put to GPT-3, the same questions produced answers that were simpler and clearer to understand in layman’s terms.

Indeed, GPT-3 gives direct answers, whereas Google surfaces an inline answer to the question. The answers GPT-3 produces are well nuanced, adding a philosophical touch where required. However, Weekly notes that GPT-3’s overconfidence can also be problematic.


Other users, however, found no real difference from the answers a Google search generated.


The thread sparked more discussions:


“Google has been SEO-bombed by many low-quality sources that optimize for pageviews. Maybe we’ll see LLMO: Large Language Model Optimization in the form of dataset poisoning to boost certain LLM responses,” a user tweeted.


GPT-3 is arguably the most sophisticated NLP and natural language generation (NLG) model trained on internet data, able to produce high-quality text that reads as well as human writing. The argument, then, is that GPT-3 can provide a richer and more useful search experience, and thus challenge the existing search engines.

The key difference is that GPT-3 can respond to a query with a summarised answer, almost as a human would, whereas a search engine simply returns a list of the most relevant links where the user can go and find the required information.

While GPT-3 is extremely large and powerful and could support many interesting use cases in NLG, it still has limitations and risks as a reliable search tool. It may well have merit for factual, unambiguous search needs. But since GPT-3 potentially suffers from algorithmic bias, cannot distinguish fact from fiction, and shares the unexplainable nature of deep learning systems, it will face its own set of challenges in evolving into a full-scale search engine in its current avatar.



Bhuvana Kamath
I am fascinated by technology and AI’s implementation in today’s dynamic world. Being a technophile, I am keen on exploring the ever-evolving trends around applied science and innovation.
