
GPT-3 Is Quietly Damaging Google Search

While Google lets users make their own choice in finding answers, GPT-3 draws information from multiple sources to answer questions in natural language

Machine learning systems now excel at the tasks they are trained for by combining large datasets with high-capacity models. They can perform a wide variety of functions, from completing code to generating recipes. Perhaps the most popular is the generation of novel text, a looming "content apocalypse", that reads no differently from human writing.

In 2018, the BERT (Bidirectional Encoder Representations from Transformers) model sparked discussion about how ML models were learning to read and speak. Today, large language models (LLMs) are developing rapidly and mastering a wide range of applications.

In an era of text-to-anything AI-ML models, it is important to remember that these systems do not so much understand language as they are fine-tuned to make it appear that they do. Within the language domain, the correlation between parameter count and sophistication has held up remarkably well.

Race of parameters 

Parameters are crucial to machine learning algorithms: they are the parts of a model learned from historical data. OpenAI’s GPT-3 (Generative Pre-trained Transformer 3), trained with 175 billion parameters, is an autoregressive language model that uses deep learning to produce human-like text. According to OpenAI, the model can be applied “to any language task, including semantic search, summarization, sentiment analysis, content generation, translation, with only a few examples or by specifying your task in English.”
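To make that concrete, here is a minimal sketch of specifying a task in plain English through OpenAI’s original Python SDK (the engine name, prompt and key are illustrative, and newer SDK versions expose a different interface):

```python
import openai  # the original (pre-1.0) OpenAI Python SDK

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# No task-specific fine-tuning: the task is described in plain English
# inside the prompt itself, GPT-3 style.
response = openai.Completion.create(
    engine="davinci",  # GPT-3 base engine at launch
    prompt=(
        "Summarise the following sentence in five words or fewer.\n\n"
        "Sentence: Machine learning systems excel at the tasks they are "
        "trained for by combining large datasets with high-capacity models.\n"
        "Summary:"
    ),
    max_tokens=20,
    temperature=0.3,
)

print(response["choices"][0]["text"].strip())
```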

To counter this, a trio of researchers from Google Brain unveiled the next big thing in AI language models: a massive one-trillion-parameter transformer system built around a “sparsely activated” Switch Transformer. Google says, “Switch Transformers are scalable and effective natural language learners. We find that these models excel across a diverse set of natural language tasks and in different training regimes, including pre-training, fine-tuning and multi-task training.”
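The “sparsely activated” idea is easier to see in code. Below is a toy, illustrative sketch of Switch-style top-1 routing in PyTorch (not Google’s implementation; all sizes are made up): a learned router sends each token to exactly one expert, so only a small fraction of the total parameters is active for any given input.

```python
import torch
import torch.nn as nn

class SwitchLayer(nn.Module):
    """Illustrative top-1 mixture-of-experts layer (Switch-style routing)."""

    def __init__(self, d_model: int, n_experts: int, d_ff: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # learned gate
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Route each token to its single best expert.
        gates = torch.softmax(self.router(x), dim=-1)   # (tokens, n_experts)
        weight, expert_idx = gates.max(dim=-1)          # top-1 choice per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                # Only tokens routed here touch this expert's parameters.
                out[mask] = weight[mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = SwitchLayer(d_model=64, n_experts=8, d_ff=256)
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Adding experts multiplies the total parameter count while keeping the per-token compute roughly constant, which is how a trillion-parameter model remains trainable.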

At this point, it’s unclear what the company intends to do with the techniques described in the paper. There’s indeed more to this than just one-upping OpenAI, but the exact use of the new system is a bit muddy.

Big leap in AI

The way we search online hasn’t changed in decades, but researchers now want to make the experience closer to an interaction with a human expert. In 2021, Google researchers published a proposal for a radical redesign that throws out the ranking approach and replaces it with a single large AI language model, a future version of BERT or GPT-3.

The idea was that instead of searching for information from a variety of web pages, users would ask questions and have a language model trained on those pages answer them directly. This approach not only changes how search engines work, but also how we interact with them. 

While Google lets users make their own choice in finding answers, GPT-3 draws information from multiple sources to answer questions in natural language. The trouble is that it keeps no track of those sources and offers no evidence for its answers.

One of the researchers, Donald Metzler, says that even the best search engines today still respond with documents that contain the information asked for, not with the information itself. “It’s as if you asked your doctor for advice and received a list of articles to read instead of a straight answer.” A model-based engine, by contrast, would respond to queries with answers drawn from multiple sources at once.

The researchers argue that the solution is to build and train future GPT-3s and BERTs to retain records of where their words come from. No such model can do this yet, but it is possible in principle, and there is early work in that direction. “If it works, it would transform our search experience,” the researchers said.
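No model retained such records at the time, but the shape of the idea, an answer that carries its evidence along, can be sketched. The toy snippet below is purely illustrative (the corpus, retriever and answer-stitching are all hypothetical stand-ins): retrieve candidate passages, compose an answer from them, and return the source URLs as citations.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    url: str
    text: str

# Toy corpus standing in for the indexed web (illustrative only).
CORPUS = [
    Passage("https://example.org/sleep", "Sleep supports memory consolidation."),
    Passage("https://example.org/dreams", "Dreams occur mostly during REM sleep."),
]

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    # Stand-in for a real retriever: rank passages by crude word overlap.
    words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda p: len(words & set(p.text.lower().split())),
        reverse=True,
    )[:k]

def answer_with_sources(query: str) -> dict:
    evidence = retrieve(query, CORPUS)
    # A real system would condition a language model on the evidence here;
    # stitching the passages together keeps the sketch runnable.
    answer = " ".join(p.text for p in evidence)
    return {"answer": answer, "sources": [p.url for p in evidence]}

print(answer_with_sources("Why do we sleep and dream?"))
```

The point is the return shape: the answer never leaves its evidence behind, which is exactly what today’s language models fail to do.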

Who can answer better?

It is evident that OpenAI’s GPT-3 challenges Google’s natural language processing (NLP) work and its massive ML computing power. Much like cloud services such as AWS and Microsoft Azure, Google’s BERT gives us access to language capabilities on demand. While the two models share a similar Transformer architecture, GPT-3’s far larger parameter count makes it roughly 470 times the size of BERT.

Experts such as David Weekly have reported on the fluency with which GPT-3 generates answers, finding it much better than Google.

Weekly said he is fond of GPT-3’s ability to tackle compound questions in ways that search engines do not yet handle. As an example, he searched Google for ‘Why do we sleep? Why do we dream?’

Put to GPT-3, the same questions yielded results that were much simpler and clearer, in layman’s terms.

Indeed, GPT-3 gives direct answers, whereas Google points to an inline answer drawn from a web page. The answers GPT-3 produces are nuanced, even adding a philosophical touch where appropriate. However, Weekly notes that GPT-3’s overconfidence can also be problematic.

Other users, however, found no real difference from the answers generated by a Google search.

The thread sparked more discussions:

“Google has been SEO-bombed by many low-quality sources that optimize for pageviews. Maybe we’ll see LLMO: Large Language Model Optimization in the form of dataset poisoning to boost certain LLM responses,” a user tweeted.


GPT-3 is arguably the most sophisticated NLP and NLG model trained on internet data, capable of producing text as good as human writing. The argument, then, is that GPT-3 can provide a richer and more useful search experience, and thus challenge existing search engines.

The key difference is that GPT-3 responds to a query with a summarised answer, almost as a human would, whereas a search engine simply returns a list of the most relevant links where the user can go and find the required information.

While GPT-3 is extremely large and powerful, with many potentially interesting use cases in NLG, it still has limitations and risks as a reliable search tool. It may have some merit for factual, unambiguous search requirements. But since GPT-3 potentially suffers from algorithmic bias, cannot distinguish fact from fiction, and rests on unexplainable AI algorithms, it will face its own set of challenges in evolving into a full-scale search engine in its current avatar.


Bhuvana Kamath

I am fascinated by technology and AI’s implementation in today’s dynamic world. Being a technophile, I am keen on exploring the ever-evolving trends around applied science and innovation.