Google’s PaLM is Ready for the GPT Challenge 

Despite the murmurs, Google is still leading the AI race with its Pathways Language Model (PaLM), released earlier this year.

You have likely heard that Google recently issued a 'Code Red' over fears that the rising popularity of ChatGPT, which runs on the GPT-3.5 architecture, could threaten the advertising business behind Google Search. Add to that the buzz that GPT-4 is just around the corner, and people are wondering what the coming year has in store. Yet, despite the murmurs, Google is still leading the AI race with its Pathways Language Model (PaLM), released earlier this year. 

PaLM can be scaled up to 540 billion parameters, and its performance across tasks keeps improving as the model scales, unlocking new capabilities along the way. In comparison, GPT-3 has only about 175 billion parameters. 

Google's language model is trained with the Pathways system, which allows it to generalise across a variety of domains and tasks while remaining highly efficient. Pathways is an AI architecture designed to produce general-purpose intelligent systems whose models are "sparsely activated": only the parts of the network relevant to a given input are engaged, instead of activating the whole neural network for simple and complicated tasks alike. In addition, the system is trained to process multiple modalities of information, such as text, images, or speech, all at once. 
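
Pathways' internals are not public, so as a rough intuition for what "sparsely activated" means, here is a toy mixture-of-experts-style layer in PyTorch. Everything in it (the class name, the top-1 routing, the expert count) is an illustrative assumption, not Google's actual design:

```python
import torch
import torch.nn as nn

class SparselyActivatedLayer(nn.Module):
    """Toy sketch: a router picks one small 'expert' sub-network per
    token, so only a fraction of the layer's parameters run for any
    given input. This is a generic illustration, not Pathways itself."""

    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        choice = self.router(x).argmax(dim=-1)  # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = choice == i
            if mask.any():  # only the chosen experts actually compute
                out[mask] = expert(x[mask])
        return out

layer = SparselyActivatedLayer(dim=64)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

The point of the sketch is the compute saving: each token touches one expert's weights rather than all of them, which is how a model can grow very large without every input paying the full cost.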

Pathways allows a model to be scaled across tens of thousands of Google's own TPU (Tensor Processing Unit) chips. The payoff: PaLM surpassed the few-shot performance of prior large models, such as GPT-3 and Chinchilla, on 28 out of 29 NLP tasks, beating the state of the art and, on many of them, the average human. 

Like BERT, like PaLM

Sterling Crispin, XR software engineer and product designer, argues that while there is a lot of chatter that there is no moat in AI and that it flattens everything, it is worth remembering that Google Assistant sits in the pockets of three billion people. That makes Google's TPU data centres the moat: with the tonnes of data at its disposal, Google will be able to produce far better results. 

Crispin adds that while flashy tech demos like GPT-3 and ChatGPT will keep coming, a lot is happening behind the scenes at companies like Google, Meta, and Apple that people don't necessarily hear about. 

Take, for instance, BERT, the transformer model developed by Google that currently powers its search. It was an important addition, since it let Google move from keyword-based to context-based results. Previously, a query like "strategies to study well" produced results matching keywords such as 'strategies' and 'study'; with BERT, the search engine also understands connective words like 'to' and the context they carry. Google has been running BERT billions of times a day for years this way. 

Google's BERT also helps group related news articles together in carousels so users can find the best coverage of a particular story. The model is further used to better understand when a searcher is looking for explicit content, which, according to Google, reduced "unexpected shocking results" by 30% in the past year.

Thus, although invisible to users, BERT is everywhere. But it is not alone. Google also uses its Language Model for Dialogue Applications (LaMDA), a conversational model, most recently in Google Chat, where it summarises an ongoing conversation in the chat window. 

Google open-sourced BERT in 2018, which has since proved game-changing. The model can extract information from large amounts of unstructured data and can be applied to build search interfaces over any library of documents. Google has applied a similar strategy with PaLM, which is also open-source and publicly available. 
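
To make the "search interface for any library" idea concrete, here is a minimal sketch of semantic search over a tiny document set using the open-sourced BERT checkpoint. The article does not name a toolkit, so the use of Hugging Face's transformers and mean-pooled embeddings is our assumption for illustration:

```python
# Minimal sketch of BERT-backed semantic search over a small "library".
# Assumes the Hugging Face `transformers` toolkit (not named in the
# article) and the bert-base-uncased checkpoint Google open-sourced.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    # Mean-pool the final hidden states into one vector per text.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (batch, seq, 768)
    mask = batch["attention_mask"].unsqueeze(-1)   # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)

docs = [
    "Effective study strategies for exams",
    "Best strategies for marketing a startup",
]
query_vec = embed(["strategies to study well"])
doc_vecs = embed(docs)

scores = torch.nn.functional.cosine_similarity(query_vec, doc_vecs)
print(docs[scores.argmax()])  # the study-related document ranks first
```

Because the embeddings encode context rather than bare keywords, the study-related document wins even though both candidates contain the word "strategies", which is the same shift the article describes in Google Search.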

Recently, Phil Wang (lucidrains) on GitHub implemented a framework that lets anyone train PaLM with the same reinforcement-learning-from-human-feedback (RLHF) strategy used for ChatGPT. In other words, because the architecture is open, some have already begun turning PaLM into a more capable ChatGPT. 
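
The broad ChatGPT-style recipe has three stages: pretrain a language model, train a reward model on human preferences, then fine-tune the language model against that reward with RL. A sketch of how that looks with the lucidrains/PaLM-rlhf-pytorch repo follows; class names and arguments are taken from its README at the time of writing and may have changed since:

```python
# Sketch of the ChatGPT-style RLHF recipe on the open PaLM implementation,
# following usage documented in lucidrains/PaLM-rlhf-pytorch (the API may
# have changed since this was written). Dummy tensors stand in for data.
import torch
from palm_rlhf_pytorch import PaLM, RewardModel, RLHFTrainer

# 1. Pretrain (or load) a PaLM-architecture language model.
palm = PaLM(num_tokens=20000, dim=512, depth=12)

# 2. Train a reward model on human preference labels.
reward_model = RewardModel(palm, num_binned_output=5)

seq = torch.randint(0, 20000, (1, 1024))
prompt_mask = torch.zeros(1, 1024).bool()  # marks which tokens are prompt
labels = torch.randint(0, 5, (1,))         # binned human preference ratings
loss = reward_model(seq, prompt_mask=prompt_mask, labels=labels)
loss.backward()

# 3. Fine-tune the language model against the learned reward with RL,
#    the same broad strategy OpenAI used for ChatGPT.
prompts = torch.randint(0, 20000, (256, 512))
trainer = RLHFTrainer(palm=palm, reward_model=reward_model,
                      prompt_token_ids=prompts)
trainer.train(num_episodes=50000)
```

Real training would, of course, substitute genuine text corpora and human-labelled comparisons for the random tensors above; the sketch only shows how the three stages fit together.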

PaLM's capabilities can already be seen in Google's new Med-PaLM, built on PaLM and its instruction-tuned variant Flan-PaLM, and evaluated with MultiMedQA, an open-source benchmark that combines datasets of multiple-choice medical questions and of longer-form questions posed by medical professionals and non-professionals. A panel of doctors judged 92.6% of Med-PaLM's answers to be in line with scientific consensus, on par with clinician-generated answers (92.9%). That is a remarkable improvement over Flan-PaLM, whose answers scored only 61.9% on the same measure. 

As with BERT, we can expect Google to flip the switch and have a system like PaLM power a variety of its products. 

Google leading the AI game

Google Search amounts to 57% of Google's business. So, whether one believes GPT will disrupt search or that the risk is overblown, it is worth noting that Google currently has a bigger and better arsenal to meet the challenge. One can argue that GPT-4 is coming, but there is no official information about what it will entail. There has certainly been wild speculation on the internet, with users guessing at anywhere between 1 trillion and 100 trillion parameters, but parameter counts alone say nothing about how the model will actually perform. 

Moreover, an important challenge that OpenAI will have to address with LLMs is how to retrain models to keep their content fresh. ChatGPT, for example, is pre-trained on data only up to 2021, and constantly updating the model on fresh content would prove very expensive. Here, Google fares better, since it continually refreshes its web corpus, crawling and evaluating pages and detecting queries that seek fresh information. 

PS: The story was written using a keyboard.

Ayush Jain

Ayush is interested in knowing how technology shapes and defines our culture, and our understanding of the world. He believes in exploring reality at the intersections of technology and art, science, and politics.