Google’s PaLM is Ready for the GPT Challenge 

Despite the murmurs, Google is still leading the AI race with its Pathways Language Model (PaLM), released earlier this year.

You have likely heard that Google recently declared a ‘code red’ over fears that the rising popularity of ChatGPT, which runs on the GPT-3.5 architecture, could threaten the advertising business behind Google Search. Meanwhile, the growing noise that GPT-4 is just around the corner has people buzzing over what is in store for the coming year. Yet, despite the murmurs, Google is still leading the AI race with its Pathways Language Model (PaLM), released earlier this year.

PaLM scales up to 540 billion parameters, and its performance across tasks keeps improving as the model grows, unlocking new capabilities along the way. In comparison, GPT-3 has about 175 billion parameters.

Google’s language model is trained with the Pathways system, which allows it to generalise across a variety of domains and tasks while remaining highly efficient. Pathways is an AI architecture designed to produce general-purpose intelligent systems and to build models that are “sparsely activated”: rather than firing the whole neural network for simple and complicated tasks alike, only the parts relevant to the task at hand are activated. The system is also designed to process multiple modalities of information, such as text, images, and speech, at once.
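
Pathways itself is proprietary, but the “sparsely activated” idea can be illustrated with a toy mixture-of-experts layer: a learned router sends each token to only a couple of expert sub-networks, so most of the model stays idle for any given input. The PyTorch sketch below uses invented sizes and is purely illustrative, not Google’s implementation:

```python
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    """Toy mixture-of-experts layer: a learned router picks the top-k experts
    per token, so only a fraction of the network runs for each input."""
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):                        # x: (tokens, dim)
        scores = self.router(x)                  # (tokens, num_experts)
        weights, picked = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)        # mixing weights for the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = picked[:, slot] == e      # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = SparseMoE()
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token
```

Real sparsely activated systems (Switch Transformer-style models, for instance) use far more involved routing, but the principle is the same: capacity can grow without every parameter being exercised on every example.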

Pathways allows a model to be scaled across tens of thousands of Google’s own TPU (Tensor Processing Unit) chips. Furthermore, PaLM surpassed the few-shot performance of prior large models, such as GPT-3 and Chinchilla, on 28 of 29 NLP tasks, beating the state of the art, and average human performance, on most of them.

Like BERT, like PaLM

Sterling Crispin, an XR software engineer and product designer, argues that while there is plenty of chatter that there is no moat in AI and that it flattens everything, Google Assistant is in the pockets of three billion people. That makes Google’s TPU data centres the moat: with the tonnes of data at its disposal, Google will be able to produce far better results.

Crispin adds that while flashy tech demos like GPT-3 and ChatGPT will keep coming, a lot is happening behind the scenes at companies like Google, Meta, and Apple that people don’t necessarily hear about.

Take, for instance, BERT, the transformer model developed by Google that currently powers its search. It was an important addition, since it allowed Google to move from keyword-based to context-based results. Previously, a query like “strategies to study well” produced results matched on keywords such as ‘strategies’ and ‘study’; with BERT, search also understands connecting words like ‘to’ when producing results. Google has been running BERT billions of times a day for years.
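
Google’s production pipeline is not public, but the property this relies on, that BERT represents the same word differently in different contexts, is easy to demonstrate with the open-source bert-base-uncased checkpoint. The sketch below uses the Hugging Face transformers library and invented sentences; it illustrates contextual embeddings, not Google Search itself:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    """Mean of BERT's hidden states over the sub-tokens of `word` in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]   # (seq_len, 768)
    word_ids = set(tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word)))
    positions = [i for i, t in enumerate(inputs["input_ids"][0]) if t.item() in word_ids]
    return hidden[positions].mean(dim=0)

a = word_vector("I deposited cash at the bank.", "bank")
b = word_vector("We picnicked on the river bank.", "bank")
print(torch.cosine_similarity(a, b, dim=0))  # noticeably below 1.0: context shifts the vector
```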

Google’s BERT also helps group related news articles together in carousels so that users can find the best articles on a particular story. The model is likewise used to better detect when a searcher is looking for explicit content, which, according to Google, reduced “unexpected shocking results” for searchers by 30% in the past year.

Thus, although invisible to users, BERT is everywhere. But it is not alone. Google is also using its Language Model for Dialog Applications (LaMDA), a conversational model, most recently in Google Chat, where it summarises an ongoing conversation in the chat window.

Google open-sourced BERT in 2018, which has since proved game-changing. The model can extract information from large amounts of unstructured data and can be applied to create search interfaces for any library of documents. Google has taken a similar tack with PaLM: while the model weights themselves are not public, the architecture is documented in detail, which has already enabled open-source implementations.
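
As a rough sketch of such a search interface, the snippet below ranks a tiny, invented document collection against a query using mean-pooled BERT embeddings and cosine similarity. Real systems use far more sophisticated retrieval; this is a minimal, assumption-laden example:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    """Crude sentence vector: mean of BERT's token embeddings."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    return hidden.mean(dim=0)

corpus = [  # invented documents standing in for "any library"
    "How to renew a library card",
    "Opening hours and holiday closures",
    "Requesting inter-library loans",
]
index = torch.stack([embed(doc) for doc in corpus])

query = embed("when is the library open")
scores = torch.cosine_similarity(index, query.unsqueeze(0), dim=1)
print(corpus[scores.argmax()])  # expected: the opening-hours document
```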

Recently, Phil Wang (lucidrains) published a framework on GitHub that lets anyone train the PaLM architecture with the same reinforcement learning from human feedback (RLHF) strategy behind ChatGPT. In other words, with the architecture out in the open, some have already set about turning PaLM into a more capable, ChatGPT-style system.
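
For the curious, here is roughly what the first (pretraining) step looks like with that repo, adapted from its README; the hyperparameters below are toy stand-ins, and the API may have evolved since, so treat this as a sketch rather than gospel:

```python
# pip install palm-rlhf-pytorch  (lucidrains' open-source PaLM + RLHF repo)
import torch
from palm_rlhf_pytorch import PaLM

palm = PaLM(
    num_tokens=20000,  # vocabulary size (toy value)
    dim=512,           # model width (toy value; the real PaLM is vastly larger)
    depth=12,          # number of transformer blocks
)

seq = torch.randint(0, 20000, (1, 2048))  # a batch of random token ids

loss = palm(seq, return_loss=True)  # next-token cross-entropy loss
loss.backward()
# The repo then layers a reward model and an RLHF (PPO) training loop on top,
# mirroring the recipe behind ChatGPT.
```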

The capabilities of PaLM can already be seen in Google’s new Med-PaLM, built on PaLM and its instruction-tuned variant Flan-PaLM, and evaluated with MultiMedQA, an open benchmark that combines datasets of multiple-choice questions and of long-form questions posed by medical professionals and non-professionals. A panel of doctors judged 92.6% of Med-PaLM’s long-form answers to be in line with scientific consensus, on par with clinician-generated answers (92.9%). That is a remarkable improvement over Flan-PaLM, whose answers were rated only 61.9% in line with scientific consensus.

As it did with BERT, we can expect Google to flip the switch on a system like PaLM and put it behind a variety of its products.

Google leading the AI game

Google Search accounts for roughly 57% of Google’s business. So, whether one believes GPT will disrupt search or that the risk is overblown, it is worth remembering that Google currently has a bigger and better arsenal with which to meet the challenge. One can argue that GPT-4 is coming, but there is no official information about what it will entail. There has, of course, been wild speculation online, with users guessing at anywhere between 1 trillion and 100 trillion parameters, but parameter counts alone say nothing about the performance of the model itself.

Moreover, an important challenge that OpenAI will have to address with LLMs is how to retrain models so their content stays fresh. ChatGPT, for example, is pre-trained on data that only goes up to 2021, and constantly updating the model with fresh content would prove very expensive. Here, Google fares better than OpenAI, since it continually refreshes its web corpus, scraping and re-evaluating pages and detecting queries that call for fresh results.

Ayush Jain
Ayush is interested in knowing how technology shapes and defines our culture, and our understanding of the world. He believes in exploring reality at the intersections of technology and art, science, and politics.
