At its annual Google I/O event, the company unveiled PaLM 2, its next-generation large language model. PaLM 2 is a significant improvement over its predecessor, PaLM, and introduces several capabilities that set it apart from OpenAI’s GPT-4.
One of the key advantages of PaLM 2 is its availability in smaller sizes, such as Gecko, Otter, Bison, and Unicorn, which are specifically optimised for applications with limited processing power. These smaller models enable PaLM 2 to cater to a wider range of devices and products, including mobile devices that can run the lightweight Gecko model even offline. This flexibility in model sizes gives PaLM 2 an edge in terms of accessibility and deployment.
Google claims that PaLM 2 demonstrates stronger reasoning capabilities than GPT-4, particularly on benchmarks like WinoGrande and DROP, with a slight edge on ARC-C as well. However, direct comparisons between the two models are difficult because the companies present their test results differently. Moreover, Google has omitted some comparisons where PaLM 2 performed less favourably, raising questions about how complete the assessment really is.
In terms of mathematical abilities, PaLM 2 shows improvements, according to Google’s research paper. While the exact size of PaLM 2’s largest model, PaLM 2-L, remains undisclosed, Google has stated that it is significantly smaller than PaLM’s 540 billion parameters. It may even be smaller than GPT-3.5, yet it still competes well with GPT-4, delivering impressive performance across a range of tasks.
Bard’s new features also make it the better choice for research, offering more concise summaries and improved sourcing. Users can now quickly access the core information on a topic and identify which parts of a response match specific sources by clicking on number tags that link to the corresponding sections in the cited sources. This helps when conducting research or writing essays that require specific knowledge and detailed citations. These updates address the limitations of AI tools in verifying real-world information and strengthen Bard’s research capabilities.
While Google doesn’t disclose the exact size of PaLM 2’s training dataset, the company emphasises a focus on mathematics, logic, reasoning, and science. PaLM 2’s pre-training corpus consists of a diverse range of sources, including web documents, books, code, mathematics, and conversational data. Moreover, PaLM 2 has been trained in over 100 languages, enhancing its contextual understanding and translation capabilities.
In contrast, OpenAI has trained GPT-4 using publicly available data and licensed data. GPT-4 aims to generate a wide range of responses and has been fine-tuned using reinforcement learning with human feedback, aligning its behaviour with user intent.
Both PaLM 2 and GPT-4 can be accessed through their respective chatbots, Bard and ChatGPT. Bard is freely available worldwide, while ChatGPT Plus, which features GPT-4, sits behind a paywall. However, GPT-4 can also be accessed for free through Microsoft’s Bing AI Chat, which utilises the model. Accessibility plays a role in PaLM 2’s potential adoption: although the model itself is not open source, Google makes it available to developers through its PaLM API.
Google has integrated PaLM 2 into more than 25 of its products, including Android and YouTube, while Microsoft has incorporated AI features into its Office suite and various services. Although GPT-4 has gained traction among developers and startups due to its earlier release and refinement, the broad availability of PaLM 2 across Google’s products and API may attract a wider range of users.
As PaLM 2 is a relatively new model, its ability to compete with GPT-4 is still being assessed. Google’s ambitious plans and PaLM 2’s distinctive capabilities suggest it could present a formidable challenge to GPT-4, which nevertheless remains the stronger model in several comparisons. PaLM 2’s smaller variants, especially lightweight options like Gecko, give it an advantage, particularly on mobile devices.
With the introduction of PaLM 2 and Google’s ongoing development of the multimodal AI model Gemini, the competition for AI dominance has intensified. Google’s commitment to advancing AI technologies indicates a continued drive to innovate and challenge established players like GPT-4. The future will reveal how these language models evolve and how they shape the landscape of natural language processing and AI as a whole.