The much-awaited GPT-4 is here. The new transformer model is touted to outperform its predecessor, GPT-3.5 (the model behind ChatGPT), on several competitive exams while also being safer and more aligned. Many have taken this to be one more nail, or perhaps the final nail, in Google's coffin.
Google, for reasons known only to itself, also made several announcements around the time of GPT-4's release. These, however, have hardly been the talk of the town, since the only word to capture public consciousness is GPT. Was Google simply anxious not to be left behind in the AI frenzy?
The big news from Google this week is the release of the 'PaLM API'. Pathways Language Model, or PaLM, is a 540-billion-parameter language model that Google announced last year and is now making accessible to developers. Since then, expectations have been that Google would soon use it to power a variety of products, as it did with BERT, which now underpins the entirety of our search experience. On published benchmarks, the model surpasses the performance of the 175-billion-parameter GPT-3; how it compares with GPT-4, whose architecture remains undisclosed, is harder to say.
The move to offer it as an API before integrating it into Search is quite uncharacteristic of Google. At the same time, though, the push follows the huge cloud market that Microsoft has already tapped into with OpenAI's GPT APIs. This is perhaps why, alongside the API, Google also announced two generative AI products on Google Cloud, which will help developers build products on top of Google's foundation models as well as their own.
In addition, Google will be giving access to 'MakerSuite', a tool developers can use to prototype ideas, with provisions for prompt engineering, synthetic data generation, and custom-model tuning, all of which Google claims will be backed by robust safety tools.
If the current state of AI is largely a vehicle for big tech companies to sell their cloud businesses, it makes sense for Google to go the Microsoft way.
Recently, Microsoft purchased Fungible, a DPU (data processing unit) startup, to streamline its cloud service functions, adding to its acquisition of Lumenisity, a hollow core fibre (HCF) solutions provider, a month earlier. Microsoft has been quite aggressively strengthening Azure.
Nearly all major cloud providers have relationships with AI chip suppliers. AWS has its own silicon alongside custom Intel processors, while Google Cloud uses Arm-based Ampere Altra chips to augment its infrastructure. The cloud game is too competitive to be won easily, and companies are pouring money into it blindly to take the lead.
Unlike Microsoft, Google has repeatedly stressed safe and responsible AI. This is why, instead of flooding the internet with boasts about its achievements, the company is providing limited access to generative AI in Workspace to select testers, starting with Docs and Gmail. This will allow it to pressure-test new experiences before releasing them broadly to end users.
On the contrary, the Redmond-based company made headlines recently for laying off one of its responsible AI teams. “The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very very high to take these most recent OpenAI models and the ones that come after them and move them into customers hands at a very high speed,” reads an article by Platformer.
In a tweet, Emily Bender pointed to a paper from 2018 that gives two recommendations for mitigating systemic bias in language models.
What was unsurprising to Bender was that years later, in the wake of GPT-4's release, OpenAI still failed to disclose details about the architecture (including model size), hardware, training compute, dataset construction, and training method.
It is also notable that Google has set its eyes beyond search, treating the current state of generative AI primarily as a productivity tool, one that gives users the liberty to accept, edit, or modify its suggestions.
Among other things, Google's announcements this week also include 'Med-PaLM 2', a medical language model that is an 18% improvement over its predecessor. The model, which Google says performs at an "expert" doctor level, is already being used to explore AI-assisted applications in ultrasound, cancer treatment planning, and tuberculosis screening.
Meanwhile, the Google-backed 'Anthropic' also released its own chatbot, 'Claude', which is now generally available. The AI startup had been quietly testing the model with partners like Robin AI, AssemblyAI, Notion, Quora and DuckDuckGo. Anthropic is addressing the general pitfalls of chatbots like ChatGPT, which are known for showing bias, producing harmful content and hallucinating, with a technique called "constitutional AI".
While previous techniques needed tens of thousands of human feedback labels, constitutional AI uses only a list of rules or principles to train less harmful AI assistants. The new technique also makes it possible to correct undesired AI behaviour simply by changing the principles provided, instead of fine-tuning on large RLHF (reinforcement learning from human feedback) datasets.
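The core mechanism can be illustrated with a toy sketch. This is not Anthropic's implementation: the real method has the model critique and revise its own outputs against written principles and then trains on those revisions; the sketch below only mimics the critique-and-revise loop at inference time, and every function and string here (the stub model, the heuristics) is a hypothetical stand-in rather than a real LLM call.

```python
# Illustrative sketch of a constitutional-AI-style critique-and-revise loop.
# A short list of written principles replaces thousands of human preference
# labels; the "model" is a deterministic stub, not a real LLM.

PRINCIPLES = [
    "Do not produce insulting or harassing language.",
    "Do not state unverified claims as fact.",
]

def stub_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns a canned draft or revision."""
    if "Rewrite" in prompt:
        # The 'revision' pass: pretend the model cleaned up its own draft.
        return "I can't verify that, but here is what is known."
    return "That is definitely true, you idiot."  # deliberately bad draft

def critique(response: str) -> list:
    """Flag which principles the draft appears to violate (toy heuristics)."""
    violations = []
    if "idiot" in response:
        violations.append(PRINCIPLES[0])
    if "definitely true" in response:
        violations.append(PRINCIPLES[1])
    return violations

def constitutional_step(user_prompt: str) -> str:
    """One round: draft an answer, critique it, revise if needed."""
    draft = stub_model(user_prompt)
    problems = critique(draft)
    if not problems:
        return draft
    # Ask the model to rewrite its own answer against the violated principles.
    revision_prompt = f"Rewrite this to satisfy: {problems}\n\n{draft}"
    return stub_model(revision_prompt)

print(constitutional_step("Is this claim correct?"))
```

The point of the sketch is the article's claim about maintainability: changing the entries in `PRINCIPLES` changes the assistant's behaviour directly, with no relabelling of a large human-feedback dataset required.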
Beneath the current wave of AI hype, a more nuanced story is playing out, one that centres on the balance between noise and impact. Google's primary objective, as seen with Med-PaLM, appears to be leveraging the AI trend to create tangible value.
Moreover, one thing Google has believed in since its inception is making technology do the work for us, rather than making users do the work for technology. What we have seen so far from Microsoft, and its closest ally OpenAI, is the opposite: end users do the ultimate work of training and improving the model by interacting with it more and more. In this light, the hope is that Google sets a precedent for others to follow and makes the technology create value in our everyday lives.