Alphabet CEO Sundar Pichai has announced LaMDA 2 at the Google I/O conference. Last year, the tech giant launched LaMDA built on Transformer – a neural network architecture open-sourced by Google Research in 2017.
“We are continuing to advance our conversational capabilities. Conversation and natural language processing (NLP) are powerful ways to make computers more accessible to everyone. Large language models are key to this,” said Pichai.
He said thousands of its employees tested the second-generation LaMDA. The team has been able to reduce inaccurate or offensive responses in the new iteration.
He said LaMDA 2 can talk about almost any topic, from Saturn’s rings to a planet made of ice cream. LaMDA has received many enhancements since its launch.
“Staying on topic is a challenge for language models,” said Pichai. While demoing LaMDA 2 and the AI Test Kitchen app, Google’s senior director of product management Josh Woodward said the model can take a complex goal or topic, break it down into relevant sub-tasks, and offer tips, making the whole task feel a lot less daunting.
Explaining further, Pichai said you could input all kinds of goals, whether moving to a new city or learning an instrument. “These experiences show the potential of language models to one day help us with things like planning, learning about the world, and more,” said Pichai.
LaMDA still has a few teething troubles. “The model might still generate inaccurate, inappropriate or offensive responses. That’s why we are inviting feedback in the app so people can help report problems. We will be doing all of this work in accordance with our AI principles,” said Pichai at Google I/O 2022.
He said the company is looking to open up LaMDA 2 over the coming months and get feedback from a broad range of stakeholders, including AI researchers, social scientists, and human rights experts, to improve the model.
“To explore other areas of natural language processing (NLP) and AI, we recently announced a new Pathways language model or PaLM,” said Pichai. The model has 540 billion parameters. He claimed it demonstrated breakthrough performance on many NLP tasks, including generating code from text, answering math word problems, and even explaining jokes.
He said combining this scale with the chain-of-thought prompting technique lets the model work through multi-step problems as a series of intermediate steps, which increases accuracy by a large margin. “This leads to SOTA performance across several reasoning benchmarks,” Pichai added. PaLM offers a new approach that holds enormous promise for making knowledge more accessible to everyone.
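The idea behind chain-of-thought prompting can be sketched in a few lines: instead of showing the model only question–answer pairs, each few-shot exemplar spells out the intermediate reasoning before the final answer, so the model imitates that step-by-step pattern on a new question. The snippet below is a generic illustration of the prompt format, not Google’s actual PaLM prompt; the exemplar text and helper function are illustrative assumptions.

```python
# A generic illustration of chain-of-thought prompting (not Google's
# actual PaLM prompt). The exemplar includes intermediate reasoning
# steps before the final answer, encouraging the model to reason
# step by step on the new question appended at the end.
FEW_SHOT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar so the model answers step by step."""
    return FEW_SHOT_EXEMPLAR + f"Q: {question}\nA:"

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples are there?"
)
print(prompt)
```

Sent to a large language model, a prompt like this tends to elicit the same worked-out style of answer, which is the accuracy gain Pichai described.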