
Google’s NLP-Powered Pretraining Method ALBERT Is Leaner & Meaner

Natural Language Processing (NLP) is one of the most diverse domains in emerging tech. Last year, search engine giant Google open-sourced Bidirectional Encoder Representations from Transformers (BERT), a technique for NLP pre-training. The release allowed researchers to train a number of state-of-the-art models in about 30 minutes on a single Cloud TPU, or in a few hours using a single GPU.

Now, researchers at Google have designed A Lite BERT (ALBERT), a modified version of the traditional BERT model. The new model incorporates two parameter-reduction techniques, factorised embedding parameterisation and cross-layer parameter sharing, to lift the major obstacles in scaling pre-trained models in NLP. ALBERT has outperformed previous state-of-the-art models, establishing new results on benchmarks such as GLUE, SQuAD 2.0 and RACE.
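
To make the two ideas concrete, here is a minimal, illustrative PyTorch sketch of factorised embedding parameterisation and cross-layer parameter sharing. The class names, dimensions and layer counts below are assumptions chosen for readability, not the actual ALBERT implementation or configuration.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only (not the real ALBERT configuration).
VOCAB_SIZE = 30000   # V: vocabulary size
EMBED_SIZE = 128     # E: small factorised embedding dimension
HIDDEN_SIZE = 768    # H: transformer hidden dimension
NUM_LAYERS = 12      # number of (shared) encoder layers


class FactorisedEmbedding(nn.Module):
    """Factorised embedding parameterisation: instead of one V x H matrix,
    use a V x E lookup followed by an E x H projection, cutting embedding
    parameters from V*H to V*E + E*H when E << H."""

    def __init__(self, vocab_size, embed_size, hidden_size):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, embed_size)  # V x E
        self.projection = nn.Linear(embed_size, hidden_size)         # E x H

    def forward(self, token_ids):
        return self.projection(self.word_embeddings(token_ids))


class SharedLayerEncoder(nn.Module):
    """Cross-layer parameter sharing: a single encoder layer is applied
    repeatedly, so depth grows without growing the parameter count."""

    def __init__(self, hidden_size, num_layers, num_heads=12):
        super().__init__()
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=num_heads, batch_first=True
        )
        self.num_layers = num_layers

    def forward(self, hidden_states):
        for _ in range(self.num_layers):  # reuse the same weights each pass
            hidden_states = self.shared_layer(hidden_states)
        return hidden_states


# Compare embedding parameter counts: V*H versus V*E + E*H.
full = VOCAB_SIZE * HIDDEN_SIZE
factorised = VOCAB_SIZE * EMBED_SIZE + EMBED_SIZE * HIDDEN_SIZE
print(f"unfactorised: {full:,} params, factorised: {factorised:,} params")
```

With these toy sizes the embedding table shrinks from roughly 23 million to under 4 million parameters, which illustrates why the factorisation pays off when the vocabulary is large and the hidden size is much bigger than the embedding size.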

Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.