Does Neural Network Compression Impact Transfer Learning?

The popularity of compression techniques has grown with the increasing size of machine learning models, driven by ever-larger parameter counts (we are talking billions) among other factors. Compression comes in handy when a model…

Transformers Simplified: A Hands-On Intro To Text Classification Using Simple Transformers 

In the past few years, we have seen tremendous improvements in the ability of machines to deal with natural language. Algorithms have broken state-of-the-art records one after another on a variety of language-specific tasks, all thanks to transformers…
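
The Simple Transformers library the title refers to wraps Hugging Face's transformers behind a very small training API. As a minimal sketch (assuming the simpletransformers package and its ClassificationModel interface, with a tiny illustrative dataset), a binary text classifier can be trained in a few lines:

    import pandas as pd
    from simpletransformers.classification import ClassificationModel

    # Tiny illustrative dataset; simpletransformers expects the
    # columns "text" and "labels"
    train_df = pd.DataFrame(
        [["the movie was great", 1], ["the plot made no sense", 0]],
        columns=["text", "labels"],
    )

    # Load pre-trained BERT weights and attach a two-class classification head
    model = ClassificationModel("bert", "bert-base-uncased", num_labels=2, use_cuda=False)

    # Fine-tune on the labelled examples
    model.train_model(train_df)

    # Predict labels for unseen sentences
    predictions, raw_outputs = model.predict(["an enjoyable watch"])
    print(predictions)

The same three calls (construct, train_model, predict) apply across the model types the library supports, which is what makes it a gentle entry point to transformer-based text classification.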

How To Build A BERT Classifier Model With TensorFlow 2.0

BERT is one of the most popular algorithms in NLP, known for producing state-of-the-art results on a variety of language modelling tasks. Built on top of transformers and sequence-to-sequence models, Bidirectional Encoder Representations from Transformers is a…
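
As a hedged sketch of what building such a classifier can look like (assuming a recent version of Hugging Face's transformers library with TensorFlow 2 support, and an illustrative two-sentence dataset), the TF 2.0 workflow is the standard Keras compile/fit loop:

    import tensorflow as tf
    from transformers import BertTokenizer, TFBertForSequenceClassification

    # Pre-trained tokenizer plus a BERT body with a freshly initialised
    # two-class classification head
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

    # Tiny illustrative dataset
    texts = ["a wonderful film", "a complete waste of time"]
    labels = tf.constant([1, 0])
    encodings = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")

    # The model outputs logits, so the loss is configured with from_logits=True
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    model.fit(dict(encodings), labels, epochs=1, batch_size=2)

The small learning rate (around 3e-5) matters in practice: fine-tuning BERT with typical Keras defaults tends to destroy the pre-trained weights.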

Behind Google’s BERT Implementation In Search Queries

On 25 October this year, Google announced changes to its search algorithm, a major step towards integrating natural language processing to optimise search results. The tech giant boasts that with this tweak, it can deliver the most relevant results…

Why Transformers Play A Crucial Role In NLP Development

Recent advances in Natural Language Processing (NLP) research have been dominated by the combination of transfer learning methods with large-scale Transformer language models. Creating these general-purpose models remains an expensive and time-consuming process, restricting the use of these methods…

Google Turns 21! Here’s A Look At The Search Giant’s Top 21 Machine Learning Contributions 

Google revolutionised the way the world uses the internet with its landmark PageRank algorithm. Two decades on, Google has grown into an AI powerhouse producing state-of-the-art algorithms that touch almost every domain known to mankind. As Google turns…

How Language Models Can Be Used In Real-Time Use Cases

Natural language processing (NLP) has reached new heights over the past few years. Pre-trained, high-capacity language models such as ELMo and BERT have gained popularity in NLP. Language modelling has been implemented in a number of applications…

Baidu’s ERNIE 2.0 Gets Top NLP Honours, Eclipses BERT & XLNet

Machine learning models deployed for vision and natural language processing (NLP) tasks often have more than a billion parameters. This allows for better results as the model generalises over a wide range of parameters. Pre-trained language…

How Good Is BERT For Filling The Gap Between Accuracy Scores & Language Comprehension?

BERT has set a new benchmark for NLP tasks, and this has been well documented over the past six months. Bidirectional Encoder Representations from Transformers, or BERT, which was open-sourced last year, offered new ground to embattle…

How Can Memory Augmentation Work Wonders For Large Scale NLP Tasks

Current machine learning models deployed for vision and natural language processing (NLP) tasks have more than a billion parameters. This allows for better results as the model generalises over a wide range of parameters. But there is…
