Top 10 Papers On Transfer Learning One Must Read In 2020

Transfer learning has recently gained attention from researchers and practitioners and has been successfully applied to various domains. In this approach, part of a network that has already been trained on a similar task is reused, one or more new layers are added at the end, and the model is then retrained.
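
The freeze-and-retrain recipe described above can be sketched on a toy problem. Everything here is an illustrative stand-in, not any particular pre-trained network: the "pretrained" extractor is just a frozen random linear layer, and the new head is a logistic-regression layer trained from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained feature extractor: a frozen linear layer + ReLU.
# In practice these weights would come from a model trained on a large dataset.
W_frozen = rng.normal(size=(8, 16))          # kept fixed during fine-tuning

def extract_features(x):
    return np.maximum(0.0, x @ W_frozen)     # frozen forward pass

# New task-specific head: the only part that gets (re)trained.
W_head = np.zeros(16)

# Toy binary-classification data for the target task.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(W):
    p = sigmoid(extract_features(X) @ W)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

lr = 0.1
for _ in range(200):
    feats = extract_features(X)
    p = sigmoid(feats @ W_head)
    grad = feats.T @ (p - y) / len(y)        # gradient w.r.t. the head only
    W_head -= lr * grad                      # the frozen layer is never updated
```

Only `W_head` receives gradient updates; `W_frozen` plays the role of the transferred layers. Real pipelines do the same thing at scale, often unfreezing some of the transferred layers at a lower learning rate later on.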

In this article, we list down the top 10 research papers on transfer learning one must read in 2020. (The papers are listed according to the year of publication.)

1| Pay Attention to Features, Transfer Learn Faster CNNs

About: Transfer learning offers the chance for CNNs to learn with limited data samples by transferring knowledge from models pre-trained on large datasets. In this paper, the researchers proposed attentive feature distillation and selection (AFDS), which not only adjusts the strength of transfer learning regularisation but also dynamically determines the important features to transfer. 

According to the researchers, deploying AFDS on ResNet-101 achieved a state-of-the-art reduction in computation at the same accuracy budget, outperforming all existing transfer learning methods.

2| A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning

About: One security vulnerability of transfer learning is that pre-trained models, also referred to as teacher models, are often publicly available. This means that the part of the model transferred from the pre-trained model is known to potential attackers. 

In this paper, the researchers showed that without any additional knowledge other than the pre-trained model, an attacker can launch an effective and efficient brute force attack that can craft instances of input to trigger each target class with high confidence. To evaluate the proposed attack, the researchers performed a set of experiments on face recognition and speech recognition tasks to show the effectiveness of the attack.
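
The core of the attack — the adversary knows the teacher model and searches for inputs that trigger a chosen target class with high confidence — can be illustrated with a hypothetical stand-in. The random linear "teacher" and the hill-climbing search below are illustrative only, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a publicly available "teacher" model: a fixed softmax
# classifier over 10 classes. The attacker is assumed to know these weights.
W_teacher = rng.normal(size=(20, 10))

def confidence(x, target):
    logits = x @ W_teacher
    probs = np.exp(logits - logits.max())    # subtract max for stability
    probs /= probs.sum()
    return probs[target]

# Simple brute-force / random hill-climbing: perturb a candidate input
# and keep any change that raises the target class's confidence.
def craft_input(target, steps=2000, step_size=0.5):
    x = rng.normal(size=20)
    best = confidence(x, target)
    for _ in range(steps):
        cand = x + step_size * rng.normal(size=20)
        c = confidence(cand, target)
        if c > best:
            x, best = cand, c
    return x, best

x_adv, conf = craft_input(target=3)
```

Because the attacker can query the known model freely, even this naive search drives the target class's confidence far above the roughly 10% a random input would get, which is the vulnerability the paper exploits.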

3| Adversarially Robust Transfer Learning

About: The purpose of this paper is to study the adversarial robustness of models produced by transfer learning. To demonstrate the power of robust transfer learning, the researchers transferred a robust ImageNet source model onto the CIFAR domain, achieving both high accuracy and robustness in the new domain without adversarial training. 

They further used visualisation methods to explore the properties of robust feature extractors. According to the researchers, this approach improved the generalisation of a robust CIFAR-100 model by roughly 2% while preserving its robustness.

4| Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimisation

About: In this paper, the researchers proposed a novel transfer learning method to obtain customised optimisers within the well-established framework of Bayesian optimisation and allowed the algorithm to utilise the proven generalisation capabilities of Gaussian processes. Using reinforcement learning to meta-train an acquisition function (AF) on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency. 
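
For context, the hand-designed acquisition functions that this method meta-learns a replacement for are simple functions of the Gaussian-process posterior. A standard example is Expected Improvement (EI), sketched below for a minimisation problem; `mu` and `sigma` would come from the GP posterior at a candidate point:

```python
import math

def expected_improvement(mu, sigma, best):
    """Expected Improvement acquisition function (minimisation).

    mu, sigma: GP posterior mean and standard deviation at a candidate point.
    best: incumbent, i.e. the best objective value observed so far.
    """
    if sigma <= 0.0:
        return max(best - mu, 0.0)           # degenerate case: no uncertainty
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf
```

Bayesian optimisation picks the next evaluation point by maximising such a function over the search space; the paper's contribution is to replace this fixed formula with an acquisition function meta-trained on related tasks.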

According to the researchers, the results show that the algorithm automatically identifies the structural properties of objective functions from available source tasks or simulations, performs favourably in settings with both scarce and abundant source data, and falls back to the performance level of general AFs if no particular structure is present.

5| DT-LET: Deep Transfer Learning By Exploring Where To Transfer

About: In this paper, the researchers proposed a new mathematical model named Deep Transfer Learning By Exploring Where To Transfer (DT-LET) to solve the heterogeneous transfer learning problem. To select the best matching of layers for transferring knowledge, the researchers defined a specific loss function to estimate the corresponding relationship between high-level features of data in the source domain and the target domain.

6| A Survey on Deep Transfer Learning

About: This survey focuses on reviewing current research on transfer learning using deep neural networks (DNNs) and its applications. The researchers defined deep transfer learning, proposed a categorisation of it, and reviewed recent research works based on the techniques used in deep transfer learning.

7| A Study on CNN Transfer Learning for Image Classification

About: In this paper, the researchers proposed a system that uses a Convolutional Neural Network (CNN) model called Inception-v3. The model was first trained on a base dataset, ImageNet, and then repurposed to learn or transfer features in order to be trained on new datasets such as CIFAR-10 and Caltech Faces. The researchers investigated how well the model would work, in terms of accuracy and efficiency, on new image datasets via transfer learning.

8| A Survey of Transfer Learning

About: This survey paper aims to provide insights into transfer learning techniques for the emerging tech community by reviewing related works, examples of applications addressed by transfer learning, and issues and solutions relevant to the field. It provides an overview of the current methods used in transfer learning as they pertain to data mining tasks for classification, regression, and clustering problems.

9| A Survey on Transfer Learning

About: This survey focuses on categorising and reviewing the current progress on transfer learning for classification, regression and clustering problems. In this survey, the researchers discussed the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. The researchers also explored some potential future issues in transfer learning research.

10| Self-taught Learning: Transfer Learning from Unlabeled Data

About: In this paper, the researchers presented a new machine learning framework called “self-taught learning” for using unlabeled data in supervised classification tasks. This approach to self-taught learning uses sparse coding to construct higher-level features using the unlabeled data where the features form a succinct input representation and significantly improve classification performance.
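
The idea of turning unlabeled data into higher-level features via sparse coding can be sketched as follows. The random dictionary and the crude one-step soft-thresholding encoder below are hypothetical simplifications; the paper learns the dictionary from unlabeled data and solves a full L1-regularised coding problem:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dictionary of basis vectors. In self-taught learning this
# would be learned from unlabeled data via a sparse-coding objective.
D = rng.normal(size=(32, 12))                # 32 basis vectors of dimension 12
D /= np.linalg.norm(D, axis=1, keepdims=True)

def sparse_features(x, lam=0.5):
    """Crude one-step sparse code: soft-thresholded correlations with D.

    A stand-in for fully solving the L1-regularised coding problem; it
    still illustrates the key property that most activations are zero.
    """
    a = D @ x
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

x = rng.normal(size=12)                      # a raw (unlabeled-style) input
code = sparse_features(x)                    # succinct higher-level features
```

The sparse code `code` would then replace the raw input `x` as the feature vector fed to an ordinary supervised classifier, which is the sense in which the unlabeled data improves classification.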

Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
