
Top 8 Adversarial Methods For Transfer Learning

Ambika Choudhury

Adversarial learning is one of the most promising ways to train robust and secure deep learning networks. Transfer learning is a critical approach that enables training deep neural networks (DNNs) faster and with relatively less data than training from scratch.

In this article, we list the top 8 adversarial methods one can use for transfer learning.

(The list is in alphabetical order)



1| Adversarial Discriminative Domain Adaptation (ADDA)

About: Adversarial Discriminative Domain Adaptation (ADDA) is a unified framework for unsupervised domain adaptation based on adversarial learning objectives. The framework combines adversarial learning with discriminative feature learning. It first learns a discriminative representation using the labels in the source domain, and then a separate encoder maps the target data to the same space via an asymmetric mapping learned through a domain-adversarial loss.

Read here.
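To make the two-stage idea concrete, here is a minimal PyTorch sketch of ADDA's adaptation stage. The encoder architecture, layer sizes and learning rates are illustrative assumptions rather than the paper's actual settings, and the source encoder is assumed to have already been pre-trained with source labels.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
    def forward(self, x):
        return self.net(x)

source_enc = Encoder()                                 # assumed pre-trained on labelled source data
target_enc = Encoder()
target_enc.load_state_dict(source_enc.state_dict())   # initialise the target encoder from the source one
discriminator = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_t = torch.optim.Adam(target_enc.parameters(), lr=1e-4)

def adda_step(x_src, x_tgt):
    # 1) Train the discriminator to tell source features from target features.
    f_src, f_tgt = source_enc(x_src).detach(), target_enc(x_tgt).detach()
    d_loss = bce(discriminator(f_src), torch.ones(len(x_src), 1)) + \
             bce(discriminator(f_tgt), torch.zeros(len(x_tgt), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Train the target encoder to fool the discriminator (inverted labels).
    g_loss = bce(discriminator(target_enc(x_tgt)), torch.ones(len(x_tgt), 1))
    opt_t.zero_grad(); g_loss.backward(); opt_t.step()

adda_step(torch.randn(32, 784), torch.randn(32, 784))  # one adaptation step on dummy batches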

2| Coupled Generative Adversarial Networks 

About: Coupled Generative Adversarial Networks, or CoGAN, can learn a joint distribution of multi-domain images without requiring corresponding image pairs across domains in the training set. CoGAN consists of a tuple of GANs, one for each image domain. The framework is inspired by the idea that deep neural networks learn a hierarchical feature representation.

Read here.
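As a rough illustration of the weight-sharing idea behind CoGAN, the PyTorch sketch below couples two generators so that a single noise vector produces a pair of images, one per domain. The layer sizes and module names are assumptions made for the example; the full method also ties the later layers of the two discriminators in the same spirit.

import torch
import torch.nn as nn

class CoupledGenerators(nn.Module):
    def __init__(self, noise_dim=100, img_dim=784):
        super().__init__()
        # Shared layers decode high-level semantics common to both domains.
        self.shared = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU())
        # Domain-specific heads render the low-level appearance of each domain.
        self.head_a = nn.Linear(256, img_dim)
        self.head_b = nn.Linear(256, img_dim)

    def forward(self, z):
        h = self.shared(z)
        return torch.tanh(self.head_a(h)), torch.tanh(self.head_b(h))

gen = CoupledGenerators()
img_a, img_b = gen(torch.randn(8, 100))  # a correlated pair of samples, one per domain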

3| CyCADA: Cycle-Consistent Adversarial Domain Adaptation

About: Cycle-Consistent Adversarial Domain Adaptation or CyCADA is an adversarial unsupervised adaptation algorithm which uses cycle and semantic consistency to perform adaptation at multiple levels in a deep network. The model guides transfer between domains according to a specific discriminatively trained task and avoids divergence by enforcing consistency of the relevant semantics before and after adaptation. It is also said to provide significant performance improvements over source model baselines. 

Read here.
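The sketch below shows one way the CyCADA-style loss terms might be combined in PyTorch. The module names (G_st, G_ts, f, C, D_img, D_feat), the equal loss weights and the single-direction semantic term are illustrative simplifications, not the authors' implementation.

import torch
import torch.nn.functional as F

def cycada_losses(x_s, y_s, x_t, G_st, G_ts, f, C, D_img, D_feat):
    # Every module here is assumed to be provided by the caller: pixel-level
    # generators between domains, a feature encoder f, a source-trained
    # classifier C, and pixel- and feature-level discriminators.
    fake_t = G_st(x_s)                    # source images rendered in target style
    fake_s = G_ts(x_t)                    # target images rendered in source style

    # Pixel-level adversarial term: translated images should fool D_img.
    logits = D_img(fake_t)
    adv_img = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Cycle consistency: translating there and back should reconstruct the input.
    cycle = F.l1_loss(G_ts(fake_t), x_s) + F.l1_loss(G_st(fake_s), x_t)

    # Semantic consistency: the source classifier should predict the same labels
    # before and after translation.
    sem = F.cross_entropy(C(f(fake_t)), C(f(x_s)).argmax(dim=1))

    # Feature-level adversarial term: target features should fool D_feat.
    feat_logits = D_feat(f(x_t))
    adv_feat = F.binary_cross_entropy_with_logits(feat_logits, torch.ones_like(feat_logits))

    # Task loss on translated source data with the original source labels.
    task = F.cross_entropy(C(f(fake_t)), y_s)

    return task + adv_img + adv_feat + cycle + sem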



4| Duplex Generative Adversarial Network for Unsupervised Domain Adaptation

About: Domain adaptation attempts to transfer the knowledge obtained from the source domain to the target domain. Duplex Generative Adversarial Network for Unsupervised Domain Adaptation or DupGAN is a novel GAN architecture with duplex adversarial discriminators, which can achieve domain-invariant representation and domain transformation. According to the researchers, the model achieved state-of-the-art performance on unsupervised domain adaptation of digit classification and object recognition.

Read here.
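A minimal PyTorch sketch of the duplex layout is given below: a shared encoder, a generator conditioned on a domain code, and two discriminators that each output both a realness score and class logits. All sizes and names are assumptions made for illustration, not taken from the paper's code.

import torch
import torch.nn as nn

latent, n_classes, img_dim = 64, 10, 784

encoder = nn.Sequential(nn.Flatten(), nn.Linear(img_dim, latent), nn.ReLU())

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent + 1, 256), nn.ReLU(),
                                 nn.Linear(256, img_dim), nn.Tanh())
    def forward(self, z, domain_code):
        # domain_code 0 renders a source-style image, 1 a target-style image.
        code = torch.full((z.size(0), 1), float(domain_code))
        return self.net(torch.cat([z, code], dim=1))

class DuplexDiscriminator(nn.Module):
    # One of the two discriminators: outputs a realness score plus class logits.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU())
        self.real = nn.Linear(256, 1)
        self.cls = nn.Linear(256, n_classes)
    def forward(self, x):
        h = self.body(x.flatten(1))
        return self.real(h), self.cls(h)

G = Generator()
D_source, D_target = DuplexDiscriminator(), DuplexDiscriminator()

x = torch.randn(8, img_dim)
x_as_target = G(encoder(x), domain_code=1)       # domain transformation
realness, class_logits = D_target(x_as_target)   # duplex output: real/fake and category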

5| Drop to Adapt: Learning Discriminative Features for Unsupervised Domain Adaptation

About: Drop to Adapt (DTA) is a method for unsupervised domain adaptation that remains effective under large domain shifts. It leverages a non-stochastic dropout mechanism, Adversarial Dropout (AdD), to learn strongly discriminative features by enforcing the cluster assumption.

Read here.
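The PyTorch sketch below illustrates the underlying consistency idea with assumed layer sizes. For brevity it perturbs features with ordinary random dropout; the actual AdD mechanism instead selects the dropout mask adversarially (non-stochastically) so that the prediction changes as much as possible.

import torch
import torch.nn as nn
import torch.nn.functional as F

features = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
classifier = nn.Linear(256, 10)

def dta_consistency_loss(x_target, drop_p=0.5):
    f = features(x_target)
    p_clean = F.softmax(classifier(f), dim=1)
    # Perturb the features with dropout and ask for the same prediction,
    # which pushes the decision boundary away from dense target regions.
    log_p_drop = F.log_softmax(classifier(F.dropout(f, p=drop_p, training=True)), dim=1)
    return F.kl_div(log_p_drop, p_clean.detach(), reduction="batchmean")

loss = dta_consistency_loss(torch.randn(32, 784))  # add this to the supervised source loss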

6| A DIRT-T Approach to Unsupervised Domain Adaptation


About: Domain adaptation refers to the problem of leveraging labelled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. Here, the researchers proposed two novel and related models: Virtual Adversarial Domain Adaptation (VADA) and Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T).

VADA combines domain adversarial training with a penalty term that punishes violations of the cluster assumption, while DIRT-T takes the VADA model as initialisation and employs natural gradient steps to further minimise cluster assumption violations.

Read here.
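Below is a minimal PyTorch sketch of a DIRT-T-style refinement step, assuming a classifier already trained with VADA. For brevity the cluster-assumption penalty is reduced to conditional entropy and the update is plain Adam; the paper additionally uses virtual adversarial training and natural-gradient-style steps.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
teacher = copy.deepcopy(student)        # frozen snapshot from the previous refinement round
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def dirt_t_step(x_target, beta=1.0):
    logits = student(x_target)
    log_p = F.log_softmax(logits, dim=1)
    p = log_p.exp()
    # Conditional entropy: small when target points lie far from the decision boundary.
    entropy = -(p * log_p).sum(dim=1).mean()
    # Keep the refined model close to the teacher's predictions.
    with torch.no_grad():
        p_teacher = F.softmax(teacher(x_target), dim=1)
    kl = F.kl_div(log_p, p_teacher, reduction="batchmean")
    loss = entropy + beta * kl
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

dirt_t_step(torch.randn(32, 784))       # periodically refresh `teacher` from `student`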

7| Domain-Adversarial Training of Neural Networks 

About: Domain-Adversarial Training of Neural Networks, or DANN, is a representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. A Domain-Adversarial Neural Network is a deep feed-forward architecture composed of standard layers and loss functions, and it can be trained using standard backpropagation based on stochastic gradient descent or its modifications.

Read here.
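The core trick in DANN is the gradient reversal layer, sketched below in PyTorch with illustrative layer sizes. The domain classifier's gradient is flipped before it reaches the shared feature extractor, so ordinary backpropagation pushes the features towards domain invariance while the label head stays accurate on source data.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; flips (and scales) the gradient on the backward pass.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

features = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
label_head = nn.Linear(256, 10)     # trained with labelled source data
domain_head = nn.Linear(256, 2)     # predicts source vs. target

def dann_forward(x, lambd=1.0):
    f = features(x)
    class_logits = label_head(f)
    domain_logits = domain_head(GradReverse.apply(f, lambd))
    return class_logits, domain_logits

# Both heads use ordinary cross-entropy; plain SGD/backpropagation trains everything.
class_logits, domain_logits = dann_forward(torch.randn(32, 784))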

8| Deep Transfer Learning with Joint Adaptation Networks

About: Joint Adaptation Networks, or JAN, is an approach to deep transfer learning that enables end-to-end learning of transferable representations. It learns a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion, and it is designed for unsupervised domain adaptation.

Read here.
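The sketch below shows a simplified JMMD-style penalty in PyTorch, built from Gaussian kernels over two activation layers per domain. The real JAN implementation uses multi-kernel estimators and a linear-time approximation; this quadratic version, with assumed layer shapes and bandwidth, is only meant to show the idea of multiplying per-layer kernels.

import torch

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of a and the rows of b.
    return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))

def jmmd(source_layers, target_layers):
    # The joint kernel is the element-wise product of the per-layer kernels.
    k_ss = k_tt = k_st = 1.0
    for s, t in zip(source_layers, target_layers):
        k_ss = k_ss * gaussian_kernel(s, s)
        k_tt = k_tt * gaussian_kernel(t, t)
        k_st = k_st * gaussian_kernel(s, t)
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()

# Activations from two task-specific layers for each domain (dummy shapes):
penalty = jmmd([torch.randn(32, 256), torch.randn(32, 10)],
               [torch.randn(32, 256), torch.randn(32, 10)])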
