In the early 1960s, AI pioneer Herbert Simon predicted that within two decades, machines would match the cognitive abilities of humankind. Predictions like these motivated theorists, sceptics and thinkers from a cross-section of domains to find ways to use computers to perform routine tasks. From Heron’s automatons in the first century to Google’s DeepMind in the 21st century, mankind has yearned to make machines more ‘human’.
The latest developments in AI, especially in applications of Generative Adversarial Networks (GANs), can help researchers tackle the final frontier of replicating human intelligence. With a new paper being released every week, GANs are proving to be a front-runner in the pursuit of the ultimate goal: artificial general intelligence (AGI).
Here are a few papers that illustrate the growing popularity of GANs:
DCGAN
This paper attempts to bridge the gap between the success of CNNs for supervised learning and their use in unsupervised learning. It introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) and demonstrates that they are a strong candidate for unsupervised learning. The network is trained on various image datasets, and the results show that the deep convolutional adversarial pair learns a hierarchy of representations, from object parts to scenes, in both the generator and the discriminator.
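As a rough illustration of the architecture (a sketch, not the authors’ code), the DCGAN generator replaces pooling and fully connected layers with fractionally-strided (transposed) convolutions that repeatedly double spatial resolution. The plain-Python sketch below assumes the common kernel-4/stride-2/padding-1 configuration and the paper’s 100-dimensional noise input projected to a 4×4 map, growing to a 64×64 RGB image:

```python
def deconv_out(size, kernel=4, stride=2, pad=1):
    """Output spatial size of a fractionally-strided (transposed) convolution."""
    return (size - 1) * stride - 2 * pad + kernel

# DCGAN-style generator ladder: project z (100-d) to 4x4x1024,
# then double the resolution (and halve the channels) at each layer.
size, channels = 4, 1024
ladder = [(size, channels)]
for _ in range(4):
    size = deconv_out(size)                  # 4 -> 8 -> 16 -> 32 -> 64
    channels = 3 if size == 64 else channels // 2
    ladder.append((size, channels))

print(ladder)  # [(4, 1024), (8, 512), (16, 256), (32, 128), (64, 3)]
```

Each step of the ladder is one transposed convolution, ending at the 3-channel 64×64 output image.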
Check the full paper here
StyleGAN
This paper proposes an alternative generator architecture for GANs, borrowing from the style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes, such as head pose and freckles, when trained on human faces. A new, highly varied and high-quality dataset of human faces was also introduced with this paper.
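The separation of attributes comes from feeding a learned style vector into each layer of the generator, and "style mixing" feeds different style vectors to different layers. A toy sketch, assuming the 18-layer generator used for 1024×1024 faces (the strings below are placeholders for real style vectors):

```python
def style_mix(w1, w2, n_layers, crossover):
    """Coarse layers (before the crossover point) take their style from w1,
    fine layers from w2 -- mixing attributes of two latent codes."""
    return [w1 if layer < crossover else w2 for layer in range(n_layers)]

# Coarse attributes (pose, face shape) follow wA;
# finer details (freckles, hair texture) follow wB.
styles = style_mix("wA", "wB", n_layers=18, crossover=4)
```

Moving the crossover point controls which attributes are inherited from which latent code.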
Check the full paper here
CycleGAN
This paper illustrates a method for unpaired image-to-image translation. The goal is to learn a mapping between input and output domains in the absence of paired examples; a cycle-consistency constraint then requires that applying the inverse mapping to a translated image recovers the original.
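The round-trip idea can be sketched with stand-in functions: if G translates domain X to Y and F translates Y back to X, the cycle-consistency loss penalises the difference between an image and its round-trip reconstruction. In this toy sketch, G and F are hypothetical invertible functions, not learned networks:

```python
def l1(a, b):
    """Mean absolute difference between two 'images' (flat lists of pixels)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

G = lambda x: [v + 1.0 for v in x]  # stand-in for the learned mapping X -> Y
F = lambda y: [v - 1.0 for v in y]  # stand-in for the inverse mapping Y -> X

def cycle_loss(x, y):
    # Translating there and back should reconstruct the original image,
    # so both round-trip errors are driven towards zero during training.
    return l1(F(G(x)), x) + l1(G(F(y)), y)
```

For a perfectly invertible pair like this one, the loss is (numerically) zero; training pushes the real networks towards that state.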
Check the full paper here
Conditional GANs
This paper presents a conditional version of generative adversarial nets, constructed by simply feeding the data to be conditioned on to both the generator and the discriminator. The model can generate MNIST digits conditioned on class labels, and can also be used to learn a multi-modal model; the paper provides preliminary examples of an application to image tagging, generating descriptive tags that are not part of the training labels.
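The conditioning itself is straightforward: the class label (here one-hot encoded) is simply appended to the generator’s noise vector, and the same label information is given to the discriminator alongside the image. A minimal sketch, with illustrative dimensions:

```python
import random

def one_hot(label, n_classes=10):
    """One-hot encode an MNIST-style class label."""
    v = [0.0] * n_classes
    v[label] = 1.0
    return v

def conditioned_input(label, z_dim=100, n_classes=10):
    """Noise vector with the class label appended -- the same conditioning
    information is fed to both the generator and the discriminator."""
    z = [random.gauss(0.0, 1.0) for _ in range(z_dim)]
    return z + one_hot(label, n_classes)

x = conditioned_input(label=7)  # 100 noise dims + 10 label dims = 110
```

At sampling time, fixing the label while varying the noise yields different renderings of the same digit.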
Check the full paper here
Progressive Growing GAN (PGGANs)
This paper focuses on growing both the generator and discriminator progressively: training starts at a low resolution, and new layers are added as training progresses. This both speeds up and stabilises training while producing high-quality images.
This paper also describes several implementation details that are important for discouraging unhealthy competition between the generator and discriminator.
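The growing schedule and the smooth "fade-in" of each new layer can be sketched in a few lines. The blending weight alpha, ramped from 0 to 1, is how new layers are introduced without shocking the already-trained lower-resolution ones; the short lists below stand in for real feature maps:

```python
# Resolution doubles with each added layer pair: 4x4 up to 1024x1024.
resolutions = [4 * 2 ** i for i in range(9)]

def fade_in(alpha, upsampled_old, new_layer_out):
    """Blend the new higher-resolution layer in gradually as alpha goes 0 -> 1.
    At alpha = 0 only the upsampled old output is used; at 1, only the new layer."""
    return [(1 - alpha) * a + alpha * b
            for a, b in zip(upsampled_old, new_layer_out)]

half_way = fade_in(0.5, [0.0, 2.0], [2.0, 0.0])  # equal blend of old and new
```

The same fade-in is applied symmetrically on the discriminator side as its input resolution grows.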
Check the full paper here
Large Scale GAN
This paper trains Generative Adversarial Networks at the largest scale yet attempted and studies the instabilities specific to such scale. The approach enables fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator’s input.
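Reducing the variance of the generator’s input amounts to sampling the latent vector from a truncated normal: any component falling outside a threshold is resampled. A small plain-Python sketch (the dimension and threshold values are illustrative):

```python
import random

def truncated_noise(dim, threshold, rng=random.Random(0)):
    """Sample z ~ N(0, 1) componentwise, resampling any value whose magnitude
    exceeds the threshold. Lower thresholds favour fidelity over variety."""
    out = []
    for _ in range(dim):
        z = rng.gauss(0.0, 1.0)
        while abs(z) > threshold:
            z = rng.gauss(0.0, 1.0)
        out.append(z)
    return out

z = truncated_noise(128, threshold=0.5)
```

Sweeping the threshold traces out the fidelity-variety trade-off: tighter truncation gives more typical, higher-quality samples at the cost of diversity.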
Check the full paper here
StackGAN
Stacked Generative Adversarial Networks (StackGAN) generate 256×256 photo-realistic images conditioned on text descriptions. The hard problem is decomposed into more manageable sub-problems through a sketch-refinement process: a first stage sketches the primitive shape and colours of the object from the given text description, and a second stage takes that result, together with the text, and generates high-resolution images with photo-realistic details.
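The two-stage decomposition can be sketched as a simple pipeline; the dictionaries below are placeholders for real images and text embeddings, not the paper’s implementation:

```python
def stage1(text_embedding):
    # Stage-I: sketch the object's primitive shape and colours
    # at low resolution (64x64), conditioned on the text.
    return {"res": 64, "text": text_embedding}

def stage2(sketch):
    # Stage-II: condition on the Stage-I sketch plus the text again,
    # correcting defects and adding photo-realistic detail at 256x256.
    return {"res": 256, "text": sketch["text"]}

image = stage2(stage1("a small bird with a red head"))
```

Passing the text into both stages lets the refinement stage recover details the sketch stage missed.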
Check the full paper here
Evolutionary GANs
This paper proposes E-GAN, which utilises different adversarial training objectives as mutation operations and evolves a population of generators to adapt to the environment (i.e., the discriminator).
E-GAN overcomes the limitations of an individual adversarial training objective and always preserves the best offspring.
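The evolutionary loop can be sketched abstractly: each generation, several mutation operators (standing in for the different adversarial objectives) produce offspring from a parent generator, and only the fittest survives. Everything below, the scalar "generators", the fitness function and the mutation operators, is a toy stand-in rather than the paper’s actual objectives:

```python
def fitness(generator):
    """Toy fitness: how well this 'generator' suits the current
    discriminator environment (higher is better, optimum at 0.5)."""
    return -abs(generator - 0.5)

# Stand-ins for the different adversarial objectives used as mutations.
mutations = [lambda g: g + 0.1, lambda g: g - 0.1, lambda g: g * 0.9]

def evolve(parent, generations=20):
    for _ in range(generations):
        offspring = [m(parent) for m in mutations]
        parent = max(offspring, key=fitness)  # preserve only the best offspring
    return parent
```

Because a different mutation can win in each generation, the population is never locked into a single training objective, which is the limitation E-GAN sets out to overcome.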
Check the full paper here