5 Important Deep Learning Research Papers You Must Read




As technology evolves, deep learning is drawing growing attention from organisations and academics alike. Researchers are applying deep learning techniques to computer vision, autonomous vehicles, and more. In this article, we list five top deep learning research papers you must read.



1| Human-Level Control Through Deep Reinforcement Learning

Authors:

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg & Demis Hassabis

Abstract:

Here the researchers used recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, which can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. They tested this agent on the challenging domain of classic Atari 2600 games and demonstrated that the deep Q-network agent, receiving only the pixels and the game score as inputs, surpassed the performance of all previous algorithms and achieved a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture, and hyperparameters.

Research Methodology:

The researchers set out to create a single algorithm able to develop a wide range of competencies on a varied range of challenging tasks, a central goal of general artificial intelligence that had eluded previous efforts. To achieve this, they developed a novel agent, the deep Q-network (DQN), which combines reinforcement learning with a class of artificial neural network known as deep neural networks.
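DQN approximates the classic Q-learning update with a deep network trained on raw pixels. The tabular update it builds on can be sketched in pure Python; the toy chain environment and the hyperparameters below are hypothetical stand-ins, not the paper's Atari setup:

```python
import random

# Minimal tabular Q-learning on a 5-state chain: move right to reach the goal.
# DQN replaces this lookup table with a deep network (plus experience replay
# and a target network); the sketch shows only the underlying Q-update.
random.seed(0)

N_STATES, ACTIONS = 5, (0, 1)          # action 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # hypothetical hyperparameters
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Return (next_state, reward, done); reward 1.0 only at the goal."""
    nxt = max(0, state - 1) if action == 0 else state + 1
    if nxt == N_STATES - 1:
        return nxt, 1.0, True
    return nxt, 0.0, False

def greedy(state):
    """Pick a highest-valued action, breaking ties at random."""
    best = max(Q[(state, act)] for act in ACTIONS)
    return random.choice([act for act in ACTIONS if Q[(state, act)] == best])

for episode in range(200):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])  # the Q-learning update
        s = s2

# After training, the greedy policy moves right from every non-goal state.
policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)
```

The same update rule drives DQN; the difference is that the agent there reads `Q` off a convolutional network rather than a dictionary, which is what lets it generalise across high-dimensional pixel inputs.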

Read here.

2| DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation, and Re-Identification of Clothing Images

Authors:

Yuying Ge, Ruimao Zhang, Lingyun Wu, Xiaogang Wang, Xiaoou Tang, and Ping Luo

Abstract:

This work presented DeepFashion2, a large-scale fashion image benchmark with comprehensive tasks and annotations. DeepFashion2 contains 491K images, each of which is richly labeled with style, scale, occlusion, zooming, viewpoint, bounding box, dense landmarks and pose, pixel-level masks, and pairs of images of the identical item from a consumer and a commercial store.

Research Methodology:

The researchers established benchmarks covering multiple tasks in fashion understanding, including clothes detection, landmark and pose estimation, clothes segmentation, and consumer-to-shop verification and retrieval. A novel Match R-CNN framework, built upon Mask R-CNN, is proposed to solve the above tasks in an end-to-end manner, and extensive evaluations are conducted on DeepFashion2. Future research focuses on three aspects. First, more challenging tasks will be explored with DeepFashion2, such as synthesizing clothing images by using GANs. Second, multi-domain learning for clothing images will be explored, because fashion trends change frequently, shifting the variations seen in clothing images. Third, more evaluation metrics, such as size, runtime, and memory consumption of deep models, will be introduced into DeepFashion2, towards understanding fashion images in real-world scenarios.

Read here.

3| Semi-Supervised Learning with Ladder Networks

Authors:

Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, Tapani Raiko

Abstract:

The work in this paper builds on top of the Ladder network proposed by Valpola (2015), which the researchers extend by combining the model with supervision. They showed that the resulting model reaches state-of-the-art performance on various tasks: MNIST and CIFAR-10 classification in a semi-supervised setting, and permutation-invariant MNIST in both semi-supervised and full-labels settings.

Research Methodology:

The work combines supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pretraining.
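The combined objective described above, a supervised cost plus unsupervised denoising reconstruction costs summed over layers and minimized jointly, can be sketched numerically. This is an illustrative toy, not the paper's network: the per-layer weights (`lambdas`) and the "layer activations" below are hypothetical stand-ins:

```python
import math

# Toy illustration of the Ladder network's training objective:
#   total cost = supervised cross-entropy on labeled examples
#              + sum over layers l of lambda_l * denoising reconstruction cost,
# all minimized together by backpropagation, with no layer-wise pretraining.

def cross_entropy(pred, label):
    """Supervised cost: negative log-probability of the true class."""
    return -math.log(pred[label])

def reconstruction_cost(clean, denoised):
    """Unsupervised cost: squared error between clean and denoised activations."""
    return sum((c - d) ** 2 for c, d in zip(clean, denoised))

def ladder_cost(pred, label, clean_layers, denoised_layers, lambdas):
    supervised = cross_entropy(pred, label) if label is not None else 0.0
    unsupervised = sum(
        lam * reconstruction_cost(c, d)
        for lam, c, d in zip(lambdas, clean_layers, denoised_layers)
    )
    return supervised + unsupervised

# Hypothetical activations for a 2-layer model (clean vs. denoised paths).
clean = [[0.2, 0.8], [1.0, -1.0]]
denoised = [[0.1, 0.9], [0.9, -0.8]]
lambdas = [1.0, 0.1]  # hypothetical per-layer weights

# A labeled example contributes both terms; an unlabeled one only the second.
labeled_cost = ladder_cost([0.7, 0.3], 0, clean, denoised, lambdas)
unlabeled_cost = ladder_cost(None, None, clean, denoised, lambdas)
print(labeled_cost, unlabeled_cost)
```

The key property this sketch mirrors is that unlabeled data still produces a gradient through the reconstruction term, which is how the Ladder network learns from both kinds of examples in a single backward pass.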

Read here.

4| High-Fidelity Image Generation With Fewer Labels

Authors:

Mario Lucic, Michael Tschannen, Marvin Ritter, Xiaohua Zhai, Olivier Bachem, Sylvain Gelly


Abstract:

Deep generative models are becoming a cornerstone of modern machine learning. Recent work on conditional generative adversarial networks has shown that learning complex, high-dimensional distributions over natural images is within reach. While the latest models can generate high-fidelity, diverse natural images at high resolution, they rely on a vast quantity of labeled data. In this work, the researchers demonstrate how one can benefit from recent work on self- and semi-supervised learning to outperform the state of the art (SOTA) on unsupervised ImageNet synthesis as well as in the conditional setting.

Research Methodology:

The proposed model matches the sample quality (as measured by FID) of the current state-of-the-art conditional model, BigGAN, on ImageNet using only 10% of the labels, and outperforms it using 20% of the labels. With this work, the researchers take a significant step towards closing the gap between conditional and unsupervised generation of high-fidelity images using generative adversarial networks (GANs). They leverage two simple yet powerful concepts. First, self-supervised learning: a semantic feature extractor for the training data can be learned via self-supervision, and the resulting feature representation can then be employed to guide the GAN training process. Second, semi-supervised learning: labels for the entire training set can be inferred from a small subset of labeled training images, and the inferred labels can be used as conditional information for GAN training.
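The second idea, inferring labels for the whole training set from a small labeled subset and then conditioning the GAN on them, can be illustrated with a nearest-neighbour assignment in a feature space. This is a deliberately simplified sketch: the paper uses learned self-supervised features and a trained classifier, whereas the 2-D "features" and class names here are hypothetical:

```python
# Toy sketch of semi-supervised label inference: a few labeled feature
# vectors propagate their labels to unlabeled ones via nearest neighbour;
# the inferred labels could then serve as conditioning for GAN training.
def nearest_label(x, labeled):
    """Assign x the label of its closest labeled feature vector."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(labeled, key=lambda item: dist2(x, item[0]))[1]

# Hypothetical 2-D "self-supervised features": two well-separated clusters.
labeled = [([0.0, 0.0], "cat"), ([5.0, 5.0], "dog")]   # the small labeled subset
unlabeled = [[0.2, -0.1], [4.8, 5.3], [0.1, 0.4]]

inferred = [nearest_label(x, labeled) for x in unlabeled]
print(inferred)  # each point inherits the label of the nearer cluster
```

The sketch captures why the feature space matters: if self-supervision produces features where classes form tight clusters, even a tiny labeled subset suffices to label the rest of the training set reliably.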

Read here.

5| Fast Graph Representation Learning With PyTorch Geometric

Authors:

Matthias Fey & Jan E. Lenssen

Abstract:

This paper introduces PyTorch Geometric, a library for deep learning on irregularly structured input data such as graphs, point clouds, and manifolds, built upon PyTorch. In addition to general graph data structures and processing methods, it contains a variety of recently published methods from the domains of relational learning and 3D data processing.

Research Methodology:

PyTorch Geometric achieves high data throughput by leveraging sparse GPU acceleration, providing dedicated CUDA kernels, and introducing efficient mini-batch handling for input examples of different sizes. In this work, the researchers present the library in detail and perform a comprehensive comparative study of the implemented methods in homogeneous evaluation scenarios.
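The mini-batch handling mentioned above stacks several small graphs into one large disconnected graph, offsetting each graph's node indices so its edges stay valid. The idea can be sketched in plain Python (an illustration only; PyTorch Geometric implements this with tensors via its `Batch` class):

```python
# Sketch of batching variable-size graphs into one disconnected graph:
# node features are concatenated, and every edge index is shifted by the
# number of nodes that precede its graph in the batch.
def batch_graphs(graphs):
    """graphs: list of (node_features, edge_list) pairs."""
    all_nodes, all_edges, batch_vector = [], [], []
    offset = 0
    for graph_id, (nodes, edges) in enumerate(graphs):
        all_nodes.extend(nodes)
        all_edges.extend((src + offset, dst + offset) for src, dst in edges)
        batch_vector.extend([graph_id] * len(nodes))  # maps each node to its graph
        offset += len(nodes)
    return all_nodes, all_edges, batch_vector

# Two tiny graphs: a 2-node edge and a 3-node path.
g1 = ([[1.0], [2.0]], [(0, 1)])
g2 = ([[3.0], [4.0], [5.0]], [(0, 1), (1, 2)])

nodes, edges, batch = batch_graphs([g1, g2])
print(edges)   # g2's edges are shifted by the 2 nodes of g1
print(batch)   # [0, 0, 1, 1, 1]
```

Because the combined graph has no edges between its components, one sparse message-passing pass over the batch is equivalent to processing each graph separately, which is what makes this scheme efficient on the GPU.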

Read here.


