Top 10 ML Papers On Papers With Code

Papers With Code is the go-to resource for discovering and comparing the latest state-of-the-art (SOTA) machine learning papers, along with their code and results. The platform currently hosts 4,995 benchmarks, 2,305 tasks, and 49,190 papers with code.

Besides Papers With Code, other notable resources and tools for tracking machine learning research papers include arXiv Sanity, 42 Papers, Crossminds, and Connected Papers.

Papers With Code is a self-contained team within Facebook AI Research. Its open-source, community-centric approach offers researchers access to papers, frameworks, datasets, libraries, models, benchmarks, etc. 

Here, we have rounded up the top 10 machine learning research papers on ‘Papers With Code.’ 

TensorFlow: A system for large-scale machine learning

TensorFlow is an ML system that operates at large scale and in heterogeneous environments. It uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. The system maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). The code is available on GitHub.
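To make the dataflow model concrete, here is a minimal sketch using the TF1-style graph API the paper describes; the tensors and shapes are illustrative, not taken from the paper.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()            # build graphs explicitly, TF1-style

# Nodes of the dataflow graph: ops, tensors, and mutable shared state.
x = tf.placeholder(tf.float32, shape=[None, 2])   # input fed at run time
w = tf.Variable(tf.ones([2, 1]))                  # shared, mutable state
y = tf.matmul(x, w)                               # op that reads that state

# The session maps the graph onto the available devices and executes it.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0]]}))
```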

Adversarial Machine Learning at Scale

Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black-box attacks without knowing the target model’s parameters. Adversarial training is the process of explicitly training a model on adversarial examples to make it more robust to attack or to reduce its test error on clean inputs; this paper scales the technique to large models and datasets.
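Below is a minimal sketch of the fast gradient sign method (FGSM), one attack studied in the paper; the Keras model and the epsilon value are assumptions for illustration.

```python
import tensorflow as tf

def fgsm_example(model, x, y, epsilon=0.1):
    """Perturb x along the sign of the loss gradient to fool the model."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))          # loss on the clean input
    grad = tape.gradient(loss, x)
    return x + epsilon * tf.sign(grad)       # small step that increases the loss

# Adversarial training then mixes such examples into each training batch.
```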

Scikit-learn: Machine Learning in Python

Scikit-learn is a Python module integrating a wide range of SOTA machine learning algorithms for medium-scale supervised and unsupervised problems. It focuses on bringing machine learning to non-specialists through a general-purpose, high-level language. The source code and documentation are available on the scikit-learn website.
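Much of the library’s appeal is its uniform fit/predict interface; here is a minimal example on a bundled dataset (the choice of classifier is arbitrary):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)                 # every estimator exposes fit()
print(clf.score(X_test, y_test))          # mean accuracy on held-out data
```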

AutoML-Zero: Evolving Machine Learning Algorithms From Scratch

AutoML has made significant progress in recent times. However, this progress has focused mainly on the architecture of neural networks and has relied on sophisticated, expert-designed layers as building blocks. This paper argues AutoML can go further: AutoML-Zero automatically discovers complete machine learning algorithms using only basic mathematical operations as building blocks.
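As a toy illustration of the idea (not the paper’s regularized-evolution system), the sketch below mutates a short program of basic arithmetic ops until it fits a simple target function:

```python
import random
import numpy as np

# Toy search over programs built from basic math ops, evolving toward y = 2x.
OPS = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}

def run(program, x):
    a, b = x, x                            # two scalar registers
    for op in program:
        a = OPS[op](a, b)                  # each instruction updates register a
    return a

def loss(program, xs, ys):
    return float(np.mean((np.array([run(program, x) for x in xs]) - ys) ** 2))

xs = np.linspace(-1.0, 1.0, 20)
ys = 2.0 * xs
best = [random.choice(list(OPS)) for _ in range(3)]
for _ in range(500):                       # mutate one instruction, keep improvements
    child = list(best)
    child[random.randrange(len(child))] = random.choice(list(OPS))
    if loss(child, xs, ys) <= loss(best, xs, ys):
        best = child
print(best, loss(best, xs, ys))
```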

MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems

MXNet is a multi-language ML library that eases the development of ML algorithms, especially deep neural networks (DNNs). Embedded in the host language, it blends declarative symbolic expressions with imperative tensor computation, and it offers automatic differentiation to derive gradients. MXNet is computation- and memory-efficient, and runs on a variety of heterogeneous systems, ranging from mobile devices to distributed GPU clusters.
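A minimal sketch of the imperative side with automatic differentiation (the values are illustrative):

```python
from mxnet import autograd, nd

x = nd.array([[1.0, 2.0], [3.0, 4.0]])
x.attach_grad()                       # tell MXNet to allocate gradient storage

with autograd.record():               # record ops for reverse-mode autodiff
    y = (x * x).sum()                 # y = sum of squares of x's entries

y.backward()                          # derive gradients: dy/dx = 2x
print(x.grad)
```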

DeepFaceLab: A simple, flexible and extensible face-swapping framework

DeepFaceLab is an open-source deepfake system created by iperov for face swapping, with more than 3,000 forks and 13,000 stars on GitHub. It provides an easy-to-use pipeline for people without a comprehensive understanding of deep learning frameworks or model implementation, while remaining a flexible, loosely coupled structure for those who want to extend the pipeline with other features without writing complicated code. According to the paper, more than 95% of deepfake videos are created with DeepFaceLab. The code is available on GitHub.

Politeness Transfer: A Tag and Generate Approach

The paper introduces the task of converting non-polite sentences into polite sentences while preserving their meaning, and provides a dataset of more than 1.39 million instances automatically labeled for politeness to encourage benchmark evaluations on this new task. On politeness and five other transfer tasks, the proposed tag-and-generate model outperforms SOTA methods on automatic metrics for content preservation, with comparable or better style-transfer accuracy. The model also surpasses existing methods in human evaluations of grammaticality, meaning preservation, and transfer accuracy across all six style-transfer tasks. The data and code are available on GitHub.

Caffe: Convolutional Architecture for Fast Feature Embedding

Caffe provides researchers with a clean and modifiable framework for SOTA deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with MATLAB and Python bindings for training and deploying general-purpose CNNs and other deep models efficiently on commodity architectures. The source code is available on GitHub.
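Deployment typically means loading a prototxt network definition plus trained weights through the Python bindings; the file names below are placeholders, not artifacts from the paper:

```python
import numpy as np
import caffe

caffe.set_mode_cpu()                            # or caffe.set_mode_gpu()
net = caffe.Net("deploy.prototxt",              # network definition (placeholder path)
                "weights.caffemodel",           # trained parameters (placeholder path)
                caffe.TEST)

# Fill the input blob with a preprocessed image and run a forward pass.
image = np.zeros(net.blobs["data"].data.shape, dtype=np.float32)
net.blobs["data"].data[...] = image
out = net.forward()
```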

Well-Read Students Learn Better: On the Importance of Pre-training Compact Models

The paper shows that pre-training remains important for smaller architectures and that fine-tuning pre-trained compact models can be competitive with more elaborate methods proposed in concurrent work. It then transfers task knowledge from large fine-tuned models through standard knowledge distillation, and shows that the resulting pre-trained distillation recipe brings further improvements.
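Here is a minimal sketch of the standard distillation objective the paper builds on, written in PyTorch for illustration; the temperature value is an assumption:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # t**2 keeps gradient magnitudes comparable across temperatures
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)
```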

XGBoost: A Scalable Tree Boosting System

The paper describes XGBoost, a scalable end-to-end tree boosting system used widely by data scientists to achieve SOTA results on many machine learning challenges. The source code is available on GitHub.
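A minimal sketch of the scikit-learn-style XGBoost interface; the dataset and hyperparameters here are arbitrary:

```python
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)               # boosted trees, one added per round
print(model.score(X_test, y_test))        # mean accuracy on held-out data
```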

More trending machine learning research papers on ‘Papers With Code’ can be found here.
