Papers With Code is a self-contained team within Facebook AI Research. Its open-source, community-centric approach gives researchers free access to papers, frameworks, datasets, libraries, models and benchmarks.
Here, we have rounded up the top 10 machine learning research papers on ‘Papers With Code.’
TensorFlow is an ML system that operates at a large scale and in heterogeneous environments. It uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. The system maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs and custom-designed ASICs known as Tensor Processing Units (TPUs). The code is available on GitHub.
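The dataflow-graph idea can be illustrated with a toy sketch (this is not TensorFlow's real API): computation is first declared as a graph of operation nodes, and only then executed.

```python
# Toy sketch of a dataflow graph (illustrative, not TensorFlow's API):
# the computation is declared as a graph first, then executed.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def constant(v):
    return Node("const", value=v)

def add(a, b):
    return Node("add", (a, b))

def mul(a, b):
    return Node("mul", (a, b))

def run(node):
    """Evaluate a node by recursively evaluating its inputs."""
    if node.op == "const":
        return node.value
    args = [run(n) for n in node.inputs]
    return {"add": lambda x, y: x + y,
            "mul": lambda x, y: x * y}[node.op](*args)

# Build the graph (x * y) + 3, then execute it.
x, y = constant(2.0), constant(4.0)
graph = add(mul(x, y), constant(3.0))
print(run(graph))  # 11.0
```

Separating graph construction from execution is what lets a real system place different nodes on different devices or machines before running them.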
Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black-box attacks without knowing the target model's parameters. Adversarial training is the process of explicitly training a model on adversarial examples, to make it more robust to attack or to reduce its test error on clean inputs.
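A common way to craft such inputs is the fast gradient sign method (FGSM). Below is a hedged sketch on a logistic-regression model, where the loss gradient with respect to the input has a closed form; all names and numbers are illustrative, not from the paper.

```python
import math

# Illustrative FGSM sketch on a logistic-regression model:
# perturb the input in the direction that increases the loss.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, w, b, y, eps):
    """Perturb input x to increase the log-loss of true label y in {0, 1}."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # For cross-entropy loss, d(loss)/d(x_i) = (p - y) * w_i.
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1               # correctly classified: p > 0.5
x_adv = fgsm(x, w, b, y, eps=0.5)
# The perturbation moves against the weight vector, lowering the
# model's confidence in the true label.
```

Adversarial training would then feed such perturbed inputs, with their original labels, back into the training loop.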
Scikit-learn is a Python module integrating a wide range of SOTA machine learning algorithms for medium-scale supervised and unsupervised problems. It focuses on bringing machine learning to non-specialists via a general-purpose, high-level language. The source code and documentation are available on the scikit-learn website.
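Much of that accessibility comes from scikit-learn's uniform fit/predict estimator convention. The pure-Python sketch below mimics that convention with a tiny nearest-centroid classifier; the real library ships this and many more as ready-made, optimized classes.

```python
# Pure-Python sketch of scikit-learn's fit/predict estimator
# convention (illustrative; the real library's classes are used
# the same way but are far more capable).
class NearestCentroid:
    def fit(self, X, y):
        """Learn one centroid per class from training data."""
        self.centroids_ = {}
        for label in set(y):
            rows = [x for x, yi in zip(X, y) if yi == label]
            self.centroids_[label] = [sum(col) / len(rows)
                                      for col in zip(*rows)]
        return self

    def predict(self, X):
        """Assign each point to the class of the nearest centroid."""
        def dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return [min(self.centroids_, key=lambda c: dist(x, self.centroids_[c]))
                for x in X]

clf = NearestCentroid().fit([[0, 0], [1, 1], [9, 9], [10, 10]], [0, 0, 1, 1])
print(clf.predict([[0.5, 0.5], [9.5, 9.5]]))  # [0, 1]
```

Because every estimator shares this interface, swapping one algorithm for another typically means changing a single line.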
AutoML has made significant progress in recent years. However, this progress has focused mainly on the architecture of neural networks, where it has relied on sophisticated expert-designed layers as building blocks. AutoML is expected to go further, automatically discovering complete machine learning algorithms using only basic mathematical operations as building blocks.
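A toy sketch of that idea: search over tiny programs built only from basic math operations, keeping whichever best fits the data. Real systems use evolutionary search at vastly larger scale; everything below (the register layout, the operation set, the target function) is illustrative.

```python
import random

# Toy program search from basic math operations (illustrative).
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def run_program(program, x):
    """Registers r0..r2: r0 holds the input, r1 starts at 1, r2 is output."""
    r = [x, 1.0, 0.0]
    for op, i, j, k in program:
        r[k] = OPS[op](r[i], r[j])
    return r[2]

def loss(program, xs, ys):
    return sum((run_program(program, x) - y) ** 2 for x, y in zip(xs, ys))

random.seed(0)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x + 1 for x in xs]          # target: f(x) = x^2 + 1

best, best_loss = None, float("inf")
for _ in range(5000):
    # A program is three random instructions (op, in1, in2, out);
    # outputs never overwrite r0, the input register.
    program = [(random.choice(list(OPS)),
                random.randrange(3), random.randrange(3),
                random.randrange(1, 3))
               for _ in range(3)]
    l = loss(program, xs, ys)
    if l < best_loss:
        best, best_loss = program, l
# The search typically recovers a low-loss program, e.g.
# r2 = r0 * r0, optionally followed by r2 = r2 + r1.
```

The point is that no neural-network layer or other expert-designed block appears anywhere in the search space, only arithmetic.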
MXNet is a multi-language ML library designed to ease the development of ML algorithms, especially deep neural networks (DNNs). Embedded in the host language, it blends declarative symbolic expressions with imperative tensor computation, and offers automatic differentiation to derive gradients. It is computation- and memory-efficient, and runs on heterogeneous systems ranging from mobile devices to distributed GPU clusters.
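The contrast between the two styles can be sketched in plain Python (this is illustrative, not MXNet's API): imperative code computes each value immediately, while declarative code builds a symbolic expression first and evaluates it later.

```python
# Imperative style: each line runs as it is written.
a, b = 3.0, 4.0
c = a * b + 1.0                        # computed right now -> 13.0

# Declarative style: build the expression, then bind values and evaluate.
class Sym:
    """A deferred expression, evaluated only when given an environment."""
    def __init__(self, fn):
        self.fn = fn
    def __mul__(self, other):
        return Sym(lambda env: self.fn(env) * other.fn(env))
    def __add__(self, other):
        return Sym(lambda env: self.fn(env) + other.fn(env))

def var(name):
    return Sym(lambda env: env[name])

def const(v):
    return Sym(lambda env: v)

expr = var("a") * var("b") + const(1.0)   # nothing computed yet
result = expr.fn({"a": 3.0, "b": 4.0})    # evaluated on demand -> 13.0
```

Deferred expressions give the library a whole-graph view it can optimize and schedule across devices, while the imperative style keeps debugging and control flow natural; blending the two is the design choice the paper describes.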
DeepFaceLab is an open-source deepfake system created by iperov for face swapping, with more than 3,000 forks and 13,000 stars on GitHub. It provides an easy-to-use pipeline for people without a comprehensive understanding of deep learning frameworks or model implementation, while remaining a flexible, loosely coupled structure for those who want to extend the pipeline with other features without writing complicated code. More than 95% of deepfake videos are created with DeepFaceLab. The code is available on GitHub.
This paper introduces the task of converting non-polite sentences to polite sentences while preserving the meaning. It provides a dataset of more than 1.39 million instances automatically labelled for politeness to encourage benchmark evaluations on this new task. For politeness and five other transfer tasks, the proposed model outperforms SOTA methods on automatic metrics for content preservation, with comparable or better performance on style transfer accuracy. The model also surpasses existing methods in human evaluations of grammaticality, meaning preservation and transfer accuracy across all six style transfer tasks. The data and code are available on GitHub.
Caffe provides researchers with a clean and modifiable framework for SOTA deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with MATLAB and Python bindings for training and deploying general-purpose CNNs and other deep models efficiently on commodity architectures. The source code is available on GitHub.
The paper shows that pre-training is crucial for smaller architectures, and that fine-tuning pre-trained compact models can be competitive with more elaborate methods proposed in concurrent work. It explores pre-training compact models and transferring task knowledge from large fine-tuned models through standard knowledge distillation, and finds that combining the two brings further improvements.
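Standard knowledge distillation trains the small student on the large teacher's softened class probabilities rather than hard labels. A hedged sketch of the core loss follows; the function names, logits and temperature value are illustrative, not from the paper.

```python
import math

# Illustrative sketch of the standard knowledge-distillation loss:
# cross-entropy between teacher and student softened distributions.
def softmax(logits, T=1.0):
    """Softmax with temperature T; larger T gives softer probabilities."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's soft labels."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))

teacher = [4.0, 1.0, 0.1]
good_student = [3.8, 1.2, 0.0]       # tracks the teacher closely
bad_student = [0.0, 3.0, 1.0]        # disagrees with the teacher
# The loss is lower for the student whose distribution tracks the teacher's.
```

In training, this term is minimized over the student's parameters, often mixed with the ordinary cross-entropy on the true labels.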
The paper describes a scalable end-to-end tree boosting system called XGBoost, used widely by data scientists to achieve SOTA results on many machine learning challenges. The source code is available on GitHub.
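The gradient-boosting idea behind XGBoost can be sketched in a few lines: each new weak learner is fitted to the residuals of the current ensemble. The toy below uses one-feature "stumps" that predict a constant on each side of a threshold; the real system grows regularized trees with many further refinements, and all names here are illustrative.

```python
# Toy gradient boosting on residuals (illustrative of the idea
# behind XGBoost, not its implementation).
def fit_stump(xs, residuals):
    """Pick the threshold minimizing squared error; predict side means."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds=20, lr=0.5):
    """Add one stump per round, each fitted to the current residuals."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

model = boost([0, 1, 2, 3, 4, 5], [0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
# Predictions approach the step in the targets as rounds accumulate.
```

XGBoost's contribution is making this loop scale: sparsity-aware split finding, regularized objectives and an end-to-end distributed system on top of the same additive-residual idea.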