
Top 10 ML Papers On Papers With Code

Papers With Code is a self-contained team within Facebook AI Research


Papers With Code is the go-to resource for the latest state-of-the-art (SOTA) machine learning papers, code, and results, for discovery and comparison. The platform hosts 4,995 benchmarks, 2,305 tasks, and 49,190 papers with code.

Besides Papers With Code, other notable resources and tools for machine learning research papers include arXiv Sanity, 42 Papers, Crossminds, and Connected Papers.

Papers With Code is a self-contained team within Facebook AI Research. Its open-source, community-centric approach offers researchers access to papers, frameworks, datasets, libraries, models, benchmarks, etc.

Here, we have rounded up the top 10 machine learning research papers on ‘Papers With Code.’ 

1. TensorFlow: A system for large-scale machine learning

TensorFlow is an ML system that operates at large scale and in heterogeneous environments. It uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. The system maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). The code is available on GitHub.
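To make the dataflow-graph idea concrete, here is a deliberately tiny sketch in plain Python. The names (`Node`, `evaluate`, etc.) are hypothetical and this is not TensorFlow's actual API; it only illustrates the concept of representing computation as a graph of operations that an executor then evaluates.

```python
# Toy illustration of the dataflow-graph idea behind TensorFlow:
# nodes are operations, edges carry values; evaluation walks the graph.
# Hypothetical names -- this is NOT TensorFlow's API.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def const(v):  return Node("const", value=v)
def add(a, b): return Node("add", (a, b))
def mul(a, b): return Node("mul", (a, b))

def evaluate(node):
    """Recursively evaluate a node, mimicking a dataflow executor."""
    if node.op == "const":
        return node.value
    args = [evaluate(n) for n in node.inputs]
    return {"add": lambda a, b: a + b,
            "mul": lambda a, b: a * b}[node.op](*args)

# y = (2 * 3) + 4
y = add(mul(const(2), const(3)), const(4))
print(evaluate(y))  # 10
```

In TensorFlow proper, building the graph separately from executing it is what lets the runtime partition nodes across machines and devices.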

2. Adversarial Machine Learning at Scale

Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black-box attacks without knowing the target model’s parameters. Adversarial training is the process of explicitly training a model on adversarial examples to make it more robust to attack or to reduce its test error on clean inputs; this paper studies how to scale that process up.
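The classic way to craft such an input is the fast gradient sign method (FGSM): perturb the input in the direction that increases the loss. Below is a minimal NumPy sketch on a hand-rolled logistic-regression model; the weights are illustrative, not from a trained network.

```python
import numpy as np

# FGSM-style adversarial example against a toy logistic-regression model.
# Illustrative weights only -- not a trained network.

rng = np.random.default_rng(0)
w = rng.normal(size=4)          # model weights
b = 0.0
x = rng.normal(size=4)          # a "clean" input

def prob(x):
    """Model's predicted probability of the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

y = 1.0                          # true label
# Gradient of the binary cross-entropy loss w.r.t. the input: (p - y) * w
grad_x = (prob(x) - y) * w

eps = 0.25
x_adv = x + eps * np.sign(grad_x)   # step that increases the loss

print(prob(x), prob(x_adv))      # the adversarial input lowers p(y=1)
```

Because the perturbation follows the sign of the loss gradient, the model's confidence in the true label drops even though `x_adv` stays close to `x` in the max-norm sense.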

3. Scikit-learn: Machine Learning in Python

Scikit-learn is a Python module integrating a wide range of SOTA machine learning algorithms for medium-scale supervised and unsupervised problems. It focuses on bringing machine learning to non-specialists using a general-purpose, high-level language. The source code and documentation are available on the scikit-learn website.
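The module's appeal to non-specialists comes largely from its uniform `fit`/`predict`/`score` interface. A minimal sketch of that workflow, using the bundled Iris dataset (assuming scikit-learn is installed):

```python
# A minimal scikit-learn workflow: load data, split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)   # any estimator exposes the same API
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)               # mean accuracy on held-out data
print(f"test accuracy: {acc:.2f}")
```

Swapping `LogisticRegression` for, say, a random forest changes one line — the surrounding code stays identical, which is the design point the paper emphasises.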

4. AutoML-Zero: Evolving Machine Learning Algorithms From Scratch

AutoML has made significant progress in recent times. However, this progress has focused mainly on the architecture of neural networks, where it has relied on sophisticated expert-designed layers as building blocks. The paper shows that AutoML can go further: it is possible to automatically discover complete machine learning algorithms using only basic mathematical operations as building blocks.
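A toy flavour of the idea: search over small programs built from primitive math operations until one fits a target function. The real system evolves full setup/predict/learn algorithms with an evolutionary search; the sketch below uses plain random search and invented op names purely for illustration.

```python
import random

# Toy flavour of AutoML-Zero: search over programs of basic math ops.
# The op set and search strategy here are illustrative, not the paper's.

OPS = {"add1": lambda v: v + 1, "double": lambda v: 2 * v,
       "square": lambda v: v * v, "neg": lambda v: -v}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def loss(program, target=lambda v: (2 * v) ** 2):
    xs = [0.0, 1.0, 2.0, 3.0]
    return sum((run(program, x) - target(x)) ** 2 for x in xs)

random.seed(0)
best, best_loss = None, float("inf")
for _ in range(2000):                       # random program search
    prog = [random.choice(list(OPS)) for _ in range(random.randint(1, 3))]
    l = loss(prog)
    if l < best_loss:
        best, best_loss = prog, l

print(best, best_loss)  # finds a zero-loss program, e.g. ['double', 'square']
```

Even this naive search recovers a program equivalent to `(2*v)**2`; the paper's contribution is making such discovery work for genuinely complex learning algorithms.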

5. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems

MXNet is a multi-language ML library that eases the development of ML algorithms, especially deep neural networks (DNNs). Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation, and offers automatic differentiation to derive gradients. It is computation- and memory-efficient, and runs on heterogeneous systems ranging from mobile devices to distributed GPU clusters.
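Automatic differentiation is the feature that makes gradient-based training practical. MXNet itself records a computation graph and differentiates in reverse mode; the sketch below shows the simpler forward-mode variant with dual numbers, purely to illustrate how derivatives can be propagated mechanically alongside values (the `Dual` class is an invention for this example).

```python
# Forward-mode automatic differentiation with dual numbers.
# Illustrative only -- MXNet uses reverse mode over a recorded graph.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot      # value and derivative
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    def __mul__(self, o):                  # product rule
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)

def f(x):               # f(x) = x^2 + 3x
    return x * x + x * Dual(3.0)

x = Dual(2.0, 1.0)      # seed the derivative: dx/dx = 1
y = f(x)
print(y.val, y.dot)     # f(2) = 10, f'(2) = 2*2 + 3 = 7
```

Every arithmetic operation carries its local derivative along, so no symbolic manipulation or finite differencing is needed — the same principle, run backwards over a graph, is what MXNet's autodiff provides at scale.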

6. DeepFaceLab: A simple, flexible and extensible face-swapping framework

DeepFaceLab is an open-source deepfake system created by iperov for face swapping, with more than 3,000 forks and 13,000 stars on GitHub. It provides an easy-to-use pipeline for people without a comprehensive understanding of deep learning frameworks or model implementation, while remaining a flexible, loosely coupled structure for those who want to extend the pipeline with their own features without writing complicated code. More than 95% of deepfake videos are created with DeepFaceLab. The code is available on GitHub.

7. Politeness Transfer: A Tag and Generate Approach

This paper introduces the task of converting non-polite sentences into polite sentences while preserving their meaning. It provides a dataset of more than 1.39 million instances automatically labelled for politeness to enable benchmark evaluations on this new task. For politeness and five other transfer tasks, the proposed model outperforms SOTA methods on automatic metrics for content preservation, with comparable or better performance on style-transfer accuracy. The model also surpasses existing methods on human evaluations of grammaticality, meaning preservation, and transfer accuracy across all six style-transfer tasks. The data and code are available on GitHub.

8. Caffe: Convolutional Architecture for Fast Feature Embedding

Caffe provides researchers with a clean and modifiable framework for SOTA deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with MATLAB and Python bindings for training and deploying general-purpose CNNs and other deep models efficiently on commodity architectures. The source code is available on GitHub.

9. Well-Read Students Learn Better: On the Importance of Pre-training Compact Models

The paper shows that pre-training is crucial for smaller architectures, and that fine-tuning pre-trained compact models can be competitive with more elaborate methods proposed in concurrent work. It explores pre-training compact models and transferring task knowledge from large fine-tuned models through standard knowledge distillation; the resulting simple strategy of pre-trained distillation brings consistent improvements.
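Standard knowledge distillation, which the paper builds on, trains the student to match the teacher's temperature-softened output distribution. A minimal NumPy sketch of that loss, with illustrative logits (the numbers are invented for the example):

```python
import numpy as np

# Knowledge-distillation loss: cross-entropy of the student's softened
# predictions against the teacher's softened "soft targets".
# Logits and temperature below are illustrative values.

def softmax(z, T=1.0):
    z = z / T                        # temperature scaling
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

teacher_logits = np.array([4.0, 1.0, 0.5])
student_logits = np.array([3.0, 1.5, 0.2])
T = 2.0                              # higher T -> softer distributions

p_teacher = softmax(teacher_logits, T)   # soft targets
p_student = softmax(student_logits, T)

distill_loss = -np.sum(p_teacher * np.log(p_student))
print(f"distillation loss: {distill_loss:.4f}")
```

The temperature exposes the teacher's relative preferences among wrong classes, which is extra signal a compact student cannot get from hard labels alone; the paper's finding is that this helps most when the student is itself pre-trained first.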

10. XGBoost: A Scalable Tree Boosting System

The paper describes a scalable end-to-end tree boosting system called XGBoost, used widely by data scientists to achieve SOTA results on many machine learning challenges. The source code is available on GitHub.
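At its core, tree boosting fits each new tree to the residual errors of the ensemble so far. The NumPy sketch below shows that loop with depth-1 stumps and squared error; XGBoost's contributions — second-order gradients, regularisation, sparsity-aware splits, and a scalable distributed implementation — are all omitted here.

```python
import numpy as np

# Minimal gradient boosting with depth-1 regression stumps.
# Only the core boosting loop -- none of XGBoost's actual machinery.

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=200)
y = np.sin(X) + rng.normal(scale=0.1, size=200)   # noisy 1-D target

def fit_stump(X, r):
    """Best single-threshold split on the current residuals r."""
    best = None
    for t in np.unique(X):
        left, right = r[X <= t], r[X > t]
        if len(left) == 0 or len(right) == 0:
            continue
        lm, rm = left.mean(), right.mean()
        err = ((left - lm) ** 2).sum() + ((right - rm) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: np.where(x <= t, lm, rm)

pred = np.zeros_like(y)
lr = 0.3                             # shrinkage (learning rate)
for _ in range(50):                  # each round fits the current residuals
    stump = fit_stump(X, y - pred)
    pred += lr * stump(X)

print("MSE:", np.mean((y - pred) ** 2))
```

Fifty shrunken stumps drive the training error close to the noise floor; scaling this idea to deep trees, huge datasets, and clusters is what the XGBoost system engineering delivers.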

More trending machine learning research papers can be found on Papers With Code.


Amit Raja Naik

Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.