
FB’s New Python Library Nevergrad Provides A Collection Of Algorithms That Don’t Require Gradient Computation

Nevergrad, an open-source Python 3 toolkit from Facebook, offers developers an extensive collection of algorithms that perform optimization without gradient computation, presented in a standard ask-and-tell Python framework. The platform enables AI researchers, machine learning scientists, and enthusiasts whose work involves derivative-free optimization to implement state-of-the-art algorithms and methods, and to compare their performance in different settings.
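The ask-and-tell pattern mentioned above works in two steps: the optimizer proposes a candidate (ask), and the caller evaluates it and reports the loss back (tell). A minimal pure-Python sketch of the pattern, using plain random search and hypothetical class and variable names rather than Nevergrad's actual classes:

```python
import random


class AskTellOptimizer:
    """Toy random-search optimizer exposing an ask-and-tell interface.

    This only illustrates the interface shape; Nevergrad's optimizers use
    the same ask/tell verbs but far smarter proposal strategies.
    """

    def __init__(self, dim, seed=0):
        self.dim = dim
        self.rng = random.Random(seed)
        self.best_x = None
        self.best_loss = float("inf")

    def ask(self):
        # Propose a candidate point in [-5, 5]^dim to evaluate.
        return [self.rng.uniform(-5, 5) for _ in range(self.dim)]

    def tell(self, x, loss):
        # Receive the evaluated loss; keep the best candidate seen so far.
        if loss < self.best_loss:
            self.best_x, self.best_loss = x, loss

    def recommend(self):
        return self.best_x


def sphere(x):
    # Hypothetical objective to minimize.
    return sum(v * v for v in x)


opt = AskTellOptimizer(dim=2)
for _ in range(200):
    candidate = opt.ask()                   # optimizer proposes a point
    opt.tell(candidate, sphere(candidate))  # caller reports the loss

best = opt.recommend()
```

Because the evaluation happens outside the optimizer, the same loop works whether the loss comes from a cheap formula, a training run, or a physical simulation.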

The library includes a broad range of optimizers, such as FastGA, covariance matrix adaptation (CMA-ES), particle swarm optimisation, and sequential quadratic programming. Many of these algorithms work in the same basic way: multiple points are sampled in the function space, a population of the best points is selected, and new points are proposed around the existing ones in order to improve the current population.
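The sample-select-propose loop just described can be sketched in a few lines of plain Python (a toy evolutionary loop on a hypothetical two-dimensional objective, not Nevergrad's implementation):

```python
import random

rng = random.Random(42)


def loss(point):
    # Hypothetical objective: squared distance to the optimum (1, 1).
    return (point[0] - 1) ** 2 + (point[1] - 1) ** 2


# 1. Sample an initial population of points in the function space.
population = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(20)]

for _ in range(50):
    # 2. Select a population of the best points.
    population.sort(key=loss)
    parents = population[:5]
    # 3. Propose new points around the existing ones (Gaussian perturbation)
    #    to improve the current population; parents are kept (elitism).
    children = [[v + rng.gauss(0, 0.3) for v in p]
                for p in parents for _ in range(3)]
    population = parents + children

best = min(population, key=loss)
```

Real optimizers differ in how the new points are generated (CMA-ES adapts the perturbation's covariance, particle swarm uses velocities, and so on), but the overall loop is the same.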




This tool is a Python 3.6+ library and can be installed with the following command:

pip install nevergrad

One can also install the master branch instead of the latest release with the following command:

pip install git+

Alternatively, after cloning the repository, run the following command from inside the repository folder:

python3 setup.py develop

Solving ML Problems

The gradient-free optimizers in Nevergrad can be used to solve a variety of machine learning problems, such as those discussed below:

  • Noisy problems, such as independent episodes in reinforcement learning
  • Discrete, continuous or mixed problems, such as power systems, or neural-network tasks that require simultaneously choosing the learning rate, weight decay, etc. for each layer
  • Multimodal problems, i.e. those with several minima, such as hyper-parameter tuning of deep learning models for language modelling
  • Separable or rotated problems
  • Partially separable problems, such as architecture search for deep learning
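For the mixed discrete/continuous case, a derivative-free optimizer simply samples both kinds of parameters jointly. A toy random-search sketch with a hypothetical stand-in for the validation loss (the parameter names and the loss shape are illustrative assumptions, not part of Nevergrad):

```python
import math
import random

rng = random.Random(0)


def surrogate_validation_loss(lr, num_layers, weight_decay):
    # Hypothetical stand-in for a real training run: assumed best
    # around lr=1e-2, 3 layers, weight_decay=1e-4.
    return ((math.log10(lr) + 2) ** 2
            + (num_layers - 3) ** 2
            + (math.log10(weight_decay) + 4) ** 2)


best, best_loss = None, float("inf")
for _ in range(300):
    # Sample continuous parameters on a log scale, discrete ones uniformly;
    # no gradients of the loss are ever needed.
    trial = {
        "lr": 10 ** rng.uniform(-5, 0),
        "num_layers": rng.randint(1, 8),
        "weight_decay": 10 ** rng.uniform(-6, -1),
    }
    trial_loss = surrogate_validation_loss(**trial)
    if trial_loss < best_loss:
        best, best_loss = trial, trial_loss
```

Because the search treats the loss as a black box, discrete choices such as `num_layers` can be mixed freely with continuous ones such as `lr`, which is exactly the setting gradient-based methods struggle with.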

Goals Of This Toolkit

The package has four main goals:

  • To provide gradient/derivative-free optimization algorithms, including algorithms able to handle noise
  • To provide tools to instrument any code
  • To provide functions on which to test the optimization algorithms
  • To provide benchmark routines in order to compare algorithms easily

These goals are reflected in the package's main components:

  • Optimization: implements the optimization algorithms
  • Instrumentation: provides tooling to convert a piece of code into a well-defined function to optimize
  • Functions: implements both simple and complex benchmark functions
  • Benchmark: runs experiments comparing the algorithms on benchmark functions
  • Common: a set of tools used throughout the package
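The instrumentation idea can be illustrated in plain Python: wrap arbitrary code so it becomes a well-defined function of a flat parameter vector that any optimizer can drive. This is a toy sketch of the concept; `instrument`, `run_job` and the decoders below are hypothetical names, not Nevergrad's API:

```python
def instrument(func, specs):
    """Wrap `func(**kwargs)` as a function of a flat list of floats in [0, 1].

    `specs` maps each argument name to a decoder taking a float in [0, 1].
    Any code exposed this way becomes a standard vector -> loss function.
    """
    names = list(specs)

    def wrapped(vector):
        assert len(vector) == len(names)
        kwargs = {name: specs[name](v) for name, v in zip(names, vector)}
        return func(**kwargs)

    return wrapped


# Hypothetical piece of code to tune: one discrete and one continuous argument.
def run_job(threads, temperature):
    return abs(threads - 4) + (temperature - 0.25) ** 2


objective = instrument(run_job, {
    "threads": lambda u: 1 + int(u * 7.999),  # decode to an integer in 1..8
    "temperature": lambda u: u,               # continuous in [0, 1]
})

value = objective([0.5, 0.25])  # threads=4, temperature=0.25 -> 0.0
```

The optimizer only ever sees vectors in the unit cube; the decoders localize all knowledge about types and ranges in one place.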


  • The open-sourced Python3 toolkit can be used in machine learning to tune the hyper-parameters such as momentum, weight decay, dropout, layer parameters, etc. for each part of the deep network, or others
  • It can also be used in power grid management, aeronautics, and many other scientific and engineering applications

Similar Toolkits


This open-sourced Python tool for derivative-free optimization targets simulation-based optimization. The optimization algorithms implemented in it rely on a surrogate model of the unknown performance measure. The algorithms are called derivative-free because they do not require derivatives of the performance measure, unlike traditional gradient-based optimization algorithms.
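Surrogate-based derivative-free optimization can be illustrated in one dimension: evaluate the expensive measure at a few points, fit a cheap model (here an exact quadratic through three points) to those samples, and minimize the model analytically to obtain the next candidate. A toy sketch with a hypothetical objective:

```python
def expensive_measure(x):
    # Hypothetical stand-in for an expensive black-box performance measure.
    return (x - 2.0) ** 2 + 1.0


def fit_quadratic(pts):
    # Fit y = a*x^2 + b*x + c exactly through three (x, y) points
    # (Lagrange form); this is our surrogate of the unknown measure.
    (x0, y0), (x1, y1), (x2, y2) = pts
    a = (y0 / ((x0 - x1) * (x0 - x2))
         + y1 / ((x1 - x0) * (x1 - x2))
         + y2 / ((x2 - x0) * (x2 - x1)))
    b = (-y0 * (x1 + x2) / ((x0 - x1) * (x0 - x2))
         - y1 * (x0 + x2) / ((x1 - x0) * (x1 - x2))
         - y2 * (x0 + x1) / ((x2 - x0) * (x2 - x1)))
    return a, b


# Three initial evaluations of the true measure.
samples = [(x, expensive_measure(x)) for x in (-1.0, 0.0, 4.0)]
a, b = fit_quadratic(samples)
# Minimize the surrogate analytically: argmin of a*x^2 + b*x is -b / (2a).
candidate = -b / (2 * a)
```

Real surrogate methods iterate this process, refitting the model as new evaluations arrive, and use richer model classes such as radial basis functions or Gaussian processes.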



Zeroth-Order Optimisation (ZOOpt) provides an efficient, easy-to-use derivative-free solver. The toolbox offers a Python package for single-threaded optimization and a lightweight distributed version, built with the Julia language, for functions described in Python. It mainly focuses on optimization problems in machine learning, addressing high-dimensional, noisy and large-scale problems. It does not rely on the gradient of the objective function but instead learns from samples of the search space.


Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
