
Noted ML Researcher Exposes His Own Paper At NeurIPS, Sets Healthy Precedent


The robustness of any scientific method comes from its openness to being falsified. Last week, deep learning researcher David Duvenaud of the University of Toronto proved as much by coming clean at NeurIPS, one of the world's biggest platforms for artificial intelligence research.

It is quite common for reviewers to shut the door on research for being too naive, too inefficient, or otherwise flawed. However, calling out one's own work, which the reviewers had initially waved through, is not so common.

Last month, the machine learning community had a healthy dose of iconoclasm in the form of Smerity's (Stephen Merity's) paper on SHA-RNN, in which the Harvard grad tried to expose the drudgery of jargon persisting around NLP in the most jargon-free way possible.

Perils Of ML Hype

At NeurIPS 2019, Duvenaud ripped apart his own work, Neural Ordinary Differential Equations, for its many shortcomings.

Right from the beginning of his talk, Duvenaud made his intentions clear with refreshingly candid slides. Neural ODEs, the work under discussion, was co-authored by Duvenaud and his peers Ricky T. Q. Chen, Yulia Rubanova and Jesse Bettencourt.

The paper got tremendous attention post-release and won a best paper award at the prestigious NeurIPS 2018 conference.

In this paper, the researchers introduced a continuous-time analogue of normalising flows, defining the mapping from latent variables to data using ordinary differential equations (ODEs). With this model, they demonstrated that the likelihood can be computed using relatively cheap trace operations. 
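The bookkeeping behind that claim is the paper's instantaneous change-of-variables result: if the state follows dz/dt = f(z, t), the log-density evolves as d log p(z(t))/dt = -tr(∂f/∂z). Here is a minimal numerical sketch of that idea in plain NumPy; the linear dynamics function and step counts are illustrative choices, not from the paper, picked so the Jacobian trace is exact:

```python
import numpy as np

# Toy dynamics: dz/dt = f(z) = A @ z (linear, so the Jacobian of f is A everywhere).
A = np.array([[-0.5, 0.3],
              [0.0, -0.2]])

def f(z):
    return A @ z

def jacobian_trace(z):
    # For linear dynamics, df/dz = A regardless of z.
    return np.trace(A)

# Euler-integrate z forward from t=0 to t=1 while accumulating the
# change in log-density: d log p(z(t)) / dt = -tr(df/dz).
z = np.array([1.0, -1.0])
delta_logp = 0.0
dt, steps = 0.01, 100
for _ in range(steps):
    delta_logp += -jacobian_trace(z) * dt
    z = z + f(z) * dt

print(delta_logp)  # ≈ -tr(A) * 1.0 = 0.7
```

Because the dynamics are linear, the accumulated change in log-density is exactly -tr(A) times the integration time, which the Euler sum recovers; the point is that only a trace, never a full Jacobian determinant, is needed.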

I got defensive and brushed off queries in the initial stages, calling it a technicality.

Here are a few key highlights from the talk, where Duvenaud clears the air in a lucid way:

(Slide via David Duvenaud's talk at NeurIPS 2019)

  • The primary motivation behind the paper was to impress his co-authors, the developers of the famous autograd package. Autograd is a Python library that uses backpropagation to efficiently compute gradients of functions written in plain NumPy.
  • The baselines weren't tuned, and the claims in the results went unsupported.

This is 100% wrong, bad, and this talk is a kick in our a## and we finally had to update the arXiv version!

  • Work on Augmented Neural ODEs by Dupont et al. pointed out how a neural ODE cannot learn even simple one-dimensional functions like f(x) = -x.
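Dupont et al.'s observation follows from the uniqueness of ODE solutions: two trajectories of the same vector field can never cross, so a one-dimensional ODE flow preserves the ordering of its inputs, while f(x) = -x would reverse it. A tiny numerical illustration, using an arbitrary smooth dynamics function as a stand-in for any learned f:

```python
import numpy as np

def f(z, t):
    # Any Lipschitz dynamics; a learned neural ODE vector field behaves the same way.
    return np.tanh(z) + 0.1 * t

def flow(z0, t_final=1.0, steps=1000):
    # Euler integration of dz/dt = f(z, t) from t = 0 to t_final.
    z, dt = z0, t_final / steps
    for i in range(steps):
        z = z + f(z, i * dt) * dt
    return z

a, b = flow(-1.0), flow(1.0)
# Trajectories cannot cross, so the input ordering -1 < 1 survives the flow:
print(a < b)  # True: no such flow can realize f(x) = -x, which needs a > b
```

Augmented Neural ODEs sidestep this by lifting the state into extra dimensions, where trajectories have room to move around each other.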

Along with these inaccuracies, Duvenaud also discussed in his talk how the research was blown out of proportion post-release, due in part to its cool-sounding name.

He also mentioned how they worked out that cool-sounding name, neural ODEs, in place of their initial working title: Training Infinitesimally Layered Neural Networks By Back Propagating Through Blackbox ODE Solvers!

Yes, that was the title before the work became famous as neural ODEs.

While it's true that the Neural ODEs paper has caused a lot of ambiguity in the scientific community, it's worth mentioning that a lot of good work has been motivated by it, particularly in theoretical circles.

It spawned follow-up papers such as FFJORD and other work on stochastic automatic differentiation. Even so, the deluge of accolades for the original neural ODEs paper washed away any kind of criticism, and the ML community had to wait a whole year until one of the authors decided to bring the fallacies surrounding the work to light.
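For context, FFJORD's central trick is to replace the exact Jacobian trace in the likelihood computation with Hutchinson's stochastic estimator, tr(J) ≈ E[εᵀJε] for random ε with zero mean and identity covariance. A self-contained numerical check of that identity (the matrix below is an arbitrary stand-in for a Jacobian, not anything from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((5, 5))  # stands in for a Jacobian df/dz

# Hutchinson's estimator: tr(J) = E[eps^T J eps] for eps ~ N(0, I).
n_samples = 200_000
eps = rng.standard_normal((n_samples, 5))
# eps[n] @ J @ eps[n] for every sample n, then average:
estimate = np.mean(np.einsum('ni,ij,nj->n', eps, J, eps))

print(np.trace(J), estimate)  # the two agree up to Monte Carlo error
```

The payoff in FFJORD is that εᵀJε needs only one vector-Jacobian product per sample, which reverse-mode autodiff provides cheaply, instead of materializing the full Jacobian.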

Going Forward

The unsupported claims, combined with the overblown hype around neural ODEs, may have rubbed the mathematical community the wrong way, and rightly so.

Machine learning, like other scientific domains, suffers from the burden of speculation and peer-pleasing research circles. The uncanny way in which fashionable nonsense, masked as non-malicious optimism, sometimes diffuses into the field repels outsiders (e.g. physicists or physicians) from active participation and knowledge sharing.

AI as a domain still has a very long way to go, and the only way forward lies in transparency and in researchers' awareness of their own shortcomings.

Despite his work's significance and the accolades it garnered, David Duvenaud was brave enough to confess how overwhelming research in machine learning can be and how easily researchers can be swayed by initial success.

It is safe to say that, with this act, Duvenaud has contributed as much to the ML community as any of his papers. He has set a precedent that could spark a wildfire and burn away the unwanted sophistication that is decelerating the field's advancement.

Watch the full video here:

PS: The story was written using a keyboard.

Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.