Uber Winds Down Its AI Labs: A Look At Some Of Their Top Work

“Given the necessary cost cuts and the increased focus on core, we have decided to wind down the Incubator and AI Labs to pursue strategic alternatives for Uber Works.”

On Monday, Uber announced that it will be laying off an additional 3,000 people and winding down futuristic projects such as Uber AI. Uber’s AI team has been responsible for significant work over the past couple of years. From open-sourcing toolkits to research on evolutionary algorithms, the team pursued questions that other research labs were not focusing on.

Let’s take a look at some of the key contributions from Uber’s AI team:

Deep Neuroevolution

Researchers at Uber AI Labs invented a new technique to efficiently evolve deep neural networks. They discovered that an extremely simple genetic algorithm (GA) can train deep convolutional networks with over four million parameters to play Atari games from pixels. On many games, it outperformed modern deep reinforcement learning (RL) algorithms and evolution strategies (ES), while also being faster thanks to better parallelisation.
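To make the idea concrete, here is a toy sketch of that kind of truncation-selection GA. It evolves a tiny numpy network on a stand-in fitness function rather than Atari pixel frames; the layer sizes, mutation noise and fitness target are illustrative assumptions, not Uber’s setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed stand-in "task": map 4-D inputs to a 2-D target (instead of Atari frames).
X = rng.normal(size=(128, 4))
Y = np.tanh(X[:, :2])

def init_params():
    # Tiny 2-layer network: 4 -> 8 -> 2, weights only, no biases.
    return [rng.normal(0, 0.1, (4, 8)), rng.normal(0, 0.1, (8, 2))]

def mutate(params, sigma=0.02):
    # Deep-GA style mutation: add Gaussian noise to every weight, no crossover.
    return [w + sigma * rng.normal(size=w.shape) for w in params]

def fitness(params):
    # Higher is better; stands in for an Atari episode return.
    h = np.tanh(X @ params[0])
    return -np.mean((h @ params[1] - Y) ** 2)

pop = [init_params() for _ in range(50)]
for generation in range(20):
    ranked = sorted(pop, key=fitness, reverse=True)
    elites = ranked[:10]                                   # truncation selection
    pop = elites + [mutate(elites[rng.integers(10)]) for _ in range(40)]
    print(generation, round(fitness(ranked[0]), 4))
```

The appeal is that each worker only needs to evaluate fitness, which is why the approach parallelises so well across a cluster.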

Uber AI Labs’ work on neuroevolution offered an interesting alternative approach to have in the machine learning toolbox. In a series of five papers, the researchers introduced new algorithms that combine the optimisation power and scalability of evolution strategies (ES) with methods unique to neuroevolution that promote exploration in reinforcement learning domains via a population of agents incentivised to act differently from one another.

Loss Change Allocation For Neural Network Training

In an attempt to make neural networks more transparent, the machine learning team at Uber introduced a new metric to evaluate the learning routines of a network. They call this loss change allocation (LCA). LCA allocates changes in loss over individual parameters, thereby measuring how much each parameter learns.
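The rough intuition can be illustrated with a first-order approximation: each training step’s change in loss is split across parameters as gradient times parameter update. The sketch below uses a toy linear-regression problem in numpy; the paper itself uses a more careful path-based estimate, so treat this as illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression problem standing in for a real network.
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

def loss_and_grad(w):
    err = X @ w - y
    return 0.5 * np.mean(err ** 2), X.T @ err / len(y)

w = np.zeros(3)
lr = 0.05
lca = np.zeros(3)            # running per-parameter allocation of loss change
initial_loss = loss_and_grad(w)[0]

for step in range(200):
    _, g = loss_and_grad(w)
    delta = -lr * g          # this step's parameter update
    lca += g * delta         # gradient x update: each parameter's share of the loss drop
    w += delta

print("per-parameter LCA :", lca)                 # negative entries "helped" reduce the loss
print("sum of LCA        :", lca.sum())
print("actual loss change:", loss_and_grad(w)[0] - initial_loss)
```

Summed over parameters, the allocations approximately recover the total loss change, which is what lets the metric say how much each parameter (or layer) learned.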

AI-GAs: AI-generating Algorithms

This paper describes an exciting path that ultimately may be more successful at producing general AI. The idea is to create an AI-generating algorithm (AI-GA) that automatically learns how to produce general AI with the help of (1) meta-learning architectures, (2) meta-learning the learning algorithms themselves, and (3) generating effective learning environments. While work has begun on the first two pillars, little has been done on the third. The author, Jeff Clune, argues that either the manual or the AI-GA approach could be the first to lead to general AI, and that both are worthwhile scientific endeavours irrespective of which is the fastest path.

Generative Teaching Networks

Generative Teaching Networks (GTNs) are a meta-learning approach for creating synthetic data, focused mainly on supervised learning. GTNs are deep neural networks that generate both data and training environments on which a learner, such as a newly initialised neural network, trains before being tested on a target task.
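A heavily simplified sketch of that inner/outer-loop structure is below, in PyTorch: a small generator is meta-trained so that a single differentiable SGD step on its synthetic batch helps a freshly initialised learner on a toy regression task. The architecture, learning rates and single inner step are assumptions for illustration, not the paper’s configuration.

```python
import torch

torch.manual_seed(0)

# "Real" target task: y = 3x + 1 with a little noise.
real_x = torch.linspace(-1, 1, 64).unsqueeze(1)
real_y = 3 * real_x + 1 + 0.05 * torch.randn_like(real_x)

# Generator (teacher): maps noise to synthetic (x, y) pairs; it is what gets meta-trained.
gen = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
meta_opt = torch.optim.Adam(gen.parameters(), lr=1e-2)

for outer_step in range(500):
    # Fresh learner each outer step: y_hat = w * x + b, kept as raw tensors so
    # the inner SGD update stays differentiable with respect to the generator.
    w = torch.zeros(1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)

    # Generator produces a synthetic batch.
    z = torch.randn(32, 4)
    synth = gen(z)
    sx, sy = synth[:, :1], synth[:, 1:]

    # One inner SGD step on the synthetic data (create_graph keeps the
    # dependence on the generator so the meta-gradient can flow through it).
    inner_loss = ((w * sx + b - sy) ** 2).mean()
    gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
    w2, b2 = w - 0.5 * gw, b - 0.5 * gb

    # Meta-loss: how well the freshly taught learner does on the real task.
    meta_loss = ((w2 * real_x + b2 - real_y) ** 2).mean()
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

The key design point is that the learner’s weights are thrown away every outer step; only the generator accumulates knowledge about what makes good training data.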

POET (Paired Open-Ended Trailblazer)

With POET, researchers at Uber AI Labs demonstrated an open-ended algorithm that tackles a problem space of two-dimensional landscapes and a solution space of robotic behaviours that aim to traverse them. POET builds a diverse collection of solved landscapes by mutating previously solved landscapes and by transferring, applying and refining previously discovered robotic behaviours on the new ones.
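The sketch below captures the shape of that loop with toy stand-ins: “environments” are one-dimensional difficulty targets, “agents” are scalars optimised by hill climbing, and the mutation, minimal-criterion and transfer steps are drastic simplifications of what POET does with 2-D terrains and ES-trained walkers.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(agent, env):
    # Stand-in for an episode reward: closer to the environment's target is better.
    return -abs(agent - env)

def optimise(agent, env, steps=20, sigma=0.1):
    # Local hill climbing standing in for POET's ES inner optimiser.
    for _ in range(steps):
        cand = agent + sigma * rng.normal()
        if score(cand, env) > score(agent, env):
            agent = cand
    return agent

pairs = [(0.0, 0.0)]   # list of (environment, agent) pairs, starting from a trivial one

for it in range(50):
    # 1. Optimise every agent inside its paired environment.
    pairs = [(env, optimise(agent, env)) for env, agent in pairs]

    # 2. Periodically mutate the environments of solved pairs into harder ones.
    if it % 10 == 0:
        new_envs = [env + abs(rng.normal(0, 0.5))
                    for env, agent in pairs if score(agent, env) > -0.05]
        for ne in new_envs[:2]:
            # 3. Transfer: seed each new environment with the best existing agent.
            best = max((a for _, a in pairs), key=lambda a: score(a, ne))
            pairs.append((ne, best))

print([(round(e, 2), round(score(a, e), 3)) for e, a in pairs])
```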

PLATO Platform

Last year, Uber open-sourced an AI platform known as the Plato Research Dialogue System. Plato integrates with deep learning and Bayesian optimisation frameworks, and it provides a clean and understandable design for practitioners while reducing the need to write code.

Fiber 

Fiber is an open-source, Python-based distributed computing framework for modern computer clusters, built to support large-scale efforts like POET and other projects that require distributed computing.
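Fiber’s API is designed to mirror Python’s built-in multiprocessing module, so existing code needs few changes to scale out to a cluster. A minimal usage sketch, assuming a standard Fiber installation and its multiprocessing-style Pool:

```python
from fiber import Pool

def f(x):
    # A trivially parallel workload; in practice this would be an expensive
    # simulation or policy rollout.
    return x * x

if __name__ == "__main__":
    pool = Pool(processes=4)        # workers can be scheduled across a cluster
    print(pool.map(f, range(10)))
```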

Ludwig

Ludwig is built on top of TensorFlow and allows users to train and test deep learning models without writing code. In this way, Ludwig makes deep learning easier to understand for non-experts and enables faster model-improvement iteration cycles for experienced developers and researchers alike.
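For programmatic use, Ludwig also exposes a small Python API driven by a declarative configuration. The sketch below is illustrative only: the CSV file and column names are hypothetical, and argument names such as the configuration and dataset keywords have shifted across Ludwig releases.

```python
from ludwig.api import LudwigModel

# Declarative model definition: which columns are inputs, which are outputs.
config = {
    "input_features": [{"name": "review_text", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
}

model = LudwigModel(config)
results = model.train(dataset="reviews.csv")       # hypothetical CSV with those columns
predictions = model.predict(dataset="reviews.csv")
```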

Pyro

Pyro is a tool for deep probabilistic modelling that combines the best of modern deep learning and Bayesian modelling. The objective behind Pyro’s development is to accelerate research and applications of these techniques and to make them more accessible to the broader AI community.
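A minimal Pyro sketch, estimating the mean of a Gaussian from toy data with stochastic variational inference; the autoguide import path has moved between Pyro versions, so treat the exact imports as indicative.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam

def model(data):
    # Prior over the unknown mean, plus a likelihood over the observations.
    mu = pyro.sample("mu", dist.Normal(0.0, 10.0))
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(mu, 1.0), obs=data)

data = 3.0 + torch.randn(100)                 # toy observations centred near 3
guide = AutoDiagonalNormal(model)             # variational approximation to the posterior
svi = SVI(model, guide, Adam({"lr": 0.02}), loss=Trace_ELBO())

for step in range(1000):
    svi.step(data)

print(guide.median())                         # posterior point estimate for "mu"
```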

Going Forward

“I had to make this decision because our very future as an essential service for the cities of the world — our being there for millions of people and businesses who rely on us — demands it.” 

-Dara Khosrowshahi, Uber CEO 

However, the whole episode cannot be pinned down to the outbreak of COVID-19 alone. Uber has been part of many controversial events over the past couple of years. Jeff Clune, the co-founder of AI Labs whose work we discussed above, quit Uber back in January, before the pandemic took hold. So, one can argue that management decisions at Uber contributed to these cutbacks, and that COVID-19 was only the last nail in the coffin.

That said, this whole Uber ordeal makes one thing certain: in a crisis, the most vulnerable segment of a company is, unfortunately, its R&D department.
