
Is Reinforcement Learning Still Relevant?

While there are various practical applications of reinforcement learning, the concept as a whole poses some limitations when used in developing autonomous machine intelligence


Intelligence consists of various aspects like learning, reasoning, and planning. Human beings, for example, have behavioural, social, and general intelligence, which can simply be termed common sense. The dichotomy of whether these abilities are learned or innately present in living beings makes us question whether reinforcement learning (RL) or self-supervised learning (SSL) is the way forward towards artificial general intelligence (AGI). 

Researchers and scientists are split over using reinforcement learning or SSL to develop artificial general intelligence. While Google’s DeepMind has been making great progress using reinforcement learning, Meta AI has been continually pushing for self-supervised or unsupervised learning, with Tesla jumping on the bandwagon as well.

DeepMind’s famous paper ‘Reward is Enough’ claims that intelligence can be achieved by working on the principle of ‘reward maximisation’, which is essentially an extension of reinforcement learning and is, arguably, the closest to natural intelligence.

“If an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent’s behaviour,” said researchers at DeepMind.
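To make the reward-maximisation principle concrete, here is a minimal sketch (not DeepMind’s code) of a tabular Q-learning agent on a hypothetical five-state chain, where only the rightmost state pays a reward. The agent continually adjusts its behaviour to improve its cumulative reward, in the spirit of the quote above:

```python
import random

# Toy reward-maximisation sketch: a tabular Q-learning agent on a
# hypothetical 5-state chain. Only the rightmost state pays a reward,
# so maximising cumulative reward should teach the agent to move right.

N_STATES, ACTIONS = 5, (-1, +1)          # move left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    for _ in range(20):
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate towards the reward plus
        # the discounted value of the best next action
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy moves right in every state: the ability that the
# environment's reward repeatedly demanded.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
```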

Yann LeCun from Meta AI has constantly argued that the trial-and-error method of RL is a risky way to develop intelligence. A baby, for example, does not identify the objects around it by looking at millions of samples of the same object, or by trying dangerous things and learning from them, but by observing, predicting, and interacting with them, even without supervision.

DeepMind says that by understanding mammalian vision and implementing insights from neuroscience in computer vision, we can probably categorise objects and differentiate between them, but such systems are constrained to narrow artificial intelligence: systems designed to solve specific problems rather than to develop general problem-solving abilities.

DeepMind’s David Silver argues that a continual reinforcement learning framework that aims to maximise reward in a cycle “is” enough to produce the attributes of human intelligence, such as perception, language, and memory. 

Recently, OpenAI used reinforcement learning from human feedback (RLHF) to fine-tune GPT-3. The new model, called InstructGPT, is extremely good at following the intent of single-sentence prompts. DeepMind, for its part, has developed groundbreaking models using reinforcement learning, such as AlphaGo and MuZero, alongside AlphaFold.
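At the heart of RLHF is a reward model trained on human preference rankings. Below is a simplified sketch of the pairwise loss commonly used for that step; the function name and scores are hypothetical, and this is not OpenAI’s implementation:

```python
import torch
import torch.nn.functional as F

# Sketch of the reward-model objective behind RLHF: human labellers
# rank pairs of model outputs, and the reward model is trained so the
# preferred ("chosen") response scores higher than the rejected one.
# The fitted reward then drives a policy-gradient fine-tuning stage.

def reward_model_loss(score_chosen: torch.Tensor,
                      score_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style pairwise loss: maximise the log-probability
    # that the chosen response outranks the rejected one.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Hypothetical scores for a batch of three preference pairs
chosen = torch.tensor([1.8, 0.4, 2.1])
rejected = torch.tensor([0.9, 0.7, -0.3])
print(reward_model_loss(chosen, rejected))  # shrinks as the ranking improves
```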

Reinforcement learning pitfalls 

A dog that is fed treats after performing a task remains obedient. This simple example of positive reinforcement makes researchers confident that AI can probably be trained the same way. While still in the development stages, though, reinforcement learning in machines can be quite challenging (a dog, after all, has an innate disposition towards obedience that a machine lacks).

While there are various practical applications of reinforcement learning, the concept as a whole poses some limitations when used in developing autonomous machine intelligence:

  1. It requires a huge amount of data and computation
  2. Noise in data is one of the major problems with this method of learning. Small changes during training can create large differences in test results
  3. A large number of hyperparameters makes the algorithms hard to tune. Many of these hyperparameters shape the reward, which can bias the learned behaviour as well (see the sketch after this list)
  4. Sample inefficiency makes it difficult to train in the real world. It can take weeks to train an agent to walk even in a simulated environment
  5. Unpredictability of simulation-trained agents in the real world
  6. Trial-and-error learning can be very costly and inefficient in the real world
  7. The assumption that the agent operates in a Markov decision process with a finite set of actions
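The sensitivity to reward shaping in point 3 is easy to demonstrate. In the hypothetical two-armed bandit below, the only change between the two runs is the shaping coefficient, yet the agent converges to different behaviour:

```python
import random

# Toy illustration of reward-shaping sensitivity: arm 0 has the higher
# true reward, but arm 1 carries a large "shaping" bonus (a proxy signal
# the designer thought would help). The whole setup is hypothetical.

def train(shaping_beta: float, steps: int = 5000) -> int:
    true_reward = [1.0, 0.6]
    shaping_bonus = [0.0, 1.0]
    value = [0.0, 0.0]
    for _ in range(steps):
        # epsilon-greedy action selection over the two arms
        arm = (random.randrange(2) if random.random() < 0.1
               else max((0, 1), key=lambda a: value[a]))
        r = true_reward[arm] + shaping_beta * shaping_bonus[arm]
        value[arm] += 0.05 * (r - value[arm])   # incremental value estimate
    return max((0, 1), key=lambda a: value[a])

print(train(shaping_beta=0.1))  # 0: prefers the genuinely better arm
print(train(shaping_beta=1.0))  # 1: the shaping term biases the policy
```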

While reinforcement learning delivers decisions by creating a simulation of a system, training an AI model on a labelled dataset is limiting, as the world is not available as a labelled dataset. Reinforcement learning is also part of the training process that continues after the model is deployed and already working.

Amalgamation of SSL & RL 

Researchers agree that installing background knowledge in machines might be the way forward for AGI; however, the concept of “background” knowledge itself is hard to explain. Even when observing animals, it is not completely evident whether the majority of these abilities are learnt with time or are part of an innate mechanism.

Autonomous machine intelligence is the common goal of both these approaches, but with reinforcement learning there is always a human agent driving the working of the machine, while self-supervised learning proposes to learn from observation. Self-supervised learning advocates point to the inefficiency of trial-and-error methods, but uncertainty still remains a major barrier for self-supervised learning.

Sergey Levine from Berkeley AI Research recently proposed combining self-supervised learning with offline reinforcement learning. The idea is to let models understand the world without supervision while reinforcement learning explores a causal understanding of it, expanding the usable dataset almost without limit.
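Below is a minimal sketch of the offline-RL ingredient of that proposal, assuming a small hypothetical log of transitions (this is not Levine’s actual method): the agent learns value estimates purely by replaying recorded data, with no live trial-and-error.

```python
import random

# Offline Q-learning sketch: the agent never touches the environment,
# it only replays a fixed dataset of (state, action, reward, next_state)
# transitions, e.g. logged from earlier interactions. The tiny dataset
# and state/action spaces below are hypothetical.

dataset = [
    (0, 1, 0.0, 1),
    (1, 1, 1.0, 2),
    (1, 0, 0.0, 0),
    (0, 0, 0.0, 0),
]

Q = {(s, a): 0.0 for s in range(3) for a in range(2)}
alpha, gamma = 0.1, 0.9

for _ in range(2000):
    s, a, r, s_next = random.choice(dataset)   # sample from the fixed log
    best_next = max(Q[(s_next, b)] for b in range(2))
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

print(Q)  # values learned entirely from logged data
```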

In a June 2022 paper, Yann LeCun proposed a world-model architecture that uses a “cost module” to measure the energy cost of an action taken by the machine. When reinforcement learning is scaled to larger datasets, reward maximisation also needs further scaling. If the cost module can be implemented alongside the reward mechanism of reinforcement learning, the architecture will be able to produce maximum outcomes for as little “energy” as possible, which seems like a plausible way forward.
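One hedged reading of the cost-module idea (an interpretation, not LeCun’s actual architecture): a world model predicts the outcome of each candidate action, and a scalar cost module selects the action with the lowest energy. Both `world_model` and `cost_module` below are hypothetical stand-ins.

```python
import torch

# Energy-minimising action selection: the "cost module" assigns a scalar
# energy to each predicted outcome, and the machine acts to minimise it.

def world_model(state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
    # Stand-in dynamics: the action simply shifts the state. A real
    # world model would be a learned predictor.
    return state + action

def cost_module(predicted_state: torch.Tensor) -> torch.Tensor:
    # Scalar energy: squared distance from a goal state; lower is better.
    goal = torch.tensor([1.0, 0.0])
    return ((predicted_state - goal) ** 2).sum()

state = torch.tensor([0.0, 0.0])
candidates = [torch.tensor([1.0, 0.0]),
              torch.tensor([0.0, 1.0]),
              torch.tensor([-1.0, 0.0])]

# Pick the candidate action whose predicted outcome has the lowest energy
best = min(candidates, key=lambda a: cost_module(world_model(state, a)).item())
print(best)  # tensor([1., 0.])
```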
