
Ambitions to become GitHub for machine learning? Hugging Face adds Decision Transformer to its library

Over the last few years, the company has open-sourced a number of libraries and tools, especially in the NLP space.


Hugging Face is one of the most promising companies in the world. It has set out to achieve a unique feat: to become the GitHub for machine learning. Over the last few years, the company has open-sourced a number of libraries and tools, especially in the NLP space. Now, the company has integrated Decision Transformer, an offline reinforcement learning method, into the transformers library and the Hugging Face Hub.
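In practical terms, the integration means a Decision Transformer can be loaded like any other transformers model. A minimal sketch, assuming one of the Gym checkpoints published on the Hub alongside the integration (the repository name below is an assumption, not something stated in the article):

```python
from transformers import DecisionTransformerModel

# Load a pretrained Decision Transformer checkpoint from the Hugging Face Hub.
# The repository name is an illustrative assumption; swap in any
# Decision Transformer checkpoint available on the Hub.
model = DecisionTransformerModel.from_pretrained(
    "edbeeching/decision-transformer-gym-hopper-medium"
)
model.eval()  # inference mode for action prediction
```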

What are Decision Transformers?

Decision Transformers were first introduced by Lili Chen and colleagues in the paper ‘Decision Transformer: Reinforcement Learning via Sequence Modeling’. The paper presents a framework that abstracts reinforcement learning as a sequence modelling problem. Unlike previous approaches, a Decision Transformer outputs optimal actions by leveraging a causally masked Transformer: by conditioning an autoregressive model on the desired return, past states, and actions, it can generate future actions that achieve that return. The authors concluded that despite its simple design, the model matches, and even exceeds, the performance of state-of-the-art model-free offline reinforcement learning baselines on Atari, OpenAI Gym, and Key-to-Door tasks.

Decision Transformer architecture

The idea of using a sequence modelling algorithm is that instead of training a policy with reinforcement learning methods that suggest actions to maximise the return, Decision Transformers generate future actions based on a set of desired parameters. This marks a shift in the reinforcement learning paradigm, since generative trajectory modelling replaces conventional reinforcement learning algorithms. The important steps involved are: feeding the last K timesteps into the Decision Transformer as three kinds of inputs (return-to-go, state, action); embedding the tokens with a linear layer if the state is a vector, or with a CNN encoder if it is a frame; and processing the inputs with a GPT-2 model that predicts future actions through autoregressive modelling.
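A minimal sketch of that interface, using the DecisionTransformerModel class that transformers now ships. The dimensions are illustrative, and random tensors stand in for a real logged trajectory of (return-to-go, state, action) tokens:

```python
import torch
from transformers import DecisionTransformerConfig, DecisionTransformerModel

# Toy configuration; state_dim and act_dim are illustrative placeholders.
config = DecisionTransformerConfig(state_dim=11, act_dim=3)
model = DecisionTransformerModel(config)
model.eval()

batch, K = 1, 20  # condition on the last K timesteps of context

# Random tensors stand in for a real trajectory.
states = torch.randn(batch, K, config.state_dim)
actions = torch.randn(batch, K, config.act_dim)
rewards = torch.randn(batch, K, 1)
returns_to_go = torch.randn(batch, K, 1)
timesteps = torch.arange(K).reshape(batch, K)  # positional timestep indices
attention_mask = torch.ones(batch, K)          # all K steps are valid

with torch.no_grad():
    out = model(
        states=states,
        actions=actions,
        rewards=rewards,
        returns_to_go=returns_to_go,
        timesteps=timesteps,
        attention_mask=attention_mask,
    )

# The autoregressive prediction for the action at the most recent timestep.
next_action = out.action_preds[0, -1]
```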

Offline reinforcement learning

Reinforcement learning is a framework for building decision-making agents that learn optimal behaviour by interacting with an environment through trial and error. The agent’s ultimate goal is to maximise the cumulative reward, called the return. Reinforcement learning rests on the reward hypothesis: all goals can be described as the maximisation of the expected cumulative reward. Most reinforcement learning techniques are geared toward the online setting, where the agent interacts with the environment and gathers information using the current policy and exploration schemes to find higher-reward areas. The drawback of this method is that the agent must either be trained directly in the real world or have a simulator. If a simulator is not available, one has to build it, which is a very complex process. Simulators may also have flaws that an agent can learn to exploit, gaining reward in ways that would not carry over to the real task.
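For concreteness, the return mentioned above is conventionally written as a discounted sum of future rewards; the discount factor here is the standard RL convention, not anything specific to this article:

```latex
% Return from timestep t: discounted sum of future rewards, \gamma \in [0, 1]
G_t = \sum_{k=0}^{\infty} \gamma^{k} \, r_{t+k+1}
```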


This problem does not arise in offline reinforcement learning. Here, the agent uses only data collected from other agents or human demonstrations, without ever interacting with the environment. Offline reinforcement learning thus learns skills solely from previously collected datasets and provides a way to make use of data from sources like human demonstrations, prior experiments, and domain-specific solutions.
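As a sketch of how such previously collected data can be pulled in with the datasets library. The repository and configuration names below are assumptions based on the replay datasets published on the Hub, not something stated in the article:

```python
from datasets import load_dataset

# Offline trajectories logged by another agent, fetched from the Hugging Face Hub.
# Repository and config names are illustrative assumptions.
data = load_dataset("edbeeching/decision_transformer_gym_replay", "halfcheetah-expert-v2")

# Each record carries logged observations, actions, rewards, and done flags,
# from which returns-to-go can be computed for Decision Transformer training.
print(data["train"][0].keys())
```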

GitHub for machine learning

Hugging Face’s startup journey has been nothing short of phenomenal. The company, which started out as a chatbot, has gained massive attention from the industry in a very short period; big companies like Apple, Monzo, and Bing use its libraries in production. Hugging Face’s transformers library is backed by PyTorch and TensorFlow, and it offers thousands of pretrained models for tasks like text classification, summarisation, and information retrieval.

In September last year, the company released Datasets, a community library for contemporary NLP, which contains 650 unique datasets and has more than 250 contributors. With Datasets, the company aims to standardise the end-user interface, versioning, and documentation. This sits well with its larger vision of democratising AI: extending the benefits of emerging technologies, otherwise concentrated in a few powerful hands, to smaller players.


Shraddha Goled

I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.