Hands-on Guide To Creating RL Agents Using OpenAI Gym Retro


The goal of any reinforcement learning agent is to maximize the cumulative reward it collects in a given environment. The learner is not told which actions to take; it must discover which actions yield the most reward by trying them. To develop such an agent, it is essential to expose it to a diverse set of environments so that it can develop problem-solving skills that transfer across tasks.
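The trial-and-error idea above can be sketched with a toy multi-armed bandit: the agent is never told which arm pays best, but discovers it by sampling arms and tracking average rewards. The environment, reward means, and epsilon value here are illustrative assumptions, not part of any Gym API.

```python
import random

# Illustrative three-armed bandit; these hidden mean rewards are NOT known to the agent.
TRUE_MEANS = [0.2, 0.5, 0.8]

def pull(action):
    """Return a noisy reward for the chosen arm (toy environment, not a Gym API)."""
    return TRUE_MEANS[action] + random.gauss(0, 0.1)

def run_bandit(steps=5000, epsilon=0.1, seed=0):
    random.seed(seed)
    counts = [0, 0, 0]
    values = [0.0, 0.0, 0.0]  # running average reward per arm
    for _ in range(steps):
        if random.random() < epsilon:
            a = random.randrange(3)                     # explore a random arm
        else:
            a = max(range(3), key=lambda i: values[i])  # exploit best estimate so far
        r = pull(a)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]        # incremental mean update
    return values

print(run_bandit())  # the estimate for arm 2 should approach 0.8
```

After enough steps the agent's estimates converge to the true means, so it ends up pulling the best arm almost every time it exploits.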

The RL research community is keenly focused on using games as environments to train agents on a diverse set of tasks. Last year, DeepMind released OpenSpiel, a collection of environments intended to promote multi-agent RL research across different games, with an emphasis on learning rather than competing.

Since any game is learned through experience, games provide a perfect training ground (environment) for agents, while complex games also let agents practise solving hard tasks that resemble real-life situations.



In this article, we will familiarise ourselves with the two most popular reinforcement learning software platforms for developing and comparing RL agents for games.

Overview Of OpenAI – Universe

Universe was released in 2016 and lets an agent use a computer the way a human does: through a virtual keyboard and mouse. Universe supports more than 1,000 different tasks, including games, browser tasks, and Flash games.

Every environment in Universe is packaged as a Docker container that hosts two servers: a VNC (Virtual Network Computing) server to send the mouse and keyboard events, and a WebSocket server to send the reward signals.

For a complete picture of Universe's infrastructure, please refer to the OpenAI Universe documentation.

Hands-on With OpenAI – Gym Retro

Gym Retro turns classic video games into reinforcement learning environments. It comes with 1,000+ games and supports adding various emulators, which makes adding a new game as an environment fairly easy.

As of now, Gym Retro supports the following emulators:

  • Sega
  • Nintendo
  • NEC
  • Atari

However, it doesn’t come with ROMs (game files) preloaded, so we need to download them ourselves.

For more details on Gym Retro, refer to the OpenAI Gym Retro documentation.

Universe and Gym Retro are easy to program: it takes fewer than 10 lines of code to create an agent and test it on our choice of games (environments). The predefined environments can simply be plugged in before creating an agent.

The code snippet below creates an agent that can play the Airstriker-Genesis game.

import retro

env = retro.make(game='Airstriker-Genesis', record='.')
env.reset()
done = False
while not done:
    env.render()
    obs, rew, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()

This agent simply samples random moves from the action space, with no prior training. Its gameplay is shown in the video below.

A step up from the random agent is a rational agent that presses the buttons that do well in the game.

Such a rational agent amounts to a brute-force approach to the game in question, and it does well at lots of retro games. We used a brute-force agent; its gameplay is shown in the video below.
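The brute-force idea exploits the fact that these emulated games are deterministic: replay the best action sequence found so far, then improvise past a random cut point, and keep whichever rollout scores highest. Here is a minimal sketch of that loop on a hypothetical deterministic toy game, not the Gym Retro API.

```python
import random

# Hypothetical deterministic toy game (not the Gym Retro API): each episode lasts
# 10 steps and pressing "button" 1 scores a point, so the best possible score is 10.
class ToyEnv:
    LENGTH = 10
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        return reward, self.t >= self.LENGTH  # (reward, done)

def rollout(env, prefix):
    """Replay a fixed prefix of actions, then improvise randomly until the episode ends."""
    env.reset()
    actions, total, done, i = [], 0.0, False, 0
    while not done:
        a = prefix[i] if i < len(prefix) else random.randrange(2)
        actions.append(a)
        r, done = env.step(a)
        total += r
        i += 1
    return actions, total

def brute(episodes=200, seed=0):
    random.seed(seed)
    env = ToyEnv()
    best_actions, best_reward = [], float('-inf')
    for _ in range(episodes):
        # Keep a random-length prefix of the best sequence found so far,
        # improvise the rest, and remember the rollout if it scored higher.
        cut = random.randrange(len(best_actions) + 1)
        actions, total = rollout(env, best_actions[:cut])
        if total > best_reward:
            best_actions, best_reward = actions, total
    return best_reward

print(brute())
```

Gym Retro ships this idea as a ready-made example (`retro.examples.brute`), which you can run against a real game once its ROM is imported.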

Both platforms are based on OpenAI Gym, a toolkit for developing and comparing RL algorithms that was released in April 2016.

As OpenAI has deprecated Universe, let’s focus on Gym Retro and understand some of the core features it has to offer.

Gym Retro provides a Python API, which makes it easy to interact with and create an environment of choice.

Install Gym Retro

pip3 install gym-retro

Create Gym Environment

import retro

env = retro.make(game='Airstriker-Genesis', record='.')

That’s it: you have created the learning ground for the agent.

Let’s use a random agent to play the Airstriker.


done = False
while not done:
    env.render()
    obs, rew, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()

Congratulations, you have created an agent using OpenAI Gym Retro which can now play the game.

Importing ROMs

Game ROMs can be imported and added as environments using the following command:

python3 -m retro.import /path/to/your/ROMs/directory/

Multiplayer Support

env = retro.make(game='Pong-Atari2600', players=2)
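According to the Gym Retro documentation, with players=2 the action passed to env.step is both players' button arrays concatenated, and the reward comes back with one entry per player. The sketch below illustrates that action shape; the button count of 8 is an illustrative assumption (the real size depends on the console), and no ROM is involved.

```python
import numpy as np

N_BUTTONS = 8  # illustrative per-player button count; the real size depends on the console

def combine_actions(p1_buttons, p2_buttons):
    """Stack both players' button arrays the way a players=2 env expects its action."""
    return np.concatenate([p1_buttons, p2_buttons])

p1 = np.zeros(N_BUTTONS, dtype=np.uint8)  # player 1 presses nothing
p2 = np.ones(N_BUTTONS, dtype=np.uint8)   # player 2 presses every button
combined = combine_actions(p1, p2)        # shape (16,): player 1's half, then player 2's
```

In a real two-player environment you would pass this combined array to env.step and read one reward per player from the returned rew.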

Both libraries offer 1,000+ game environments for training and comparing RL algorithms. Since most RL practitioners would otherwise have to design environments themselves to test their agents, Gym Retro saves them from reinventing the wheel.

Even though real-life situations can become very complex, games model them closely enough to practise and learn reinforcement learning algorithms on. Until now, the aspiring RL community has lacked platforms for quickly prototyping and testing algorithms of the kind the machine learning and deep learning communities already enjoy.

OpenAI’s initiatives in this space will surely help the RL community evolve and benchmark state-of-the-art algorithms.


Copyright Analytics India Magazine Pvt Ltd
