
How A Retro Video Game Ended Up As An Ultimate Challenge For AI


Artificial intelligence has been deployed to solve games for decades, and games make brilliant testbeds for reinforcement learning. For instance, DQN models have allowed AI to outperform humans at games such as the once widely popular Flappy Bird, and similar testing has been carried out on classics such as chess and the mobile Snake game.

Given AI’s prowess in cracking games through reinforcement learning, researchers at Facebook’s AI wing have decided to take on what is considered among the world’s most demanding games: the immensely complicated NetHack.

One tough game


NetHack first made its debut in 1987 and has deceptively simple visuals. It is a turn-based ‘dungeon crawler’ adventure game with ASCII graphics, written primarily in C. Players descend through more than fifty dungeon levels to retrieve a magical amulet, using different tools (and fighting monsters) along the way. The game looks simple enough, with a retro vibe and symbols like @ for the player, g for a goblin, $ for gold, and lines and dots for the dungeon’s layout. However, NetHack has been in active development since it arrived in the late 80s, with a changing team of developers expanding its many facets and hand-writing code to cover an enormous range of player choices.

This, ultimately, made NetHack the complicated but fascinating challenge it is today. The game is open-ended: each time an individual plays it, the game starts from scratch in an entirely new, procedurally generated world. The challenges in NetHack range from random mazes to rooms filled with monsters or hazardous traps. Overcoming such difficulties, along with the variety of ways in which players can interact with objects and creatures in the game, requires actual planning and relying on instincts picked up from previous runs, and at times even external sources such as the NetHack Wiki and online forum discussions.

The complicated, procedurally generated nature of NetHack has prompted many to try developing machine learning systems that can crack the game. Bots have been built to play NetHack using models ranging from neural networks to decision trees. Such testing of AI, especially of reinforcement learning methods, could help push the generalisation limits of current state-of-the-art AI systems. A prominent player encouraging the testing of AI on NetHack is Facebook.

One interesting competition

The Conference on Neural Information Processing Systems (NeurIPS) is one of the world’s biggest AI conferences. At NeurIPS 2020, Facebook AI open-sourced the NetHack Learning Environment (NLE). NLE is based on NetHack 3.6.6 and is designed to give a reinforcement learning interface to NetHack. Through NLE, Facebook said it wished to establish the game as one of the next big challenges for researchers in machine learning and decision making. The NLE comprises three components: a Python interface to NetHack via the OpenAI Gym API, a suite of benchmark tasks to measure agents’ progress, and a baseline agent.
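
Interacting with NLE follows the standard Gym loop of resetting an environment and stepping through it one action at a time. The snippet below is a minimal sketch, assuming the open-sourced `nle` Python package, its "NetHackScore-v0" benchmark task and the older Gym-style API; exact task ids and return values may differ across versions.

```python
# Minimal sketch of the NLE's Gym interface (assumes the `nle` package
# and the "NetHackScore-v0" benchmark task; details may vary by version).
import gym
import nle  # importing nle registers the NetHack environments with Gym

env = gym.make("NetHackScore-v0")
obs = env.reset()  # every reset generates a fresh dungeon
obs, reward, done, info = env.step(env.action_space.sample())  # one random action
env.render()  # prints the ASCII view of the dungeon
```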

For NeurIPS 2021, Facebook has launched the NeurIPS 2021 NetHack Challenge. It invites researchers to design and train AI systems that can either reliably beat the game or, in what many consider more likely, achieve as high a score as possible. The competition, conducted in partnership with AICrowd, will run from early June to October 15th 2021, and results will be announced at NeurIPS in December this year.


The challenge requires participants to create agents, by any means they like (methods outside machine learning are also welcome), that can play a full game of NetHack. Contestants train on their own hardware, but teams will be evaluated in a controlled environment where each agent plays several games, each with a randomly generated character role and fantasy race. For a given set of evaluation episodes per agent, the average number of episodes in which the agent completed the game will be computed, along with the median in-game and end-of-episode scores (a minimal sketch of this aggregation follows the track list below). Finally, there will be three competition tracks, and contestants will automatically be ranked in any track they qualify for. The tracks are:

  1. Best Overall Agent: all submissions are eligible for this track.
  2. Best Agent Not Using a Neural Network
  3. Best Agent From An Academic/Independent Team: for the best-performing agents from teams led by non-industry affiliated researchers.
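
The scoring described above amounts to a simple aggregation over an agent’s evaluation episodes. Below is a minimal sketch of that aggregation with made-up episode records; the field names and data format are hypothetical, not the competition’s actual evaluation pipeline.

```python
# Hypothetical aggregation of evaluation episodes into the metrics
# described above: how often the agent completed the game, plus the
# median in-game and end-of-episode scores. Field names are illustrative.
from statistics import median

episodes = [
    {"completed": False, "ingame_score": 812,  "final_score": 900},
    {"completed": False, "ingame_score": 1540, "final_score": 1610},
    {"completed": True,  "ingame_score": 9034, "final_score": 9500},
]

completion_rate = sum(e["completed"] for e in episodes) / len(episodes)
median_ingame = median(e["ingame_score"] for e in episodes)
median_final = median(e["final_score"] for e in episodes)

print(f"completed {completion_rate:.0%} of episodes, "
      f"median in-game score {median_ingame}, median final score {median_final}")
```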

Top-performing teams in each track will be invited to submit videos describing their methods for NeurIPS 2021 and will have the chance to participate in writing a post-competition report.

What could this mean for AI?

Aram Doyee, an AI researcher, was working on an AI-based computer vision platform to help blind people carry out their everyday tasks and navigate their environment without running into obstacles. He decided to test his tools on simulations instead of real-world data, since simulators can generate large amounts of data without the cost and time that collecting high-quality, real-world data requires. Doyee could not find a suitable simulator for his problem until he stumbled upon GTA-V, a popular open-world action-adventure game. The game gave him a realistic simulation of an urban environment and allowed him to extract valuable information: he managed to pull RGB frames from the game, along with instance segmentation, depth maps and optical flow.

Video games thus let developers extract realistic-looking datasets while serving as testbeds for state-of-the-art ML algorithms and optimisation techniques. RL itself gained a great deal of popularity when AlphaGo, an AI developed by DeepMind, beat Lee Sedol, one of the world’s greatest Go players, 4-1 in a five-game match in 2016.

Through this competition, Facebook hopes to showcase the NLE as a viable RL environment and to enable more AI/ML solutions. Helpfully, NetHack’s unusual combination of basic visuals and complex gameplay allows RL agents to be trained around 15 times faster than on the Atari benchmark. Using symbols instead of pixels lets AI learn quickly without wasting computational resources on simulation dynamics, which can be pretty expensive. Moreover, a single high-end GPU can train AI-based NetHack agents on hundreds of millions of steps per day through the TorchBeast framework, which supports further scaling through the addition of more GPUs. This gives the agent enough experience to learn and allows researchers to spend more time testing new ideas instead of waiting for results. Facebook also believes that open-sourcing NLE democratises research, especially in resource-scarce environments. NetHack therefore offers a challenge that can help build better methods without the high computational costs of other demanding simulation environments.
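
Part of what makes this speed possible is that the agent never sees rendered pixels: an observation is a handful of small arrays describing the dungeon symbolically. The sketch below shows roughly what that looks like, assuming NLE’s documented observation keys such as "glyphs" and "chars"; exact keys and array shapes may differ between versions.

```python
# Rough sketch of a symbolic NetHack observation in the NLE
# (assumes observation keys "glyphs" and "chars"; may vary by version).
import gym
import nle

env = gym.make("NetHackScore-v0")
obs = env.reset()

glyphs = obs["glyphs"]  # small integer grid encoding the visible dungeon map
chars = obs["chars"]    # the same map as raw ASCII codes, e.g. ord('@') for the player
print(glyphs.shape, chars.shape)  # roughly a 21 x 79 grid, tiny compared with pixel frames
```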

As mentioned above, games have been used as benchmarks for AI for quite some time. These developments do not merely improve game design or help figure out how to crack challenging games. They help sharpen models and improve them enough to one day diagnose diseases or predict complicated protein structures. Agents for these games, including NetHack, rely on reinforcement learning, a field also vital for traffic control systems, managing financial portfolios, and developing self-driving cars. According to Facebook AI, ‘Recent advances in reinforcement learning have been fuelled by simulation environments’ from games such as Minecraft or StarCraft II. The developments surrounding NetHack further encourage these advances in RL while offering a complex environment, faster simulation and substantially lower costs.

Given the rapid evolution of artificial intelligence, the need for high-quality data has become more pressing than ever before. With the gaming industry simultaneously progressing, it makes sense for researchers to leverage games as simulation tools for RL and computer vision research.
