AlphaGo from DeepMind has been the buzzword for AI mastery over games in recent times. From beating South Korean professional Go player Lee Sedol in 2016 to repeating the feat in 2017 against Chinese professional Ke Jie, AlphaGo has long since asserted its dominance over humans. Since then, however, it has retired from the ‘sport’, and DeepMind is looking to apply the underlying technology elsewhere.
But what is the next challenge for AI after mastering Go?
The use of vintage computer games to train AI has caught on with companies and scientists alike. From multiplayer eSports to first-person shooters, AI is being trained to adapt to challenging game environments, and the skills it learns there may one day help it negotiate real-life scenarios.
Here are a few examples of how AI is acing popular computer games.
Space Invasion: StarCraft II
After the retirement of AlphaGo, DeepMind has turned its sights towards StarCraft. DeepMind, in collaboration with Blizzard Entertainment, has turned the game into a research platform for AI.
In 2017, a package to train AI to play the game was released. The toolkit included a dataset comprising over 65,000 replays from past professional StarCraft II games to help the AI learn human strategy. It also included a collection of mini-games that isolate gameplay components, such as resource collection and exploration, so that specific skills can be perfected.
However, AI repeating the success of Go is still a long shot. In a first-of-its-kind StarCraft matchup between human and AI players held at Seoul’s Sejong University in 2017, Song Byung-gu, a professional StarCraft player, defeated all four bots in less than 30 minutes. Among the fallen was CherryPi, an AI bot designed by Facebook. But the AI bots did have their moments. A Norwegian AI bot was seen carrying out nearly 19,000 actions per minute, as against a few hundred by human players. On the defensive abilities of the bots, Song commented, “The way they managed their units when they defended against my attacks was stunning at some points.”
Mystical Warfare: Defense Of The Ancients 2
Elon Musk may have quit the board of the AI research group OpenAI by now, but before he did, a bot developed by the group made news for demolishing Danylo “Dendi” Ishutin, a Dota 2 champion, in two straight rounds at an exhibition match held as part of The International in 2017.
The match was played in a considerably lower-stakes one-on-one format, unlike the original format in which the opposing teams comprise five players each.
The bot learned through trial and error rather than extensive programming. Playing against itself in thousands of games, the bot retained the responses that helped it win and discarded the ones that resulted in death. Unassisted, it uncovered strategies and devised new plays that professionals are now seeking to incorporate into their own gameplay.
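The keep-what-wins loop described above can be sketched in miniature. The toy below assumes nothing about OpenAI’s actual code: two copies of a strategy play a simple number-picking game (whoever is closer to two-thirds of the pair’s average wins), and after each head-to-head the winning variant is kept while the loser is discarded.

```python
import random

def winner(x, y):
    """Both players pick a number; whoever is closer to 2/3 of the average wins."""
    target = 2 / 3 * (x + y) / 2
    return 0 if abs(x - target) <= abs(y - target) else 1

def self_play(rounds=2000, seed=0):
    rng = random.Random(seed)
    incumbent = 50.0  # the number the bot currently plays
    for _ in range(rounds):
        # try a small random variation of the current strategy
        mutant = min(100.0, max(0.0, incumbent + rng.gauss(0, 5)))
        # keep whichever variant wins the head-to-head, discard the other
        if winner(mutant, incumbent) == 0:
            incumbent = mutant
    return incumbent
```

With no outside guidance, the strategy drifts towards the game’s equilibrium of zero, purely by playing against itself — the same principle, at a vastly smaller scale, as the Dota 2 bot’s training.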
Despite forcing Ishutin to forfeit at the end of the second round, the technology is still a work in progress. The following month, it was reported that Dominik “Black” Reitmeier, a professional Dota 2 player, had beaten the AI three times.
OpenAI says it will continue to develop the software to play unrestricted games. It also intends to field a team of five AI bots to play the game in its original five-on-five format. A mixed team of AI and human players is also a possibility in the future.
Fighting To The Death: DOOM
Teaching bots to play the first-person shooter (FPS) Doom is one way scientists are taking deep reinforcement learning to a whole new level. Despite being inferior in graphics to many newer FPS titles, Doom remains a challenge even after more than 20 years because of the limited visibility within its 3-D environment.
Devendra Chaplot and Guillaume Lample, two PhD students at Carnegie Mellon University’s School of Computer Science, employed deep-learning methods to train Arnold, their AI agent, to navigate the game’s maze-like environment. To train Arnold, they created a unique architecture combining current techniques: a Deep Q-Network for navigation, and a Deep Recurrent Q-Network for tracking opponent movements and predicting where to shoot.
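The learning rule at the heart of a Deep Q-Network can be shown without the neural network. The sketch below is a toy illustration of Q-learning — not the authors’ architecture — using a lookup table on a five-cell corridor: the agent starts at cell 0, must reach cell 4, and learns which action has the higher long-term value in each state.

```python
import random

def train_q(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a 5-cell corridor; the goal is cell 4."""
    rng = random.Random(seed)
    actions = (-1, 1)  # step left, step right
    q = {(s, a): 0.0 for s in range(5) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy: mostly exploit the best-known action, sometimes explore
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = min(4, max(0, s + a))   # move, staying inside the corridor
            r = 1.0 if s2 == 4 else 0.0  # reward only at the goal
            best_next = 0.0 if s2 == 4 else max(q[(s2, act)] for act in actions)
            # the temporal-difference update that a DQN approximates with a neural net
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q
```

In a game like Doom the table is far too large to enumerate, which is why a DQN replaces it with a neural network that maps raw screen pixels to the same action values.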
They presented a paper on their work before participating in the Visual Doom AI Competition, where AI agents compete against each other in deathmatches, held in Greece in September 2016. Arnold stood second in both tracks of the competition — the ‘Limited Deathmatch’ track as well as the ‘Full Deathmatch’ track — with bots from Facebook and Intel beating it to first place. But in the latest edition of the competition, Arnold and a few other bots managed to leave Facebook and Intel behind.
Holding ‘Em At Poker
An exciting recent development in this space is AI’s venture into poker. Libratus, an AI developed by a professor-student team from Carnegie Mellon University, defeated top human players at heads-up no-limit Texas Hold’em poker. The 20-day competition, involving over 1,20,000 hands, was held at Rivers Casino in Pittsburgh in January 2017, where the AI won over $1.8 million in chips using a three-fold approach.
Libratus has three main modules: one computes an overall ‘blueprint’ strategy before the match, another refines that strategy in real time during play, and the third, the machine-learning component, discerns flaws that opponents have found in the blueprint and patches them overnight.
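The flavour of this style of strategy computation can be illustrated with regret matching, a building block of the counterfactual-regret methods behind modern poker AIs — the toy below is an illustrative sketch, not Libratus’s actual code. Two regret-matching players face off at rock-paper-scissors; each raises the probability of the actions it regrets not having played, and their average strategies drift towards the game’s equilibrium of one-third each.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return [0, 1, -1][(a - b) % 3]

def regret_matching(iters=20000, seed=0):
    rng = random.Random(seed)
    regret = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iters):
        strats, moves = [], []
        for p in range(2):
            # play in proportion to positive regret; uniform if none yet
            pos = [max(r, 0.0) for r in regret[p]]
            total = sum(pos)
            s = [x / total for x in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS
            strats.append(s)
            moves.append(rng.choices(range(ACTIONS), weights=s)[0])
        for p in range(2):
            me, opp = moves[p], moves[1 - p]
            for a in range(ACTIONS):
                # regret: how much better action a would have done than the move played
                regret[p][a] += payoff(a, opp) - payoff(me, opp)
                strat_sum[p][a] += strats[p][a]
    # the *average* strategy over all rounds is what converges to equilibrium
    total = sum(strat_sum[0])
    return [x / total for x in strat_sum[0]]
```

Poker’s enormous game tree makes the real computation incomparably harder, but the principle — minimising regret against an adapting opponent — is the same.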
“Due to the ubiquity of hidden information in real-world strategic interactions, we believe the paradigm introduced in Libratus will be critical to the future growth and widespread application of AI,” the researchers said on the AI’s success.
Why Use New Technology To Play Games?
While bots that play games like StarCraft have been around for years, they relied on strategies programmed by their designers, not on machine learning that developed strategic abilities of its own. Unlike Go or chess, StarCraft, Dota 2 and poker are ‘imperfect information games’, in which not all of the opponents’ moves are visible. Players also depend on memory to recall the opponents’ positions or moves, making gameplay a matter of anticipation. And in games like poker, there are additional elements such as mind games and bluffing.
“This is a step towards building AI systems which accomplish well-defined goals in messy, complicated situations involving real humans,” said OpenAI in their blog.
In future, this could help AI carry out real-world tasks such as traffic management, delivery routing and strategic planning. It may also help AI programs learn situational skills in health, cybersecurity, the military, the economy and the environment — such as managing epidemics or recessions — and offer sound suggestions to businesses and governments.
An unapologetic movie buff with a special admiration for Marlon Brando and Stanley Kubrick, Jeevan is a postgraduate student in Journalism and Mass Communication. He hopes to make an impact with his uncompromising reportage some day.