
Timeline Of Games Mastered By Artificial Intelligence


One of the earliest experiments in testing machine intelligence has been to make AI learn and master games played by humans. Games form the perfect test-bed for AI skills. Today, an AI called Pluribus has learnt to play Poker well enough that we can say AI has conquered the game. Here is a look at the timeline of the games that AI has mastered so far.

1951: The first working game-playing AI programs were written at the University of Manchester. These programs, which ran on the Ferranti Mark 1 machine, could play Checkers and Chess.

1952: Arthur Samuel of IBM began work on the first game-playing program capable of competing against human players, choosing the game of Checkers.

1955: Samuel completed a version of the program that could learn to play on its own.

1990: Gerald Tesauro at IBM wrote TD-Gammon, a program that learnt Backgammon and was dedicated to demonstrating the power of reinforcement learning. It showed that an AI capable of competing at championship level in Backgammon could be built with the computing technology and programming skills available at the time.
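
At the heart of the reinforcement learning approach behind TD-Gammon is the temporal-difference (TD) update, which nudges the value estimate of a position toward the estimate of the position that follows it. The sketch below shows the plain TD(0) rule in Python with made-up position labels; TD-Gammon itself used TD(lambda) with a neural network over Backgammon positions, so this is only an illustration of the idea, not Tesauro's implementation.

# A minimal sketch of a temporal-difference value update, with hypothetical states.
def td_update(value, state, next_state, reward, alpha=0.1, gamma=1.0):
    """Nudge value[state] toward the bootstrapped target reward + gamma * value[next_state]."""
    old = value.get(state, 0.0)
    target = reward + gamma * value.get(next_state, 0.0)
    value[state] = old + alpha * (target - old)

# Toy usage with made-up position labels:
values = {}
td_update(values, "position_a", "position_b", reward=0.0)    # intermediate move, no reward yet
td_update(values, "position_b", "terminal_win", reward=1.0)  # the game was won from position_b
print(values)  # {'position_a': 0.0, 'position_b': 0.1}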

1994: A computer program called Chinook played Checkers. It beat Don Lafferty, then the second-highest rated player, and won the US National Tournament in the game by a wide margin.

1997: IBM built the Deep Blue machine, which played Chess and defeated Garry Kasparov, the reigning world champion. This was the first time an AI had defeated a world champion in match play.

Garry Kasparov playing chess against the IBM Deep Blue.

2007: Checkers was solved by a computer program that sifted through roughly 500 billion billion possible positions. The resulting program cannot be beaten by human players.

2011: IBM’s supercomputer Watson, built with natural language processing capabilities, mastered the quiz show Jeopardy!. It competed against two champions of the show, and after three matches Watson had won $77,147 in prize money while its two human opponents collected $24,000 and $21,600.

2014: Google’s DeepMind began working on AlphaGo, a deep-learning-based Go program that, within a few years, would compete with and even beat the world’s top players of the game.

2015: AI began to master not just board games, where winning can be calculated mathematically within a restricted set of moves, but also real-time strategy games like DotA 2: OpenAI, the firm co-founded by Elon Musk, started using reinforcement learning to build an AI capable of playing DotA. The same year, Google DeepMind’s AlphaGo defeated the three-time European Go champion Fan Hui by 5 games to 0.

2016: DeepMind’s AI proved its skill at Go, one of the most difficult board games, with more possible moves than any other. Lee Sedol, the Go world champion, was defeated by the AI 4 games to 1. The AI had learnt the game by observing thousands of human games and then playing many more against itself.

Lee Sedol playing Go against AlphaGo

2017: OpenAI’s bot competed against professional players at The International, the world’s biggest DotA 2 tournament. The AI was trained on the one-player version of the game, which is considerably simpler than the five-player team version.

Researchers from Carnegie Mellon University (CMU) built an AI system called Libratus that played heads-up Texas Hold ‘em Poker against 4 expert players. The tournament lasted 20 days and spanned more than 120,000 hands of poker, and Libratus kept refining its strategy as play went on. It defeated each of its 4 human opponents, all top professionals, individually and by a huge margin.

(R) Professional poker player Jason Les plays Texas Hold’em Poker with Libratus. (L) Computer scientist Tuomas Sandholm, one of the bot’s creators.

DeepMind followed up with an even stronger version called AlphaGo Zero. Its training involved zero human input: it did not learn from watching humans play or from playing against them, but instead mastered the board game by playing millions of games against itself, starting from nothing but the basic rules. Its successor AlphaZero mastered chess in about 4 hours and defeated the strongest chess engine of the time, Stockfish 8, winning 28 out of 100 games and drawing the rest.
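
To make the self-play idea concrete, here is a toy sketch in Python of a program that is given only the rules of a game (tic-tac-toe here, for brevity) and improves a value table purely by playing against itself. It is only an illustration of the principle; AlphaGo Zero and AlphaZero combine self-play with deep neural networks and Monte Carlo tree search, none of which appears in this toy version.

# A toy sketch of self-play learning in the spirit of AlphaGo Zero / AlphaZero,
# NOT DeepMind's system: no neural network and no tree search, just a tabular
# value estimate for tic-tac-toe learned entirely from games the program
# plays against itself, given only the rules.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

value = defaultdict(float)   # position -> average outcome for the player who just moved
visits = defaultdict(int)

def self_play_game(epsilon=0.2):
    """Play one game against itself; return visited (position, mover) pairs and the winner."""
    board, history, player = ["."] * 9, [], "X"
    while True:
        moves = [i for i, cell in enumerate(board) if cell == "."]
        if not moves:
            return history, None                  # board full, no winner: a draw
        if random.random() < epsilon:             # explore
            move = random.choice(moves)
        else:                                     # exploit the learned values
            def resulting(m):
                nb = board[:]
                nb[m] = player
                return "".join(nb)
            move = max(moves, key=lambda m: value.get(resulting(m), 0.0))
        board[move] = player
        history.append(("".join(board), player))
        if winner(board):
            return history, player
        player = "O" if player == "X" else "X"

for _ in range(20000):
    history, result = self_play_game()
    for position, mover in history:
        # +1 if the mover eventually won from here, -1 if they lost, 0 for a draw
        outcome = 0.0 if result is None else (1.0 if result == mover else -1.0)
        visits[position] += 1
        value[position] += (outcome - value[position]) / visits[position]

print("positions evaluated:", len(value))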

The University of Alberta’s DeepStack showcased an AI that could beat professional Poker players using what its creators described as an artificial form of intuition.

Maluuba, a deep learning startup acquired by Microsoft, had developed an ML method called the Hybrid Reward Architecture (HRA). Applying it to Ms Pac-Man, the method split the game across more than 150 individual agents, each with its own task. With this approach, the AI learnt to reach the game’s top score of 999,990, something no human player has achieved.
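
The idea behind HRA is to split a single hard-to-learn reward into many simple components, give each component its own learner, and add their preferences together when choosing an action. The sketch below illustrates that decomposition in Python with two made-up reward components; it is not Maluuba’s Ms Pac-Man system, which used around 150 such components backed by neural-network learners.

# A toy sketch of the Hybrid Reward Architecture (HRA) idea, not Maluuba's
# actual Ms Pac-Man agent: the reward signal is split into components, each
# component gets its own simple value learner ("head"), and actions are
# chosen by summing the heads' estimates. All names and rewards here are
# made up for illustration.
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]

class RewardHead:
    """Learns action values for one reward component via a running average."""
    def __init__(self):
        self.q = defaultdict(float)
        self.n = defaultdict(int)

    def update(self, state, action, reward):
        key = (state, action)
        self.n[key] += 1
        self.q[key] += (reward - self.q[key]) / self.n[key]

def choose_action(heads, state):
    # Aggregate step: sum every head's estimate for each action, pick the best.
    return max(ACTIONS, key=lambda a: sum(h.q[(state, a)] for h in heads))

# Toy usage with two hypothetical components: chasing a pellet, avoiding a ghost.
pellet_head, ghost_head = RewardHead(), RewardHead()
for _ in range(200):
    action = random.choice(ACTIONS)
    pellet_head.update("corridor", action, 1.0 if action == "left" else 0.0)
    ghost_head.update("corridor", action, -1.0 if action == "up" else 0.0)

print(choose_action([pellet_head, ghost_head], "corridor"))   # expected: 'left'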

2018: OpenAI returned to The International, this time with OpenAI Five, an AI that had learnt to play as a team of five. A year earlier, its bot had won a 1v1 demonstration game against professional Dota 2 player and champion Dendi. Although OpenAI Five could not win its matches against the professional teams, it displayed an exceptional level of capability.

2019: An AI called Pluribus competed against professionals at six-player Texas Hold’em and won. This Poker AI was capable of calculating several moves ahead and making decisions based on that look-ahead, and it also found strategies that human players do not typically adopt.

Future Of AI In Games

With these advances, it is clear that AI is now capable of beating human champions at a wide range of games, no longer restricted to board games but extending to strategy games like DotA 2 and card games like Poker. These capabilities lend weight to the prediction that AI will go on to change the world by 2050.
