GPT-3 was initially released on an invite-only basis, and many of the programmers and engineers who got early access built interesting demos. AI/ML expert Shameed Sait developed something else: what Gen-Zers call the fear of missing out (FOMO). The feeling, however, did not last long. By the end of 2021, GPT-3 was made publicly accessible, and Sait hasn’t looked back since. A few interesting personal projects later, Sait had a eureka moment. An avid chess player, he asked himself – what if one taught GPT-3 how to play Chess?
Sait is a seasoned technologist with more than 17 years of experience, including eight years in machine learning and data science. In the field of artificial intelligence, he has 17 patents and multiple publications to his credit. He presently works as the Head of Artificial Intelligence at TMRW (a GEMS Education company), where he is responsible for developing cutting-edge AI Edtech solutions.
Gamification is considered one of the best ways to test the performance of a tool or software. Games like Chess, Poker and Go are often used for such ‘testing’ (think AlphaGo and IBM’s Deep Blue). Sait took the same route. But, as he pointed out, teaching GPT-3 (he used the Da Vinci model) to play Chess was different from building other AI-based Chess-playing systems, given that GPT-3 is a language model. Sait came up with an ingenious solution to this challenge – he framed playing Chess as a text generation problem.
GPT-3 as a chess player
Each square on the chessboard is denoted by a letter and a number – the columns (files) are labelled ‘a’ to ‘h’, and the rows (ranks) are numbered 1 to 8. To frame Chess as a text generation problem, Sait used Portable Game Notation (PGN). He said, “Chess games are recorded as a series of notations called PGN. I used PGN to frame playing Chess as a text generation problem. I fed in the input in PGN and asked the system to predict the next move.”
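The framing can be sketched in a few lines of Python. This is an illustrative reconstruction, not Sait’s actual code – the helper name and the commented-out API call are assumptions:

```python
# Sketch: frame chess as text generation. The moves played so far are
# rendered as PGN movetext, and the language model is asked to continue
# the text, which amounts to predicting the next move.

def build_pgn_prompt(moves):
    """Render a move list as PGN movetext, e.g. '1. e4 e5 2. Nf3 '."""
    parts = []
    for i, move in enumerate(moves):
        if i % 2 == 0:                     # White's move: prefix the move number
            parts.append(f"{i // 2 + 1}. {move}")
        else:                              # Black's move: no number prefix
            parts.append(move)
    return " ".join(parts) + " "           # trailing space invites a completion

prompt = build_pgn_prompt(["e4", "e5", "Nf3", "Nc6"])
# A completion request (model name and parameters are assumptions) would
# then look roughly like:
# openai.Completion.create(model="davinci", prompt=prompt, max_tokens=5)
```

The trailing space matters: it leaves the prompt mid-sequence, so the model’s most natural continuation is the next move rather than a new sentence.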
Sait further said that the GPT-3 model played extremely well in the ‘opening moves’. He said that its superior performance at the beginning of the game can be attributed to how GPT-3 is trained. “There are a handful of textbook moves which are generally played at the beginning of every chess game. Since GPT-3 is trained on a large corpus of web-based text, it is quite possible that this must have also included some Chess data,” he explained.
The problem starts from the middle game onwards. “Unlike the opening, moves in the subsequent part of the game are not predefined. With every move, the number of permutations and combinations for the next move grows exponentially. It is famously said that the number of possible games of chess is greater than the number of atoms in the observable universe! It is highly unlikely that GPT-3 would have been trained on such a huge set of moves. Hence the quality of the game played by the GPT-3 system deteriorates from there on; it starts making some rookie mistakes, losing some of its pieces, and the next thing you know, it is vulnerable to a checkmate,” said Sait. In some cases, Sait observed that GPT-3 even started making illegal moves.
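Since a language model’s output is just text, nothing stops it from emitting an illegal move. One way to guard against this is to validate each completion against the legal moves in the current position before playing it – a minimal sketch, assuming the legal-move set comes from a chess library such as python-chess (this filter is an illustration, not part of Sait’s described setup):

```python
# Sketch: a minimal legality filter for model-proposed moves.
# In practice, the set of legal SAN moves for the current position would
# come from a chess library (e.g. python-chess); here it is passed in.

def filter_move(proposed, legal_moves):
    """Return the matching legal move, or None so the caller can re-prompt."""
    move = proposed.strip().rstrip("+#")       # normalise check/mate suffixes
    for legal in legal_moves:
        if legal.rstrip("+#") == move:
            return legal
    return None

legal = {"e4", "d4", "Nf3", "Nc3"}
filter_move("e4", legal)    # a legal completion: play it
filter_move("Ke2", legal)   # illegal in this position: reject and re-prompt
```

Rejected completions can simply trigger another request with the same prompt, at the cost of an extra API call.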
He said that GPT-3 is very different from other Chess-playing AI systems. Those systems usually play at a consistent level and only occasionally pick a bad move; in a way, they learn from their mistakes. GPT-3, on the other hand, is unlike any other system – its performance gradually declines simply because it has not seen such data.
When asked if GPT-3 is merely mimicking the data (moves, in this case) that it has learnt, Sait shared an interesting perspective. “So what GPT-3 does is definitely not intelligence; it cannot think and, in this case, is not even aware that it is playing a game. That said, I don’t agree that it is mimicking the data it is trained on. I would say that it is creative in that way; it tries to innovate based on the data it is trained on,” said Sait.
GPT-3 for WhatsApp texts
Sait has taken a fancy to GPT-3, which has led him to carry out several other projects. He tells us about the time he used GPT-3 to predict WhatsApp texts between two of his friends. “We have a friends’ WhatsApp group where we sometimes indulge in political debates. For example, the Trump vs Biden debate. I thought it would be fun to use GPT-3 to predict a chat between two of the group members – one supporting Trump and the other supporting Biden. I trained the system on some of the WhatsApp chats and tried predicting the discussion. The result was really astonishing. GPT-3 was able to predict the conversation, even capturing some of the subtle nuances,” said Sait.
He went on to post the conversation on the group, and needless to say, his friends were surprised and impressed in equal measure.
Concluding the discussion, Sait also spoke about the ethical challenges that GPT-3 might pose. He called on the research team working on the system to be wary of these issues going forward.