On March 16, French AI startup NukkAI announced on Twitter that, the following week, it would host a competition in which its algorithm would take on eight Bridge world champions. Unlike Chess or Go, Bridge is a game of cooperation and coded signalling between partners, played with incomplete information: players cannot see one another's cards, whereas in Chess each player can plan around the other's visible moves. For these reasons, Bridge was not considered a game in which AI would considerably improve on human performance; Microsoft co-founder and avid bridge player Bill Gates once said that Bridge would be one of the last games at which a computer could not better a human. Last week, NukkAI's NooK proved this false.
NukkAI's algorithm, called NooK, won 67, or 83 per cent, of the 80 sets. NooK is a hybrid algorithm that combines deep learning methods with symbolic AI methods. While deep learning is a technique in which interconnected neural nets teach themselves how to play through repeated rounds of self-evaluation, symbolic methods operate on explicitly defined rules. The more complex a game becomes, the greater the number of possible moves and the harder it is for purely symbolic AI methods to win. DeepMind's AlphaGo overcame this impediment when it used deep learning to defeat Go champion Lee Sedol in 2016. On the other hand, symbolic AI, too, can mitigate certain limitations of deep learning.
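To make the hybrid idea concrete, here is a minimal sketch of how a symbolic rule layer and a learned evaluator can divide the work in a card game. All names and the trivial heuristic are invented for illustration; NukkAI has not published NooK's architecture.

```python
# Hypothetical hybrid (neuro-symbolic) move selector.
# Cards are (rank, suit) tuples, e.g. (13, "S") for the king of spades.

def legal_moves(hand, trick):
    """Symbolic component: the rules of Bridge are encoded explicitly.
    A player holding cards in the suit led must follow suit."""
    if trick:
        suit_led = trick[0][1]
        follow = [c for c in hand if c[1] == suit_led]
        if follow:
            return follow
    return list(hand)

def score_move(card, trick):
    """Stand-in for the learned component: a trained network would return
    a value here; this toy heuristic just prefers low cards."""
    return -card[0]

def choose_move(hand, trick):
    """Symbolic rules prune the search space; the learned scorer
    picks among the moves that remain."""
    candidates = legal_moves(hand, trick)
    return max(candidates, key=lambda c: score_move(c, trick))

# Hearts were led, so the 5 of hearts is the only legal play.
hand = [(5, "H"), (13, "S"), (2, "C")]
trick = [(9, "H")]
print(choose_move(hand, trick))  # (5, 'H')
```

The division of labour is the point: the rules never need to be rediscovered from data, and the network only has to rank moves that are already legal.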
Deep learning has low explainability, meaning that the process by which a neural network arrives at its outcome is a mystery; this is why deep learning is often described as a 'black box.' In contrast, a neuro-symbolic approach has high explainability. Here, the algorithm, in this case NooK, learns the rules of the game before it starts practising to improve itself, much as a human acquaints themselves with a game's rules before playing it. This is why NooK could explain its decisions, making it a 'white box.'
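What "explaining its decisions" can look like in practice: a symbolic rule layer can report which rule produced a given play. The rules below are a hypothetical, drastically simplified fragment for illustration; NooK's actual rule set is not public.

```python
# Hypothetical white-box play function: returns the chosen card *and*
# a human-readable reason. Cards are (rank, suit) tuples.

def explainable_play(hand, trick):
    """Return (card, reason) for the current trick."""
    if trick:
        suit_led = trick[0][1]
        follow = sorted(c for c in hand if c[1] == suit_led)
        if follow:
            highest_out = max(c[0] for c in trick if c[1] == suit_led)
            winners = [c for c in follow if c[0] > highest_out]
            if winners:
                return winners[0], "took the trick with the cheapest winning card"
            return follow[0], "cannot win the trick, so discarded the lowest card in suit"
    return min(hand), "free choice: kept high cards and played the lowest"

card, reason = explainable_play([(5, "H"), (13, "H"), (2, "C")], [(9, "H")])
print(card, "-", reason)  # (13, 'H') - took the trick with the cheapest winning card
```

A pure neural network could output the same card, but not the sentence; the rule trace is what turns the black box white.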
Explainability has become a buzzword in AI, and for good reason. The higher an algorithm's explainability, the easier it is to trust AI in sensitive domains like, say, self-driving technology. The more provable an AI model's correctness, the more responsibly the model can be deployed into production. Bridge, a game in which communication strategy is key, demands explainability.
According to the study 'Neuro-Symbolic Artificial Intelligence,' published in 2021, the ideal neuro-symbolic method is one in which the neural nets retain their trainability and their robustness to flawed datasets, while the symbolic AI retains its high explainability and the ease with which human expertise can be built into its design.
NukkAI had been researching the subject for some time and had published a statistical approach, Probabilistic Logic Programming, to train its models to win at Bridge. The team also applied another technique, an alternative to the Monte Carlo "search" technique, to help NooK determine the best options for its next move.
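The article does not detail NukkAI's alternative, but the Monte Carlo baseline it replaces is easy to sketch. With hidden cards, a plain Monte Carlo ("determinization") search samples many complete deals consistent with what the player has seen, scores each candidate move on every sample, and averages. The toy deck, hand sizes, and payoff function below are all invented for illustration.

```python
import random

# Hypothetical Monte Carlo baseline for a hidden-information card game.
# Cards are plain integer ranks from a small toy deck.

def sample_opponent_hand(my_hand, seen, deck, size, rng):
    """Deal one opponent a random hand consistent with the known cards."""
    unseen = [c for c in deck if c not in my_hand and c not in seen]
    return rng.sample(unseen, size)

def payoff(move, opp_hand):
    """Stand-in for a rollout or full game simulation: did our card
    beat the best card the sampled opponent holds?"""
    return 1.0 if move > max(opp_hand) else 0.0

def monte_carlo_choice(my_hand, seen, deck, n_samples=200, seed=0):
    """Average each move's payoff over many sampled worlds, pick the best."""
    rng = random.Random(seed)
    totals = {move: 0.0 for move in my_hand}
    for _ in range(n_samples):
        opp = sample_opponent_hand(my_hand, seen, deck, size=3, rng=rng)
        for move in my_hand:
            totals[move] += payoff(move, opp)
    return max(totals, key=totals.get)

# Toy deck of ranks 1..12; we hold three cards and have seen two played.
deck = list(range(1, 13))
print(monte_carlo_choice(my_hand=[3, 7, 12], seen=[11, 10], deck=deck))  # 12
```

The weakness of this baseline, and presumably the motivation for an alternative, is cost: the sample count multiplies the work of every decision, and naive sampling ignores the inferences that bids and plays allow about where the hidden cards sit.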
The news drew attention to symbolic AI, with proponents of neuro-symbolic AI again calling for a future in which AI applies both the new (deep learning) and the old (symbolic) methods.
Still, NooK had certain limitations. The game was simplified by excluding bidding from the beginning. Traditionally, a game of Bridge begins with an auction in which the players bid on the number of tricks they think their side can win. The player who wins the auction is known as the declarer (the declarer's partner, whose cards are laid face up, is the dummy), and the winning bid becomes the contract. In this particular challenge, both NooK and the champions played as declarers. The challenge required the champions to play 800 consecutive deals divided into 80 sets of 10.
The opponents were bridge-playing robots regarded as world champions among machines. However, it is also common knowledge that robot champions come nowhere close to the best human players. Citing these drawbacks, sceptics such as Noam Brown, a research scientist at Meta AI, weighed in on the debate and questioned the results, since robots aren't as flexible as human beings. Besides this, the bidding that was excluded involves communication, usually one of the trickiest and most potentially deceptive parts of the game.