
Can AI Be A Good Teammate?


Recent advances in Artificial Intelligence (AI) have been a boon for many applied fields. Artificially intelligent systems are now all around us; chatbots, for instance, parse a user's query and reply with a relevant response. One of the most popular areas of AI research in recent years has been video games. Challenging yet easy to formalize, games are a useful platform for developing new AI methods and measuring how well they work. They also let researchers demonstrate that machines are capable of behaviour thought to require intelligence, without putting human lives or property at risk.

AI systems for video games typically rely on reinforcement learning (RL), a machine learning training method in which a self-learning agent improves through trial and error. Such agents can serve as either co-players or opponents, and some have even outperformed human players. However, despite their proven record of high individual performance, RL agents can make frustrating teammates when paired with human players, according to a study by AI researchers at MIT Lincoln Laboratory.


The Crux 

The study required cooperation between humans and AI agents in the card game ‘Hanabi’, and examined whether AI can demonstrate teaming intelligence, particularly with human teammates. The results showed that most players preferred the traditional, predictable rule-based AI systems over the more complex RL systems: the RL-based teammates sometimes behaved in ways humans found hard to anticipate, which hurt players’ trust in the agent, increased their mental workload, and changed their perception of risk.

Most recent RL research has been applied to single-player games like Atari Breakout or adversarial games like StarCraft and Go, where the AI is pitted against a human player or another game-playing bot.

But researchers have also been able to develop RL agents that learn games from scratch through pure self-play, without any human input. Over numerous episodes of gameplay, such an agent gradually goes from taking random actions to learning sequences of actions that maximize its cumulative reward.
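The trial-and-error loop described above can be sketched with tabular Q-learning on a toy game. This is a minimal illustration, not the algorithm used in the study: the five-state "chain" environment, the hyperparameters, and all function names here are invented for the example. The agent starts out acting randomly (exploration) and gradually learns that stepping right leads to the reward.

```python
# Minimal sketch of reinforcement learning by trial and error:
# tabular Q-learning on a hypothetical 5-state "chain" game (not Hanabi).
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode with reward
ACTIONS = [0, 1]      # 0 = step left, 1 = step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """Environment dynamics: move left or right; reward 1 only at the goal."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value table: q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[s][act])
            s2, r, done = step(s, a)
            # Q-learning update: nudge the estimate toward the bootstrapped return.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# After training, "step right" (action 1) dominates in every non-terminal state.
policy = [max(ACTIONS, key=lambda act: q[s][act]) for s in range(N_STATES - 1)]
print(policy)
```

The same loop, scaled up with neural networks in place of the table and self-play opponents in place of a fixed environment, is the basic recipe behind agents like the ones discussed in this article.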

A famous example is DeepMind’s AlphaGo in its 2016 match against Go world champion Lee Sedol. In the second game, AlphaGo played a move that analysts first thought was a mistake because it went against the intuitions of human experts. But that move ended up turning the tide in the AI’s favour, and AlphaGo went on to defeat Sedol.

In recent years, several research teams have explored the development of AI bots that can play Hanabi. Some of these agents used symbolic AI, where the engineers provided the rules of gameplay beforehand, while others used reinforcement learning.

Where Does AI Fall Short?

One key metric of teaming is trust, which players define as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability”.

Potential difficulties with trust include trust calibration (whether one’s trust in an agent is commensurate with its capabilities) and trust resolution (whether the range of situations in which a human trusts a system matches the system’s actual range of capabilities).

In the Hanabi experiment, players were paired with both SmartBot and Other-Play but were not told which algorithm was working behind the scenes. According to the participant surveys, the more experienced Hanabi players had a poorer experience with the Other-Play RL agent than with the rule-based SmartBot agent.


Image Source: Hanabi research paper

Not only were the scores no better with the RL teammate than with the rule-based agent, but human players consistently disliked playing with it. They found it unpredictable, unreliable and untrustworthy, and reported negative feelings even when the team scored well.

Image Source: Hanabi research paper

What’s Next? 

Humans disliking their AI teammates should concern researchers designing future systems to one day work alongside humans on real challenges, such as missile defence or complex surgery. Building agents that cooperate well with people, a capability called ‘teaming intelligence’, is the next frontier in AI research.
