Can AI Be A Good Teammate?

Recently, researchers have been able to develop a few RL agents that can learn games from scratch through pure self-play without any human input.

Recent advancements in Artificial Intelligence (AI) have been a boon for various applied fields. Artificially intelligent systems are present everywhere around us today; chatbots, for instance, interpret a user's query and reply with a relevant result. Recently, one of the most popular areas of AI research has been video games. Challenging yet easy to formalize, games are a useful platform for developing new AI methods and measuring how well they work. Video games can also demonstrate that machines are capable of behaviour thought to require intelligence, without putting human lives or property at risk.

AI systems for video games typically rely on reinforcement learning (RL), a machine learning method in which an agent teaches itself through trial and error. Such self-learning agents can serve as either co-players or opponents, and some have even outperformed human players. However, despite their proven record of high individual performance, RL agents can become frustrating teammates when paired with human players, according to a study by AI researchers at MIT Lincoln Laboratory.

The Crux 

The study involved cooperation between humans and AI agents in the card game ‘Hanabi’ and examined whether AI can demonstrate teaming intelligence, particularly with human teammates. The results showed that most players preferred traditional, predictable rule-based AI systems over complex RL systems: the RL-based teammates often behaved in ways players found hard to anticipate, and they fared poorly on subjective measures such as trust, mental workload, and risk perception.


The recent research has mostly been applied to single-player games like Atari Breakout or adversarial games like StarCraft and Go, where the AI is pitted against a human player or another game-playing bot.

But researchers have also been able to develop a few RL agents that can learn games from scratch through pure self-play, without any human input. Over numerous episodes of gameplay, an RL agent can gradually go from taking random actions to learning sequences of actions that maximize its reward.
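To make that trial-and-error loop concrete, here is a minimal tabular Q-learning sketch. The tiny ‘chain’ environment, the hyperparameters, and every name in it are invented for illustration; this is not the algorithm behind any of the agents discussed here.

```python
# A minimal tabular Q-learning sketch showing how an RL agent moves from
# random actions to reward-maximising ones. The chain environment is a toy
# stand-in, not any of the games discussed in the article.
import random

N_STATES, ACTIONS = 5, [0, 1]          # move left (0) or right (1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Reward of 1 only for reaching the rightmost (goal) state."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Standard Q-learning update toward the bootstrapped target.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy heads right from every non-terminal state.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```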

A famous example is DeepMind’s AlphaGo, which faced Go world champion Lee Sedol in 2016. In the second game, AlphaGo played its now-famous ‘move 37’. Analysts first thought the move was a mistake because it went against the intuitions of human experts, but it ended up turning the tide in favour of the AI player, which went on to defeat Sedol.

In recent years, several research teams have explored the development of AI bots that can play Hanabi. Some of these agents used symbolic, rule-based AI, where the engineers provided the rules of gameplay beforehand, while others used reinforcement learning.
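To illustrate the rule-based approach, here is a minimal Hanabi-style policy in the spirit of such symbolic agents. These are not SmartBot’s actual rules, and the GameView interface below is entirely hypothetical; it only sketches how a fixed priority list of conventions can drive play.

```python
# A minimal, illustrative rule-based Hanabi policy. NOT SmartBot's actual
# rules; the GameView interface here is entirely hypothetical.
from dataclasses import dataclass

@dataclass
class Card:
    colour: str  # e.g. "red"
    rank: int    # 1..5

@dataclass
class GameView:
    """Hypothetical snapshot of what one player can see."""
    fireworks: dict      # colour -> highest rank played so far
    known_cards: list    # (slot, Card) pairs fully identified by hints
    teammate_hand: list  # (slot, Card) pairs (teammates' hands are visible)
    hint_tokens: int

    def playable(self, card):
        # A card is playable if it extends its colour's firework pile by one.
        return card.rank == self.fireworks.get(card.colour, 0) + 1

def choose_action(view):
    """Fixed priority list: play a sure card, else hint, else discard."""
    # Rule 1: play a card we know for certain is playable.
    for slot, card in view.known_cards:
        if view.playable(card):
            return ("play", slot)
    # Rule 2: if hints remain, point the teammate at a playable card.
    if view.hint_tokens > 0:
        for slot, card in view.teammate_hand:
            if view.playable(card):
                return ("hint", slot)
    # Rule 3: otherwise recover a hint token by discarding the oldest card.
    return ("discard", 0)

view = GameView(fireworks={"red": 1}, known_cards=[(2, Card("red", 2))],
                teammate_hand=[(0, Card("blue", 3))], hint_tokens=3)
print(choose_action(view))  # ('play', 2): red 1 is down and we hold the red 2
```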

Where Does AI Fall Short?

One key metric of teaming is trust, which the study defines as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability”.

Potential difficulties with trust include trust calibration, which asks whether one’s trust in an agent is commensurate with its capabilities, and trust resolution, which asks whether the range of situations in which a human trusts a system matches the system’s actual range of capabilities.
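To make those two definitions concrete, here is a small hypothetical sketch. The situations, sets, and numbers below are invented purely for illustration and do not come from the Hanabi study.

```python
# A hypothetical sketch of the two trust notions defined above. Nothing here
# comes from the Hanabi study; the situations and numbers are invented.

situations = ["early_game", "mid_game", "end_game", "low_hints"]

# Situations where the human trusts the agent vs. those it handles well.
trusted = {"early_game", "mid_game", "end_game"}
capable = {"early_game", "mid_game"}

# Trust calibration: does overall trust match overall capability?
trust_level = len(trusted) / len(situations)  # 0.75
capability = len(capable) / len(situations)   # 0.50
print("over-trust" if trust_level > capability else "calibrated or under-trust")

# Trust resolution: does the *set* of trusted situations match the capable set?
print("mis-resolved in:", trusted - capable)  # {'end_game'}: trusted but not capable
```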

In the Hanabi experiment, players were paired with either SmartBot or Other-Play but weren’t told which algorithm was working behind the scenes. According to the participant surveys, the more experienced Hanabi players had a poorer experience with the Other-Play RL algorithm than with the rule-based SmartBot agent.

Image Source: Hanabi Research Paper

Not only were the scores no better with the RL teammate than with the rule-based agent, but human players consistently hated playing with the RL teammate. They found it unpredictable, unreliable and untrustworthy, and felt negative about it even when the team scored well.

Image Source: Hanabi Research Paper

What’s Next? 

Humans hating their AI teammates should concern researchers designing future technologies that will one day work with humans on real challenges, like defending against missiles or performing complex surgery. This dynamic, called ‘teaming intelligence’, is the next frontier in AI research.

Victor Dey
Victor is an aspiring Data Scientist and holds a Master of Science in Data Science & Big Data Analytics. He is a researcher, a data science influencer and a former university football player. A keen learner of new developments in Data Science and Artificial Intelligence, he is committed to growing the Data Science community.
