In the video game adventure film ‘The Wizard’ (1989), Luke Edwards’ character Jimmy Woods discovers his innate gaming talent by taking part in a video game competition, winning the $50,000 grand prize by attaining the high score in Super Mario Bros. 3, precisely because everyone had underestimated his ability.
On the developers’ side of that story, anyone making a new game is tasked with balancing its difficulty to serve a wide community of players. A game that is too easy or too hard risks alienating a large subset of that community. Players are usually allowed to choose the gaming experience that works best for them, but some struggle to pick the right setting, making the game too challenging to master or putting them off playing altogether.
Game developers in today’s $200 billion gaming industry have relentlessly searched for new ways to closely monitor player engagement and behaviour. But a new breakthrough in AI might have pushed ‘manual difficulty selection’ closer to the digital grave.
Evaluating levels by ‘Dynamic Difficulty Adjustment (DDA)’
A group of scientists at the Gwangju Institute of Science and Technology (GIST) has developed a new technology that adjusts the difficulty of video games by estimating players’ emotions in real time. The paper, ‘Diversifying dynamic difficulty adjustment agent by integrating player state models into Monte-Carlo tree search’, published in ‘Expert Systems With Applications’, describes a dynamic model that tweaks the difficulty level to maximise player satisfaction.
Dynamic difficulty adjustment (DDA) is a technique for adaptively altering a game to make it easier or harder. One common way to achieve DDA is heuristic intervention and prediction: the game adjusts its difficulty once undesirable player states (such as boredom or frustration) are observed.
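The heuristic style of DDA can be sketched in a few lines. The function names, signals, and thresholds below are illustrative assumptions, not taken from any shipped game or from the GIST paper:

```python
# Hypothetical heuristic DDA: infer an undesirable player state from
# coarse in-game signals, then nudge the difficulty level in response.
# All names and thresholds here are illustrative assumptions.

def estimate_player_state(deaths_per_minute, actions_per_minute):
    """Crude heuristic: frequent deaths suggest frustration,
    low activity suggests boredom."""
    if deaths_per_minute > 3:
        return "frustrated"
    if actions_per_minute < 20:
        return "bored"
    return "engaged"

def adjust_difficulty(current_level, state):
    """Ease off when frustrated, ramp up when bored (levels 1-10)."""
    if state == "frustrated":
        return max(1, current_level - 1)
    if state == "bored":
        return min(10, current_level + 1)
    return current_level

# A struggling player on level 5 is stepped down to level 4.
level = adjust_difficulty(5, estimate_player_state(deaths_per_minute=4,
                                                   actions_per_minute=50))
```

The limitation the GIST team identifies is visible here: the heuristic only sees performance signals, and says nothing about whether the player is actually having fun.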
One of the earliest examples of DDA appeared in the 2005 survival horror game Resident Evil 4, which employed a system called the ‘Difficulty Scale’. This was unknown to most players, as it was mentioned only in the Official Strategy Guide. The system graded the player’s performance on a scale from 1 to 10, adjusting both enemy attacks and resistance based on the user’s performance.
Until now, developers have relied largely on DDA to crack the tough nut of balancing a video game’s difficulty, which is thought necessary to give players a pleasurable experience. Though useful, the strategy is limited by taking only the player’s performance into account, excluding the fun factor.
Twist to DDA Approach
Rather than focusing on the player’s performance, the team at GIST developed ‘DDA agents’ that adjust the game’s difficulty to maximise one of four aspects of a player’s experience: challenge, competence, flow, and valence. These DDA agents were trained via machine learning (ML) on data gathered from human players, who played a game against various artificial intelligences (AIs) and were then asked to answer a questionnaire about their experience.
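The core idea of a player-state model is to map in-game features to the self-reported ratings from such a questionnaire. A minimal sketch, assuming made-up features and toy data (the paper’s actual models and features are more sophisticated):

```python
# Illustrative player-state model: fit a tiny linear model mapping
# in-game features to a self-reported state rating (e.g. 'challenge'
# on a 1-7 scale). Features, data, and names are assumptions.

def train_state_model(features, ratings, lr=0.01, epochs=2000):
    """Stochastic gradient descent on squared error."""
    n = len(features[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, ratings):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_state(model, x):
    w, b = model
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Toy data: [win_rate, hits_landed_ratio] -> reported challenge rating.
# Games the player dominated were rated as less challenging.
X = [[0.9, 0.8], [0.5, 0.5], [0.1, 0.2]]
y = [2.0, 4.0, 6.0]
model = train_state_model(X, y)
```

Once such a model exists, a DDA agent no longer needs the questionnaire: it can estimate the player’s state from in-game features alone, which is exactly the property the GIST researchers highlight below.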
Talking exclusively to Analytics India Magazine, JaeYoung Moon, the study’s corresponding author and a PhD student at GIST, said, “There were two motivations that I started conducting this study. First, the prior dynamic difficulty adjustment (DDA) research had tended to adjust the difficulty to aim for a 50:50 win rate. But I saw some people enjoy difficult games but some want easy games depending on the player’s skill. So, I thought that it is not enough to provide a 50:50 game to every player. Second, I therefore tried to target encouraging the players’ internal states (e.g., enjoyment) by DDA. But, enjoyment is too subjective to measure in a single way. So, we targeted four different internal states (challenge, competence, valence, and flow) to measure enjoyment and diversify strategies of the game AI to encourage each state.”
The model is based on the Monte-Carlo tree search (MCTS) algorithm: each DDA agent uses actual and simulated game data to tune the opposing AI’s fighting style in a way that maximises a specific emotion, or ‘affective state’.
Associate Professor Kyung-Joong Kim of GIST says, “Once trained, our model can estimate player states using in-game features only”.
Monte-Carlo tree search is a decision-making algorithm that searches combinatorial spaces represented as trees: nodes denote states (configurations of the problem), while edges signify transitions (actions) from one state to another. The algorithm was originally proposed by Kocsis and Szepesvári (2006) for building computer players for the game of Go.
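The search described above can be sketched on a toy problem. Below is a minimal UCT-style MCTS for a simple Nim variant (take 1–3 stones; whoever takes the last stone wins). This is purely illustrative of the selection–expansion–simulation–backpropagation loop, not the paper’s agent, whose reward would instead come from a player-state model:

```python
import math
import random

# Minimal MCTS (UCT) on a toy Nim game: players alternately take 1-3
# stones, and taking the last stone wins. Illustrative sketch only.

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones            # stones remaining
        self.player = player            # player to move (1 or -1)
        self.parent, self.move = parent, move
        self.children = []
        self.visits, self.wins = 0, 0.0
        self.untried = [m for m in (1, 2, 3) if m <= stones]

    def ucb1(self, c=1.4):
        # Wins are counted from the perspective of the player who
        # moved into this node (i.e. self.parent.player).
        return self.wins / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def rollout(stones, player):
    """Play uniformly random moves to the end; return the winner."""
    while True:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        if stones == 0:
            return player               # this player took the last stone
        player = -player

def mcts(root_stones, iterations=3000):
    root = Node(root_stones, player=1)
    for _ in range(iterations):
        node = root
        # Selection: descend via UCB1 while fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # Expansion: add one untried move as a new child.
        if node.untried:
            m = node.untried.pop()
            node.children.append(Node(node.stones - m, -node.player,
                                      parent=node, move=m))
            node = node.children[-1]
        # Simulation: random playout (or exact result at a terminal node).
        if node.stones == 0:
            winner = node.parent.player  # the move into this node won
        else:
            winner = rollout(node.stones, node.player)
        # Backpropagation: credit the player who moved into each node.
        while node.parent is not None:
            node.visits += 1
            if winner == node.parent.player:
                node.wins += 1
            node = node.parent
        root.visits += 1
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

From a pile of 6, the only winning move is to take 2 (leaving a multiple of 4), which the search reliably converges to. In the GIST setting, the simulation step would score outcomes with the predicted affective state rather than a win/loss signal.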
In an experiment with 20 volunteers, the team verified that the proposed DDA agents could produce AIs that improved the players’ overall experience, whatever their preference. The research marks the first time ‘affective states’ have been incorporated directly into DDA agents, and could prove a catalyst for commercial games.
To ‘Gamify’ other fields
As immersion and realism take centre stage in gaming, users are increasingly looking for ways to feel connected within a game. One of the most popular tools many game creators are employing to foster engagement, produce new content, and build interactive narratives is AI.
“Our work can help the users of the education services to stay encouraged and motivated for the tasks they are learning. For example, if users learn harder tasks than their proficiency, they may want to quit the training task in the middle. Or, conversely, if users learn too easy tasks compared to their proficiency, they may be bored and lose their focus. To alleviate this problem, our approach will help the services to adjust the difficulty of the task so that the users can maintain their motivation and concentration for the jobs. In health care applications, it can be employed in similar ways. Our work can be applied in a way that provides adequate healthcare programs that patients can afford in their current states”, says JaeYoung Moon.
Developers will now be able to improve gameplay and find new revenue streams by gathering user data. Associate Professor Kim of GIST remarked, “Commercial game companies already have huge amounts of player data. They can use these data to model the players and solve various issues related to game balancing using our approach”. The improved approach also has potential in other fields that can be ‘gamified’, such as healthcare, exercise, and education.
Innovations like these relieve designers of the burden of juggling every task at once, from watching in-game behaviour to gauging player engagement, while AI adds value by deepening our understanding of how humans play video games. Implementing such models might take time. It may not cost as much as a Hollywood blockbuster, but it would surely look like one!