Games have become a go-to testbed for AI researchers, offering a relatively reliable way to benchmark machine performance against human intellect.
In the past, AI has been pitted against humans in chess, poker, Go and more. The standard methods usually require years of game time to reach human-level performance in complex games such as Go and StarCraft.
Today AI researchers are considering a new contender in the form of Minecraft to investigate intelligence in AI agents.
Minecraft was created by Swedish developer Markus Persson and released by Mojang in 2011. It went on to become the best-selling video game of all time, selling over 180 million copies across all platforms.
Minecraft, like Lego, is only as good as the creativity of its users. Its blocks can combine to form agile dolphins or immovable wooden stumps. In either case, even though the fundamental substance is the same, the many permutations and combinations of simple pieces give rise to complex systems that are inexplicable yet useful. Intelligence, too, is one kind of emergence in a complex system that is barely understood.
In Minecraft, players explore an intentionally blocky, pixelated procedurally-generated 3D world, and try to discover and extract raw materials and build things depending on the game mode. Players can also team up with or compete against other players in the same world.
What Does Minecraft Have To Do With Intelligence?
Though intelligence in the context of AI popularly means making predictions with diligent data-crunching algorithms, intelligence in its truest sense has never been achieved; so far, it exists only in sci-fi novels.
One way to make a machine more intelligent is to teach it a few things and have it replicate them in a new setting, or to have it act on information it has not been trained on.
Research labs like OpenAI and DeepMind conduct similar reinforcement learning experiments.
Minecraft is a good virtual training ground where players can display a range of intelligent behaviours. Players get to learn Minecraft’s version of physics, discover recipes that transform materials into resources or tools, and can even build a working computer inside the game.
The game is, in effect, rigged in favour of a player’s creativity: an innovative gamer is more likely to enjoy it.
To exploit the many opportunities Minecraft offers the realm of imitation learning, researchers from Microsoft and Carnegie Mellon University have joined hands to host a competition called MineRL.
Researchers believe the contest will have an impact beyond locating Minecraft gems, by inspiring coders to push the limits of imitation learning.
Why MineRL Is A Big Deal
The coding competition, known as MineRL (pronounced ‘mineral’), challenges contestants to use imitation learning to teach an AI to play the game.
“Exploration is really, really difficult,” says William Guss, head of the MineRL Competition’s organizing team.
MineRL is designed to uncover new strategies in imitation learning, which is thought to have an edge over another popular technique, reinforcement learning.
Participants are tasked with developing a system to obtain a diamond in Minecraft using only four days of training time.
To facilitate solving this hard task with few samples, a dataset of human demonstrations was also provided.
“Imitation learning gives you a good prior about your environment,” Guss adds.
The dataset ultimately captures a massive 60 million examples of actions that could be taken in a given situation, and teams were given 1,000 hours of recorded behaviour for training. These recordings represent one of the first, and largest, datasets devoted specifically to imitation-learning research.
Imitation learning, the researchers posit, can improve the efficiency of the learning process by mimicking how humans, or even other AI algorithms, tackle the task.
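The core idea can be seen in a minimal behavioural-cloning sketch. The toy environment, states, and actions below are all hypothetical stand-ins, not the actual MineRL API: the agent simply learns a state-to-action mapping from recorded demonstration pairs and then reuses it as a policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy demonstrations: in state s (0..3), the "human" always takes action s % 2.
states = rng.integers(0, 4, size=500)
actions = states % 2

# Simplest possible imitation: tabulate the most common
# demonstrated action for each state.
policy = {}
for s in range(4):
    demo_actions = actions[states == s]
    policy[s] = int(np.bincount(demo_actions).argmax())

# The cloned policy now reproduces the demonstrator's behaviour
# without ever receiving a reward signal from the environment.
print(policy)  # {0: 0, 1: 1, 2: 0, 3: 1}
```

In a real setting, the lookup table would be replaced by a neural network trained on the 60 million recorded state-action pairs, but the principle is the same: the demonstrations supply a strong prior that random exploration would have to discover from scratch.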
Reinforcement-learning techniques wouldn’t stand a chance in this competition on their own, says Guss.
Working at random, an AI might succeed only in chopping down a tree or two in the eight-million-step limit of the competition — and that is just one of the prerequisites for creating an iron pickaxe to mine diamonds in the game.
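A toy calculation illustrates why pure random exploration struggles. The action-space size and sequence length below are made-up numbers for illustration, not MineRL's actual figures: if success requires a specific sequence of k correct actions out of n choices, a random agent's per-episode success probability shrinks exponentially.

```python
# Hypothetical numbers, chosen only to show the exponential blow-up.
n_actions = 10   # assumed size of the action space
k_steps = 12     # assumed length of the required action sequence

# Probability that a uniformly random agent emits the exact sequence.
p_success = (1 / n_actions) ** k_steps
# Expected number of attempts before the first success.
expected_tries = 1 / p_success

print(f"p(success per try) = {p_success:.1e}")   # 1.0e-12
print(f"expected tries     = {expected_tries:.1e}")  # 1.0e+12
```

Against odds like these, an eight-million-step budget is nowhere near enough, which is why the competition leans on human demonstrations instead.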
Such research could ultimately help to train AI so that it can interact better with humans in a wide range of situations, as well as navigate environments that are filled with uncertainty and complexity.
The organisers of the MineRL contest stress the need to explore approaches beyond what reinforcement learning alone can do. Especially for the ambiguous and elusive goal of human-like intelligence in machines, they believe imitation learning is a strong contender.