Researchers at OpenAI have trained a neural network to play Minecraft using Video PreTraining (VPT): pretraining on a massive unlabeled video dataset of human Minecraft play, combined with only a small amount of labeled contractor data. The team first gathered a small dataset of contractors playing Minecraft, recording both the video and the actions the contractors took, such as keypresses and mouse movements. With this data, they trained an inverse dynamics model (IDM) that predicts the action being taken at each step in a video.
Because the IDM can use both past and future frames to infer the action at each step, its task is much easier, and thus requires far less data, than behavioral cloning, which must predict actions from past frames alone and therefore has to infer what the player intends to do and how to accomplish it. The team then used the trained IDM to label a much larger dataset of online videos, from which an agent learned to act via behavioral cloning.
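The key difference between the two prediction problems can be sketched as a difference in context windows: the IDM conditions on frames both before and after the current step, while a behavioral-cloning policy may only look backward. The window size and string frames below are illustrative stand-ins, not values from the VPT paper.

```python
# Sketch: non-causal (IDM) vs. causal (behavioral cloning) context.
# Frames are represented as plain strings for illustration only.

def idm_context(frames, t, k=2):
    """IDM context: frames from k steps before to k steps after step t."""
    lo, hi = max(0, t - k), min(len(frames), t + k + 1)
    return frames[lo:hi]

def bc_context(frames, t, k=2):
    """BC context: past frames only, up to and including step t."""
    lo = max(0, t - k)
    return frames[lo:t + 1]

frames = ["f0", "f1", "f2", "f3", "f4", "f5"]
print(idm_context(frames, 3))  # ['f1', 'f2', 'f3', 'f4', 'f5']
print(bc_context(frames, 3))   # ['f1', 'f2', 'f3']
```

Seeing the consequences of an action (the future frames) makes labeling it far less ambiguous, which is why the IDM needs so much less labeled data than a from-scratch policy.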
OpenAI chose to validate the method in Minecraft because it (1) is one of the most actively played video games in the world and thus has a wealth of freely available video data and (2) is open-ended with a wide variety of things to do, similar to real-world applications such as computer usage.
Trained on 70,000 hours of IDM-labeled online video, the behavioral cloning model (the “VPT foundation model”) accomplished tasks in Minecraft that are nearly impossible to achieve with reinforcement learning from scratch. It learned to chop down trees to collect logs, craft those logs into planks, and then craft those planks into a crafting table; this sequence takes a human proficient in Minecraft approximately 50 seconds or 1,000 consecutive game actions.
According to OpenAI, it is far more effective to use the labeled contractor data to train an IDM (as part of the VPT pipeline) than to directly train a BC foundation model from that same small contractor dataset. To validate this hypothesis, the team trained foundation models on increasing amounts of data, from 1 to 70,000 hours. Models trained on 2,000 hours of data or less used the contractor data with ground-truth labels that were originally collected to train the IDM, while those trained on more used internet data labeled by the IDM. The team then fine-tuned each foundation model on a downstream house-building dataset and compared performance.
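The pipeline described above amounts to a pseudo-labeling scheme: the small labeled set trains the IDM, which then labels the large unlabeled corpus for behavioral cloning. The sketch below uses toy stand-in "models" (a simple frame-to-action lookup); real IDM training and the dataset names are omitted, and the frame/action strings are hypothetical.

```python
# Hedged sketch of a VPT-style pseudo-labeling pipeline.
# The "IDM" here is a toy lookup table standing in for a learned model.

def train_idm(labeled_clips):
    """Stand-in for IDM training: learn a frame -> action mapping
    from the small contractor dataset with ground-truth labels."""
    table = {frame: action for frame, action in labeled_clips}
    return lambda frame: table.get(frame, "no_op")

def pseudo_label(idm, unlabeled_frames):
    """Label every frame of the large unlabeled corpus with the IDM."""
    return [(frame, idm(frame)) for frame in unlabeled_frames]

# Small contractor dataset with ground-truth actions (toy scale).
contractor = [("tree_in_view", "attack"), ("log_collected", "open_inventory")]
idm = train_idm(contractor)

# Much larger unlabeled internet-video corpus (toy scale).
internet = ["tree_in_view", "log_collected", "tree_in_view"]
dataset = pseudo_label(idm, internet)
# `dataset` now feeds standard behavioral cloning on the large corpus.
```

The payoff is leverage: a fixed labeling budget (the contractor hours) unlocks training signal from a corpus orders of magnitude larger, which is why the 70,000-hour IDM-labeled models outperform models trained directly on the contractor data.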
VPT paves the path toward agents that learn to act by watching the vast number of videos on the internet. Compared to generative video modeling or contrastive methods, which would only yield representational priors, VPT offers the exciting possibility of directly learning large-scale behavioral priors in domains beyond language. While the experiments are limited to Minecraft, the game is very open-ended and its native human interface (mouse and keyboard) is generic, so the results bode well for other similar domains, e.g., computer usage.