The emerging domain of Embodied AI has proved immensely beneficial in situations where a robotic agent must learn to complete tasks by interacting with its environment and making its own observations. This capability has driven substantial growth across deep reinforcement learning, computer vision, NLP and robotics. However, training a model in one environment and testing it in another has remained a real struggle; thus researchers at the Allen Institute for AI introduced AllenAct, a flexible learning framework designed around the requirements of Embodied AI research.
Designed to support a growing collection of embodied environments, AllenAct is, according to the research paper, open-source and available in beta, supporting multiple training environments. In the hope of making Embodied AI more accessible to new researchers, it comes with pre-trained models, extensive documentation, tutorials and starter code.
Overview Of The AllenAct Framework
The advent of Embodied AI has led to significant developments in simulated environments involving robot-object interaction. These include robots performing visual navigation, answering questions, completing tasks, following instructions, predicting the future, grasping objects and collaborating with other agents.
However, this emergence has brought several challenges: replicating results across tasks and datasets, understanding which components of a system matter, long ramp-up times for new researchers, enormous training costs, and so on. That is where AllenAct comes into the picture, written in Python on top of PyTorch with a strong focus on modularity. According to the researchers, while several open-source reinforcement learning libraries and frameworks exist, each lacks features necessary for Embodied AI research.
According to the researchers, developing software for artificial intelligence requires a balance between the ability to add new functionality and the ability to exploit existing functionality. Traditional framework design makes Embodied AI research difficult, because such research demands frequent experimentation with the software itself. That is why, for AllenAct, the researchers focused on modularity and flexibility, allowing users to change hyperparameters and model architectures and to add new training procedures.
In addition, training sophisticated agents often requires a structured pipeline in which the agent is first trained with imitation learning, followed by reinforcement learning for further refinement. AllenAct therefore puts the concept of a training pipeline at the core of the framework, as a collection of sequential pipeline stages. Each stage defines the losses to be used and the length of training; during training, AllenAct runs through these stages in order and updates the agent accordingly.
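The staged-pipeline idea can be sketched in a few lines of plain Python. This is an illustrative sketch only: the `PipelineStage` and `run_pipeline` names here are hypothetical stand-ins for the concept, not AllenAct's actual API.

```python
# Illustrative sketch of a staged training pipeline: imitation learning first,
# then RL fine-tuning. Names (PipelineStage, run_pipeline) are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PipelineStage:
    name: str
    loss_names: List[str]  # losses active during this stage
    max_steps: int         # length of training for this stage

def run_pipeline(stages: List[PipelineStage],
                 step_fn: Callable[[PipelineStage, int], None]) -> int:
    """Run each stage in sequence, calling step_fn once per training step."""
    total = 0
    for stage in stages:
        for step in range(stage.max_steps):
            step_fn(stage, step)  # e.g. compute the stage's losses, update the agent
            total += 1
    return total

stages = [
    PipelineStage("imitation", ["imitation_loss"], max_steps=3),
    PipelineStage("rl_finetune", ["ppo_loss"], max_steps=2),
]
log = []
steps = run_pipeline(stages, lambda s, i: log.append(s.name))
# 5 steps in total: 3 imitation steps, then 2 RL steps
```

The key design point this mirrors is that each stage, not the trainer, declares which losses are active and for how long, so swapping the curriculum means editing the stage list rather than the training loop.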
Features Of AllenAct
AllenAct supports diverse embodied environments, and the tasks defined within them, including Habitat, iTHOR and RoboTHOR. It also supports MiniGrid, which serves as a fast sandbox for algorithm development. The researchers also plan to extend support to the SAPIEN and ThreeDWorld environments for robotic manipulation.
AllenAct also supports a variety of decentralised, distributed, synchronous, on-policy algorithms, including DD-PPO, DD-A2C and PPO, as well as imitation learning, DAgger and offline training. It enables training with a fixed, external dataset, which helps in supervising the robotic agents.
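To make the algorithm list more concrete, the sketch below shows the standard PPO clipped surrogate objective (the loss underlying both PPO and DD-PPO). This is the generic textbook formula for a single sample, not AllenAct's implementation.

```python
# Generic PPO clipped surrogate loss for one (state, action) sample.
# ratio = pi_new(a|s) / pi_old(a|s); advantage is the estimated advantage.
def ppo_clip_loss(ratio: float, advantage: float, eps: float = 0.2) -> float:
    """Negated clipped surrogate objective, min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    return -min(ratio * advantage, clipped * advantage)

# With a large ratio and positive advantage, clipping caps the update:
loss = ppo_clip_loss(ratio=1.5, advantage=1.0, eps=0.2)
# objective = min(1.5 * 1.0, 1.2 * 1.0) = 1.2, so loss = -1.2
```

The clip keeps the new policy from moving too far from the old one in a single update, which is what makes large-scale distributed variants such as DD-PPO stable.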
To support a broader range of Embodied AI research, AllenAct can also be used to train multi-agent systems. Moreover, to lower the burden of visualisation, the framework is equipped with a number of well-supported visualisation utilities. According to the researchers, these utilities can be extended to the embodied environments and tasks incorporated in the framework.
These utilities are exposed through a simple plug-in-based interface, which generates different types of visualisations in Tensorboard for a given task.
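A plug-in-based interface of this kind can be sketched as a small registry of visualiser callbacks. The registry, decorator, and visualiser names below are hypothetical illustrations of the pattern, not AllenAct's actual plug-in API.

```python
# Illustrative plug-in registry: visualisers register themselves by name and
# are all invoked for an episode. Names here are hypothetical, not AllenAct's.
viz_registry = {}

def register_viz(name: str):
    """Decorator that adds a visualiser function to the registry."""
    def deco(fn):
        viz_registry[name] = fn
        return fn
    return deco

@register_viz("trajectory")
def plot_trajectory(episode: dict) -> str:
    # In a real system this would render a top-down path plot.
    return f"trajectory with {len(episode['positions'])} points"

@register_viz("egocentric_video")
def render_video(episode: dict) -> str:
    # In a real system this would assemble first-person frames into a video.
    return f"video of {episode['num_frames']} frames"

episode = {"positions": [(0, 0), (1, 0), (1, 1)], "num_frames": 30}
outputs = {name: fn(episode) for name, fn in viz_registry.items()}
```

The appeal of the pattern is that adding a new visualisation is one decorated function; the training code that loops over the registry never changes.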
AllenAct also comes with ready-made tutorials, for example on training a PointNav model in RoboTHOR and on switching environments mid-experiment. To encourage reproducibility, it includes several pre-trained models for various tasks, and it can combine training with multiple losses through its TrainingPipeline abstraction.
Lastly, AllenAct makes it easy to visualise the first- and third-person views of agents, as well as intermediate model tensors, integrating these into Tensorboard.
By open-sourcing AllenAct, the Allen Institute for AI aims to address the challenges Embodied AI research has been facing amid this pandemic. Not only does the framework provide a high degree of support for reinforcement learning tasks, it also encourages reusable, reproducible research in the Embodied AI domain.
Read the whole paper here.