“Researchers seek new and more stimulating environments to tackle RL challenges with AndroidEnv.”
Alphabet Inc.’s DeepMind has recently introduced AndroidEnv, an open-source platform for Reinforcement Learning (RL) built on top of the Android ecosystem. According to the team, AndroidEnv lets RL agents interact with the wide variety of apps and services that people use every day, through a familiar touchscreen interface.
Introducing AndroidEnv, an open-ended platform for training agents on Android apps and games. With a universal touchscreen interface, access to the entire OS, and a number of ready-to-use tasks, AndroidEnv is a promising domain for RL research: https://t.co/MZIYOwxlnT pic.twitter.com/godk9IEKx5
— DeepMind (@DeepMind) May 28, 2021
AndroidEnv has a universal touchscreen interface that enables the empirical evaluation of general-purpose RL algorithms designed to tackle a wide variety of tasks. The agent-environment interaction in AndroidEnv matches ‘a user and a real device’: the screen pixels constitute the observations, the action space is defined by touchscreen gestures, the interaction is real-time, and actions are executed asynchronously while the environment runs at its own time scale. With these features, agent performance can be realistically compared with that of humans. Moreover, environments that behave as closely as possible to their real-world counterparts also make production deployment easier, with no added work to adapt to different interfaces or data distributions.
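As a concrete illustration, the sketch below shows how an agent might interact with an AndroidEnv instance through the dm_env-style interface the library exposes: observations are screen pixels, and each action pairs a touch type with a normalised screen coordinate. The spec key names (`action_type`, `touch_position`, `pixels`) follow the published description of AndroidEnv, but treat them as assumptions and check them against the installed version.

```python
import numpy as np

def random_touch_action(action_spec):
    """Samples one action: a touch type plus a normalised (x, y) screen position."""
    return {
        # Discrete touch type, e.g. TOUCH / LIFT / REPEAT.
        'action_type': np.random.randint(action_spec['action_type'].num_values),
        # Continuous screen coordinate in [0, 1] x [0, 1].
        'touch_position': np.random.uniform(size=(2,)).astype(np.float32),
    }

def run_random_episode(env, max_steps=1_000):
    """Runs one episode with random touches, returning the accumulated reward."""
    timestep = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        timestep = env.step(random_touch_action(env.action_spec()))
        pixels = timestep.observation['pixels']  # RGB screen capture; a learning agent would condition on this
        total_reward += timestep.reward or 0.0
        if timestep.last():
            break
    return total_reward
```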
Most sub-domains of AI, and RL especially, suffer from a lack of real-world applications. Even when a use case presents itself as a suitable avenue for these algorithms, the lack of experimental data makes their usage questionable. With around two billion Android devices in use, DeepMind looks to make its RL research more robust and realistic. “The sheer number of applications, built for a multitude of important aspects of human life, ranging from education and business to communication and entertainment, provides virtually unlimited challenges for RL research,” explained the team behind AndroidEnv.
An agent in AndroidEnv makes decisions based on the images displayed on the screen and acts through touchscreen gestures. Smartphone screens thereby become a playground for RL agents, which mimic human gestures such as swiping and typing to book a cab or play chess. The 2D screen opens up a vast space of possibilities for the agent, which makes the problem challenging: the agent has to learn what kind of interaction a given application expects, how the pixels change in response, and how nearby screen locations are spatially correlated, for example what to do with a drop-down button.
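Gestures such as swipes are not atomic actions: they emerge from sequences of touches issued over consecutive timesteps. The helper below is a hypothetical illustration of that idea, interpolating TOUCH actions along a line and finishing with a LIFT; the action-type constants are placeholders, not the library’s own values.

```python
import numpy as np

# Placeholder values for illustration only; AndroidEnv defines its own action-type enum.
TOUCH, LIFT = 0, 1

def swipe_actions(start, end, num_steps=10):
    """Builds a swipe as a sequence of TOUCH actions along a line, ended by a LIFT.

    `start` and `end` are (x, y) coordinates normalised to [0, 1].
    """
    actions = []
    for t in np.linspace(0.0, 1.0, num_steps):
        position = (1.0 - t) * np.asarray(start) + t * np.asarray(end)
        actions.append({
            'action_type': np.int32(TOUCH),
            'touch_position': position.astype(np.float32),
        })
    # Lifting the finger completes the gesture.
    actions.append({
        'action_type': np.int32(LIFT),
        'touch_position': np.asarray(end, dtype=np.float32),
    })
    return actions

# Usage: feed the actions to the environment one step at a time, e.g.
#   for action in swipe_actions((0.5, 0.8), (0.5, 0.2)):
#       timestep = env.step(action)
```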
How RL agents can be deployed on smartphones (illustrated in the sketch after this list):
- Initialise the environment by installing particular applications on the device.
- Reset an episode upon receiving a particular message from the device or app, or upon reaching a certain time limit.
- At the start of each episode, launch a given app and clear its cache.
- Determine rewards from log messages emitted by the applications.
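To make those steps concrete, the sketch below walks through the same lifecycle using plain `adb` commands against a device or emulator. This is an illustration of the steps, not AndroidEnv’s internal implementation; the APK path, package name, activity, and log format are hypothetical.

```python
import re
import subprocess

APK_PATH = 'my_task_app.apk'                         # hypothetical APK providing the task
PACKAGE = 'com.example.mytask'                       # hypothetical package name
ACTIVITY = 'com.example.mytask/.MainActivity'        # hypothetical launch activity
REWARD_PATTERN = re.compile(r'reward:\s*([-\d.]+)')  # hypothetical reward log format

def adb(*args):
    """Runs an adb command and returns the completed process."""
    return subprocess.run(['adb', *args], capture_output=True, text=True, check=True)

def initialise():
    """Step 1: install the task application on the device."""
    adb('install', '-r', APK_PATH)

def reset_episode():
    """Step 3: at the start of each episode, clear app data and relaunch the app."""
    adb('shell', 'pm', 'clear', PACKAGE)
    adb('shell', 'am', 'start', '-n', ACTIVITY)
    adb('logcat', '-c')  # drop old log messages before the episode begins

def poll_reward():
    """Step 4: derive rewards from log messages emitted by the application.

    An end-of-episode marker (step 2) could be detected from the same log stream,
    alongside a wall-clock or step limit enforced by the caller.
    """
    logs = adb('logcat', '-d').stdout
    return sum(float(m.group(1)) for m in REWARD_PATTERN.finditer(logs))
```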
DeepMind’s AndroidEnv is, in a way, similar to what OpenAI tried five years ago with Universe, a software platform designed to measure and train RL agents through games, websites and other applications on a screen. Thanks to AI, computers today can see, hear, and translate languages with unprecedented accuracy. However, these systems are still categorised as “Narrow AI”: they lack the ability to do anything sensible outside of the domain they are trained in. In a standard training regime, the OpenAI team wrote back in 2016, agents are initialised from scratch and run through millions of random trials, repeating the actions that fetch them rewards. For generally intelligent agents to flourish, they must experience a wide repertoire of tasks so they can develop problem-solving strategies that can be efficiently reused in new tasks.
According to Andrej Karpathy, Director of AI at Tesla, automation in the software realm (the “world of bits”) is still a relatively overlooked platform for AI development. Karpathy predicts that bringing RL into real-world environments such as Android devices could lead to AIs speaking to each other (via audio) in English, or using UI/UX interfaces originally built for humans, in both software and hardware. “Seems quite likely that AIs of the future operate on “human native” interfaces instead of purpose-built APIs despite the ridiculous inefficiency,” tweeted Karpathy.
DeepMind has open-sourced AndroidEnv as a Python library designed to provide a flexible platform for defining custom tasks on top of the Android operating system, including any Android application. To experiment with AndroidEnv, you need to install Android Studio and then set up an Android Virtual Device (AVD).
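Once the AVD exists, loading the environment from Python looks roughly like the sketch below. The `android_env.load` entry point and its keyword arguments reflect the project’s documented loader, but the exact names and the paths vary by version and machine, so treat them as assumptions and consult the repository’s README.

```python
import android_env

# Paths and names below are placeholders for a local setup; adjust to your machine.
env = android_env.load(
    avd_name='my_avd',                          # AVD created in Android Studio
    android_avd_home='~/.android/avd',
    android_sdk_root='~/Android/Sdk',
    emulator_path='~/Android/Sdk/emulator/emulator',
    adb_path='~/Android/Sdk/platform-tools/adb',
    task_path='path/to/task.textproto',         # task definition to run
)

try:
    timestep = env.reset()
    print(env.action_spec())       # touch type + normalised screen coordinate
    print(env.observation_spec())  # screen pixels, among other fields
finally:
    env.close()
```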
Get started with AndroidEnv here.