Reinforcement Learning Comes To Android Phones

  • AndroidEnv has a universal touchscreen interface that enables the empirical evaluation of general purpose RL algorithms designed to tackle a wide variety of tasks

“Researchers seek new and more stimulating environments to tackle RL challenges with AndroidEnv.”

Alphabet Inc.’s DeepMind has recently introduced AndroidEnv, an open-source platform for Reinforcement Learning (RL) built on top of the Android ecosystem. According to the team, AndroidEnv lets RL agents interact, through a widely used touchscreen interface, with the wide variety of apps and services that humans use every day.

AndroidEnv has a universal touchscreen interface that enables the empirical evaluation of general purpose RL algorithms designed to tackle a wide variety of tasks. The agent-environment interaction in AndroidEnv matches ‘a user and a real device’: the screen pixels constitute the observations, the action space is defined by touchscreen gestures, the interaction is real-time, and actions are executed asynchronously, while the environment runs at its own time scale. With these features, agent performance can be realistically compared to humans. Moreover, environments that behave as closely as possible to their real-world counterparts also facilitate production deployment, without added work to adapt to different interfaces or data distributions.
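
The interaction loop described above can be sketched as follows. AndroidEnv itself exposes a dm_env-style Python API, but the stub environment and the `action_type`/`touch_position` field names here are illustrative assumptions rather than the library’s exact interface.

```python
import random

# Illustrative stub standing in for AndroidEnv's real device interface;
# the observation/action field names are assumptions, not the actual API.
class StubTouchscreenEnv:
    def reset(self):
        # Observations are the raw screen pixels (here a tiny placeholder).
        return {"pixels": [[0, 0, 0]]}

    def step(self, action):
        # On a real device the gesture executes asynchronously while the
        # environment keeps running at its own time scale; the stub simply
        # validates the action and returns the next observation.
        assert action["action_type"] in ("TOUCH", "LIFT", "REPEAT")
        x, y = action["touch_position"]
        assert 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0
        return {"pixels": [[0, 0, 0]]}, 0.0, False  # obs, reward, done

def random_touch_agent(_observation):
    # Touch a random point on the screen, in normalised coordinates.
    return {"action_type": "TOUCH",
            "touch_position": (random.random(), random.random())}

env = StubTouchscreenEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(10):
    obs, reward, done = env.step(random_touch_agent(obs))
    total_reward += reward
    if done:
        obs = env.reset()
```

A real agent would replace `random_touch_agent` with a learned policy mapping pixels to gestures, but the loop structure stays the same.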

Most sub-domains of AI, and RL especially, suffer from a lack of real-world applications. Even when a use case presents itself as a suitable avenue for these algorithms, the lack of experimental data makes their usage questionable. With around two billion Android devices in use, DeepMind looks to make its RL research more robust and more grounded in the real world. “The sheer number of applications, built for a multitude of important aspects of human life, ranging from education and business to communication and entertainment, provides virtually unlimited challenges for RL research,” explained the team behind AndroidEnv.

Image credits: DeepMind

An agent in AndroidEnv makes decisions based on the images displayed on the screen and acts by mimicking touchscreen actions. The screen of a smartphone thus becomes a playground for RL agents, which reproduce human gestures such as swiping and typing to book a cab or play chess. The 2D screen opens up near-infinite possibilities for the agent, which makes the setting all the more challenging: the agent has to account for the types of taps a given application expects, for changing pixels, and for spatial correlations, for example knowing what to do with a drop-down button.
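
A gesture such as a swipe is, at bottom, a timed sequence of touch events. A minimal sketch (the action field names and normalised-coordinate convention are assumed for illustration, not taken from the library):

```python
def make_swipe(start, end, num_steps=8):
    """Build a swipe as a sequence of touch events interpolated between
    two normalised screen coordinates (field names are illustrative)."""
    (x0, y0), (x1, y1) = start, end
    actions = []
    for i in range(num_steps):
        t = i / (num_steps - 1)  # interpolation fraction from 0.0 to 1.0
        actions.append({"action_type": "TOUCH",
                        "touch_position": (x0 + t * (x1 - x0),
                                           y0 + t * (y1 - y0))})
    # Lift the finger at the end point to complete the gesture.
    actions.append({"action_type": "LIFT", "touch_position": (x1, y1)})
    return actions

# Swipe upward along the middle of the screen.
swipe = make_swipe((0.5, 0.8), (0.5, 0.2))
```

Feeding these events to the environment one step at a time is what makes the action space gestural rather than a flat set of discrete buttons.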

How RL agents can be deployed on smartphones:

  • Initialise the environment by installing particular applications on the device. 
  • Reset an episode upon receiving a particular message from the device or app, or upon reaching a certain time limit. 
  • Once an episode is triggered, launch a given app and clear the cache.
  • Find rewards by considering log messages implemented in applications.
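
The steps above can be sketched as a simple task lifecycle. The helper class, the package name, and the log format below are hypothetical placeholders; only the shape of the flow (install, reset, launch with a cleared cache, parse rewards from logs) comes from the list above.

```python
import re

# Reward extraction: find rewards by matching log messages that the
# application emits (the "reward: <number>" format is an assumption).
REWARD_PATTERN = re.compile(r"reward: ([-+]?[0-9]*\.?[0-9]+)")

def parse_reward(log_line):
    match = REWARD_PATTERN.search(log_line)
    return float(match.group(1)) if match else 0.0

class TaskLifecycle:
    def __init__(self, app_package):
        self.app_package = app_package
        self.installed = False
        self.running = False

    def initialise(self):
        # Initialise the environment by installing the application.
        self.installed = True

    def reset(self, time_limit_s=180):
        # Reset an episode (on a device/app message or a time limit),
        # then launch the app with a cleared cache.
        assert self.installed, "install the app before starting episodes"
        self.time_limit_s = time_limit_s
        self.running = True

task = TaskLifecycle("com.example.chess")  # hypothetical package name
task.initialise()
task.reset()
reward = parse_reward("I/ChessApp: reward: 1.5")  # hypothetical log line
```

In the actual library these steps are declared in task configuration files rather than hand-written code, but the lifecycle they drive is the one sketched here.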

DeepMind’s AndroidEnv is, in a way, similar to what OpenAI attempted five years ago with Universe, a software platform designed to measure and train RL agents through games, websites and other on-screen applications. Thanks to AI, computers can now see, hear, and translate languages with unprecedented accuracy. However, these systems are still categorised as “Narrow AI”: they lack the ability to do anything sensible outside the domain they were trained in. In a standard training regime, wrote the OpenAI team back in 2016, agents are initialised from scratch and run through millions of random trials, learning to repeat the actions that fetch them rewards. For generally intelligent agents to flourish, they must experience a wide repertoire of tasks, so that they can develop problem-solving strategies that can be efficiently reused on new tasks.

According to Andrej Karpathy, Director of AI at Tesla, automation in the software realm (the “world of bits”) is still a relatively overlooked AI development platform. Karpathy predicts that bringing RL into real-world environments such as Android devices could lead to AIs speaking to each other in English (via audio), or using UI/UX interfaces originally built for humans, in both software and hardware. “Seems quite likely that AIs of the future operate on “human native” interfaces instead of purpose-built APIs despite the ridiculous inefficiency,” tweeted Karpathy.

DeepMind has open-sourced AndroidEnv as a Python library designed to provide a flexible platform for defining custom tasks on top of the Android Operating System, including any Android application. To experiment with AndroidEnv, you need to install Android Studio and then set up an Android Virtual Device (AVD).
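
A minimal setup sketch using the Android SDK command-line tools; the system image and package names are illustrative examples, and the project’s README should be consulted for the exact, current steps.

```shell
# Download a system image and create a virtual device for the agent to
# control (the image version and AVD name are examples).
sdkmanager "system-images;android-29;google_apis;x86"
avdmanager create avd -n rl_device \
    -k "system-images;android-29;google_apis;x86"

# Install the AndroidEnv Python library from the project repository.
pip install git+https://github.com/deepmind/android_env
```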

Get started with AndroidEnv here.


Copyright Analytics India Magazine Pvt Ltd
