
Reinforcement Learning Comes To Android Phones


“Researchers seek new and more stimulating environments to tackle RL challenges with AndroidEnv.”

Alphabet Inc.’s DeepMind has recently introduced AndroidEnv, an open-source platform for Reinforcement Learning (RL) built on top of the Android ecosystem. According to the team, AndroidEnv lets RL agents interact with a wide variety of apps and services commonly used by humans, through a widely used touchscreen interface.

AndroidEnv has a universal touchscreen interface that enables the empirical evaluation of general-purpose RL algorithms designed to tackle a wide variety of tasks. The agent-environment interaction in AndroidEnv matches that of a user and a real device: the screen pixels constitute the observations, the action space is defined by touchscreen gestures, the interaction is real-time, and actions are executed asynchronously while the environment runs at its own time scale. With these features, agent performance can be realistically compared to that of humans. Moreover, environments that behave as closely as possible to their real-world counterparts also ease production deployment, since no extra work is needed to adapt to different interfaces or data distributions.
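For concreteness, here is a minimal random-agent loop written against the dm_env-style interface described above. This is a sketch, not the definitive API: it assumes env is an AndroidEnv instance created with android_env.load (shown near the end of this article), and the spec names follow the project's documentation at the time of release.

    import numpy as np

    # AndroidEnv follows DeepMind's dm_env interface: the observation is the
    # raw screen ('pixels'), and an action is a dict pairing an action type
    # (e.g. TOUCH / LIFT / REPEAT) with a normalised (x, y) touch position.
    action_spec = env.action_spec()

    timestep = env.reset()
    for _ in range(1000):  # cap the number of steps for this sketch
        action = {
            'action_type': np.int32(
                np.random.randint(action_spec['action_type'].num_values)),
            'touch_position': np.random.uniform(size=2).astype(np.float32),
        }
        timestep = env.step(action)  # executed asynchronously on the device
        if timestep.last():          # episode ended: app signal or time limit
            timestep = env.reset()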

Most sub-domains of AI, and RL especially, suffer from a lack of real-world applications. Even when a use case presents itself as a suitable avenue for these algorithms, the lack of experimental data makes their usage questionable. With around two billion Android devices in use, DeepMind looks to make its RL research more robust and grounded in reality. “The sheer number of applications, built for a multitude of important aspects of human life, ranging from education and business to communication and entertainment, provides virtually unlimited challenges for RL research,” explained the team behind AndroidEnv.

Image credits: DeepMind

An agent in AndroidEnv makes decisions based on the images displayed on the screen and acts through touchscreen gestures. Smartphone screens effectively become a playground for RL agents, which mimic human gestures such as swiping and typing to book a cab or play chess. The 2D screen opens up near-infinite possibilities for the agent, which is precisely what makes the setting challenging: the agent has to account for the kinds of input a given application expects, for changes in pixels, and for spatial correlations, and it should know, for instance, what to do with a drop-down button.

How RL agents can be deployed on smartphones:

  • Initialise the environment by installing particular applications on the device. 
  • Reset an episode upon receiving a particular message from the device or app, or upon reaching a certain time limit. 
  • Once an episode is triggered, launch a given app and clear the cache.
  • Determine rewards by parsing log messages emitted by the applications, as sketched in the task file below.
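In practice, all four steps are declared in a per-task configuration file that the environment consumes. The sketch below is illustrative only: the authoritative schema lives in AndroidEnv's task.proto, and the field names, package names and regexes here are approximations of the published examples rather than verbatim copies.

    # my_task.textproto -- illustrative sketch; consult AndroidEnv's
    # task.proto for the authoritative schema and field names.
    id: "example_chess_task"

    # Initialise: install the application on the device.
    setup_steps: [{
      adb_request: {
        install_apk: { filesystem: { path: "/path/to/chess.apk" } }
      }
    }]

    # Reset: force-stop the app so each episode starts from a clean state.
    reset_steps: [{
      adb_request: { force_stop: { package_name: "com.example.chess" } }
    }]

    # Rewards and episode boundaries are parsed from the app's log messages.
    log_parsing_config: {
      filter_regexes: "AndroidRLTask"
      log_regexes: {
        reward: "^reward: ([-+]?[0-9]*\\.?[0-9]+)$"
        episode_end: "^episode end$"
      }
    }

    # A reset is also triggered when an episode hits this time limit.
    max_duration_sec: 300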

DeepMind’s AndroidEnv is, in a way, similar to what OpenAI attempted five years ago with Universe, a software platform designed to measure and train RL agents through games, websites and other applications on a screen. Thanks to AI, computers can today see, hear, and translate languages with unprecedented accuracy. However, these systems are still categorised as “narrow AI”: they lack the ability to do anything sensible outside the domain they were trained in. In a standard training regime, the OpenAI team wrote back in 2016, agents are initialised from scratch and run through millions of random trials, learning to repeat the actions that fetch them rewards. For generally intelligent agents to flourish, they must experience a wide repertoire of tasks, so that they can develop problem-solving strategies that can be efficiently reused on new tasks.

According to Andrej Karpathy, Director of AI at Tesla, automation in the software realm (the “world of bits”) is still a relatively overlooked AI development platform. Karpathy predicts that bringing RL into real-world environments such as Android devices can lead to AIs speaking to each other (via audio) in English, or using UI/UX interfaces originally built for humans, in both software and hardware. “Seems quite likely that AIs of the future operate on “human native” interfaces instead of purpose-built APIs despite the ridiculous inefficiency,” tweeted Karpathy.

DeepMind has open-sourced AndroidEnv as a Python library designed to provide a flexible platform for defining custom tasks on top of the Android Operating System, including any Android application. To experiment with AndroidEnv, you need to install Android Studio and then set up an Android Virtual Device (AVD).
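As a minimal setup sketch, assuming the argument names from the project's README at the time of release (all paths are placeholders for a local SDK and AVD install):

    import android_env

    # Placeholder paths: point these at your own Android SDK and AVD.
    env = android_env.load(
        avd_name='my_avd',                       # the AVD created in Android Studio
        android_avd_home='~/.android/avd',
        android_sdk_root='~/Android/Sdk',
        emulator_path='~/Android/Sdk/emulator/emulator',
        adb_path='~/Android/Sdk/platform-tools/adb',
        task_path='/path/to/my_task.textproto',  # the task definition to run
    )

The returned object behaves as a standard dm_env environment, so the random-agent loop sketched earlier runs against it unchanged.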

Get started with AndroidEnv on GitHub: https://github.com/deepmind/android_env
