How Apple Tuned Up Federated Learning For Its iPhones

“Apple ups its privacy game with federated systems on iPhones.”

Apply makeup, grow a beard or sit in the dark, and your iPhone can still recognise you, no matter how different you look from your passport photo. The technology behind Face ID is one of the most advanced hardware and software solutions from the Cupertino giant. The TrueDepth camera captures accurate face data by projecting and analysing over 30,000 invisible dots to create a depth map of the user's face. A portion of the neural engine of the A13 Bionic chip, protected within the Secure Enclave, transforms the depth map and infrared image into a mathematical representation and compares that representation to the enrolled facial data.

On-device machine learning comes with a privacy challenge. The cameras and microphones on a phone record enough data to put an individual at great risk if the device is hacked, and apps often expose search mechanisms for information retrieval or in-app navigation that depend on this personal data. Hence, smartphone makers like Apple have been exploring Federated Learning for several years now.

Overview of Federated Learning

(Source: Bonawitz et al.)

“Federated Learning functions on the approach of bringing the code to the data, instead of the data to the code.”

Federated Learning (FL) is a distributed machine learning approach that enables training on a large corpus of decentralised data residing on devices like mobile phones. Federated Learning techniques are used to train ML models for triggering the suggestion feature, as well as ranking the items that can be suggested in the current context.  


A typical Federated Learning Protocol (Source: Google AI)

  • Devices check in with the federated learning server.
  • The server reads the current model checkpoint from storage.
  • The model is sent to the selected devices.
  • Each device trains the model on its local data and sends the update back to the server.
  • The server aggregates these updates into a global model and writes it back to storage (a minimal sketch of this aggregation step follows the list).
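
The last two steps are essentially the federated averaging pattern. Below is a minimal Python sketch of one such round, assuming a generic `local_train` callable and plain NumPy weight arrays; it illustrates the protocol above rather than Apple's or Google's actual implementation.

```python
import numpy as np

def federated_round(global_weights, device_datasets, local_train):
    """One round of the protocol above, FedAvg-style (illustrative sketch).

    global_weights  : list of np.ndarray, the current global model checkpoint
    device_datasets : per-device datasets for the devices selected this round
    local_train     : hypothetical callable that trains a copy of the weights
                      on one device's data and returns the updated weights
    """
    updates, sizes = [], []
    for data in device_datasets:
        # Each selected device trains on its own data; the raw data never leaves it.
        local_weights = local_train([w.copy() for w in global_weights], data)
        updates.append(local_weights)
        sizes.append(len(data))

    # The server aggregates the updates, weighted by how much data each device used.
    total = sum(sizes)
    new_weights = [
        sum(n / total * u[i] for n, u in zip(sizes, updates))
        for i in range(len(global_weights))
    ]
    return new_weights  # written back to storage as the next checkpoint
```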

Federated Learning applies best in situations where the on-device data is more relevant than the data that exists on servers (for example, because the devices generate the data in the first place), is privacy-sensitive, or is otherwise undesirable or infeasible to transmit to servers. To address the limitations of federated learning, Apple researchers have experimented with federated systems that perform evaluation and tuning (FE&T).

How Apple Does It

“The conditions around our personalization led us to consider FE&T of on-device ML systems.”

Paulik et al.

Most of Apple's features are extensions of federated learning in a way. But the researchers, in their work, elaborate on why they have explored Federated Evaluation and Tuning (FE&T). Differentiating between the two, they write that Federated Learning (FL) learns the parameters of, at times, large global neural models and requires model evaluation on held-out federated data. In Federated Tuning (FT), by contrast, learning primarily occurs on the central server and is limited to a comparatively small set of personalization algorithm parameters that are evaluated across federated data. Within Apple's systems, federated learning has been applied to improving acoustic keyword trigger models and to federated learning of language models for an improved predictive keyboard and error-correction experience.

Applications around FE and FT account for a large share of system usage. FE, for instance, runs against on-device user interaction history, which significantly reduces turnaround times compared with live A/B experimentation. Federated Evaluation, wrote the researchers, can help quickly identify the most promising ML system or model candidates before exposing end users to those candidates via live A/B experimentation.
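
As a rough illustration of federated evaluation, the sketch below ranks candidate models using only metrics computed on each device's private interaction history; the candidate dictionary, the per-device evaluation callables and the weighting scheme are assumptions made for the example, not details from Apple's system.

```python
def federated_evaluation(candidates, device_eval_fns):
    """Rank candidate models by losses computed on-device (illustrative sketch).

    candidates      : dict mapping a candidate name to a model object
    device_eval_fns : list of callables; each runs on one device's private
                      interaction history and returns (loss, num_examples)
                      for a given model -- only these aggregates leave the device
    """
    results = {}
    for name, model in candidates.items():
        weighted_loss, total_examples = 0.0, 0
        for evaluate in device_eval_fns:
            loss, n = evaluate(model)      # computed locally on the device
            weighted_loss += loss * n
            total_examples += n
        results[name] = weighted_loss / max(total_examples, 1)
    # The lowest-loss candidate is the one worth promoting to live A/B experimentation.
    return min(results, key=results.get)
```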

“On Apple devices, system’s on-device components therefore do not center around a neural model training library for task execution. Instead, implementation of on-device task execution is delegated to application specific plug-ins that communicate with our system’s on-device task scheduling logic, data store and results reporting logic,” explained the researchers.

According to the ML researchers at Apple, processing on end-user devices, as opposed to server-based processing, is a valued approach to enabling end-user privacy. This strategy extends to many of Apple's machine-learned (ML) solutions, such as text prediction in keyboards. The ability to personalize towards a user's diction is highly desirable, as the end goal of ML-based personalization is an enhanced user experience.

Here’s how news personalization works on your iPhone:

  • Ground truth for on-device evaluation of news personalization is derived from user interactions with news content.
  • The device stores this information per article: a tap followed by a read is a positive label, and a tap without a read is a negative label.
  • The system defines a range of values for each personalization parameter.
  • During tuning task execution, the plug-in runs a randomized grid search, randomly generating configurations from those ranges.
  • Each configuration is applied to the personalization algorithm, which predicts the likelihood of a user reading an article.
  • The predictions are then compared with the ground-truth labels to compute a prediction loss for each randomly generated configuration (see the sketch after this list).
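
The following sketch shows what such an on-device randomized grid search could look like. The parameter names, their ranges and the `predict_read_prob` function are hypothetical stand-ins; the article only tells us that configurations are sampled at random, scored against the tap/read labels, and reported back as losses.

```python
import math
import random

# Hypothetical parameter ranges; the real parameters and ranges are not disclosed.
PARAM_RANGES = {
    "half_life_hours": (6.0, 168.0),      # time decay on an article's score
    "topic_affinity_weight": (0.0, 1.0),
    "recency_weight": (0.0, 1.0),
}

def random_config():
    """Sample one configuration uniformly from the parameter ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def log_loss(prob, label):
    eps = 1e-9
    return -(label * math.log(prob + eps) + (1 - label) * math.log(1 - prob + eps))

def tune_on_device(interactions, predict_read_prob, num_configs=20):
    """Randomized grid search run entirely on-device (illustrative sketch).

    interactions      : list of (article_features, label) pairs, where label is
                        1 for tap-and-read and 0 for tap-and-unread
    predict_read_prob : hypothetical personalization algorithm mapping
                        (article_features, config) to a predicted read probability
    """
    losses = {}
    for _ in range(num_configs):
        config = random_config()
        loss = sum(
            log_loss(predict_read_prob(features, config), label)
            for features, label in interactions
        ) / max(len(interactions), 1)
        losses[tuple(sorted(config.items()))] = loss
    # Only (configuration, loss) pairs are reported back, never the raw interactions.
    return losses
```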

Personalization of news reading, or of any other ML-based application, is central to Apple's core beliefs, and federated systems have therefore become critical to maintaining privacy standards in the quest to improve user experience. The personalization algorithms on Apple devices are governed by several parameters, such as the half-life of the time decay applied to an article's personalized score, and as news trends change, tuning these parameters becomes challenging. This is where FT comes in handy: it allows continuous adaptation of these parameters with quick turnaround times so that the most relevant content can be surfaced despite changing trends. The researchers report that using FT resulted in a 1.98% increase in daily article views and a 0.90% increase in daily time spent within the application, in two separate experiments.
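
To make the half-life parameter concrete: with exponential time decay, an article's personalized score halves every half-life period, so tuning that single number controls how quickly older stories fade from recommendations. The exact functional form Apple uses is not disclosed, so the snippet below is only an illustrative sketch.

```python
def decayed_score(base_score, age_hours, half_life_hours):
    """Exponential time decay: the score halves every `half_life_hours`."""
    return base_score * 0.5 ** (age_hours / half_life_hours)

# With a 24-hour half-life, a 48-hour-old article keeps 25% of its original score.
print(decayed_score(1.0, 48, 24))  # 0.25
```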

Find the full research here.

Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
