For a long time, Apple was written off by industry experts for falling behind in the AI race. The hardware-centric company also kept its research under wraps, with critics terming Apple’s AI efforts a “walled garden”. Much has changed since the recent iPhone X launch event, which, according to one research report, catapulted the Cupertino company past Google to the #1 position. While the FaceID facial recognition feature and the dual-core neural engine chip grabbed the most attention (the former for security concerns), we shine a light on image recognition technology and how it will pay off for Apple.
Apple’s frenetic investment in image recognition is improving the iPhone

Today, the iPhone 7 can run image recognition algorithms much faster than the best devices built on Google’s Android mobile operating system, emphasized Apple’s Senior Vice President of Software Engineering Craig Federighi. So why is Apple, which recently debuted its 3D facial recognition system FaceID, betting big on image recognition technology?
To begin with, there is tremendous business value in image recognition technology. Over the years, advances in image recognition have gone far beyond identifying photos in apps. The technology has made its way into everyday use across a range of use cases – auto-organising untagged photos into catalogues, autonomous car systems detecting large objects, gleaning insights from images shared on social media platforms, and now healthcare, where image recognition software sifts through MRIs and CT scans as accurately as a radiologist. IBM, for instance, uses image recognition technology to process massive quantities of medical images.
Image Recognition = New Opportunities
So, how does image recognition tech benefit Apple? The iPhone 7 can process more than 600 images per minute using a standard piece of image recognition software, faster than the Google Pixel and the Samsung Galaxy S8, Federighi reportedly said. The world’s most valued tech company is now aiming for perfection across its devices, and given that AI is going to play a key role in defining Apple’s software, we are likely to see more investment in image recognition technology.
Even though experts believe Apple was late to the AI party, it has still managed to best its competition when it comes to privacy. According to Apple executives, several privacy issues arise when photos are first uploaded to cloud computing systems. Tech behemoths Google and Facebook generally run image-recognition algorithms inside their cloud systems, which means photos are uploaded to the companies’ servers first. Even though there are detailed guidelines for data privacy, Google and Facebook have been known to monetize customer data widely. It is here that Apple sets itself apart from its competitors, keeping personal data under the user’s control: it processes user data on the phone itself to address privacy concerns.
How would image recognition translate into big bucks? With the new neural engine becoming part of the A11 processor, the AI chip will become the de-facto standard for future Apple devices, opening up new uses that could drive sales of its products. Case in point – the company is hard at work persuading developers to build a host of new image-recognition-based features for the iPhone. With its new mobile hardware in place, Apple has made it easy to run powerful machine learning algorithms without sapping the device’s battery, as the sketch below illustrates.
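To give a concrete flavour, here is a minimal sketch – our own illustration, not Apple’s sample code – of the kind of on-device image-recognition feature a developer could build with the Vision framework that shipped alongside the new hardware: detecting faces in a photo without the image ever leaving the phone.

```swift
import UIKit
import Vision

// Sketch: detect faces in a photo entirely on the device using the
// Vision framework (introduced in iOS 11, the same release as Core ML).
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil,
              let faces = request.results as? [VNFaceObservation] else { return }
        // Each observation carries a normalised bounding box for one face.
        for face in faces {
            print("Found a face at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    // perform(_:) runs synchronously; a real app would call it off the main thread.
    try? handler.perform([request])
}
```

Because the work is done locally – and, on the A11, can be accelerated by the neural engine – features like this do not need a server round-trip or a constant network connection.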
Another research paper, titled Learning from Simulated and Unsupervised Images through Adversarial Training, talks about leveraging synthetic images rather than real-life images, which are hard to obtain and have to be annotated. The paper describes how computer-generated images, such as those from video games, can be used to train neural networks more efficiently than making them learn from real-life images. According to the paper, real-life images require annotation, a laborious and time-consuming process that cannot be done without manual input. Computer-generated images, on the other hand, come already annotated, since the software that renders them knows exactly what they contain.
Outlook: How will Apple’s play in image recognition pay off?
Here’s our point of view: there are many potential benefits to investing in image recognition. Earlier this year, Apple made a bold attempt to woo the developer community by releasing Core ML – a set of machine learning models and APIs. Industry experts note that developers can use these models to build image recognition into their photo apps; a rough sketch of how that might look appears below. In fact, Apple released four of these models for image recognition, along with APIs for natural language processing and computer vision. Interestingly, most of the pre-trained machine learning models offered by Apple are based on open-sourced Google code and are aimed primarily at image recognition.
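Here is a minimal sketch, under stated assumptions, of how a developer might wire a pre-trained Core ML image-recognition model into a photo app using the Vision API. “FlowerClassifier” is a hypothetical stand-in for whichever .mlmodel file the developer adds to the Xcode project; Xcode generates a Swift class with the same name.

```swift
import CoreML
import UIKit
import Vision

// Sketch: classify a photo with a bundled, pre-trained Core ML model.
// "FlowerClassifier" is a hypothetical generated model class.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: FlowerClassifier().model) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        // Top label and confidence, computed entirely on the device.
        print("\(best.identifier): \(best.confidence)")
    }
    // Vision takes care of cropping and scaling the photo to the model's input size.
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The appeal for Apple is clear: a few dozen lines like these are all it takes for a third-party photo app to ship an image-recognition feature that runs on the phone itself.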