Apple finally embraces open source

Apple is open-sourcing a reference PyTorch implementation of the Transformer architecture to help developers deploy Transformer models on Apple devices. Google introduced the Transformer architecture in 2017, and it has since become the model of choice for natural language processing (NLP) problems.

The Transformer’s self-attention mechanism lets models focus on the most relevant parts of the input and reason over them more effectively. Generative Pre-trained Transformer 3 (GPT-3) and Bidirectional Encoder Representations from Transformers (BERT) are among the most popular Transformer-based models.
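For readers unfamiliar with the mechanism, here is a minimal sketch of scaled dot-product self-attention in PyTorch. It is illustrative only; the function and tensor sizes are our own, not code from Apple or Google.

```python
import torch
import torch.nn.functional as F

def self_attention(q, k, v):
    # Score every query against every key, scale by sqrt(d_k), normalise
    # with softmax, then return a relevance-weighted sum of the values.
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)
    return weights @ v

x = torch.randn(1, 10, 64)     # a batch of 10 tokens, 64-dim embeddings
out = self_attention(x, x, x)  # "self"-attention: q, k and v all come from x
print(out.shape)               # torch.Size([1, 10, 64])
```

Each output position is a weighted mixture of every input position, which is what lets the model “focus” on the parts of the input most relevant to each token.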


Apple is now leveraging the Transformer architecture for an increasing number of ML models. This architecture helps enable experiences such as panoptic segmentation in Camera with HyperDETR, on-device scene analysis in Photos, image captioning for accessibility, machine translation, and many others.

Apple Neural Engine

Apple introduced its first Neural Engine in September 2017 as part of the Apple A11 Bionic chip. In 2018, starting with the Apple A12, it opened the Neural Engine to third-party developers through its Core ML API.
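In practice, taking advantage of the Neural Engine from PyTorch means converting a trained model with Apple’s coremltools package. A minimal sketch, assuming coremltools 5 or later (the toy model and file name are illustrative):

```python
import torch
import coremltools as ct

# A toy PyTorch model, traced and converted to a Core ML program. Whether it
# actually runs on the Neural Engine is decided by the OS at load time;
# ComputeUnit.ALL merely makes the ANE eligible alongside the CPU and GPU.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
example = torch.randn(1, 64)
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example.shape)],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.ALL,
)
mlmodel.save("toy_model.mlpackage")
```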

In 2017, the Neural Engine was available only on the iPhone. It has since come to the iPad (starting with the A12 chip) and the Mac (starting with the M1 chip).

At the recently held Apple Worldwide Developers Conference (WWDC) 2022, Apple introduced the M2 chip with a 16-core Neural Engine that delivers over 40 percent faster performance than its predecessor.

The Transformer architecture has impacted many fields, including NLP and computer vision. The reference PyTorch implementation is specifically optimised for the Apple Neural Engine (ANE), which is a group of specialised cores functioning as a neural processing unit (NPU) to accelerate AI and ML workloads.

According to Apple, the implementation will help developers minimise the impact of their ML inference workloads on app memory, responsiveness, and device battery life. The increasing adoption of on-device ML deployment will also go a long way in protecting user privacy since data for inference workloads remains on-device.

Apple has shared four important principles behind the reference implementation to help developers optimise their models for ANE execution; a short sketch of the first principle follows the list.

Principle 1: Picking the Right Data Format

Principle 2: Chunking Large Intermediate Tensors

Principle 3: Minimising Memory Copies

Principle 4: Handling Bandwidth-Boundness
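To make these principles concrete, here is a minimal, hypothetical PyTorch sketch of Principle 1. Apple’s reference implementation favours a channels-first (B, C, 1, S) activation layout, expressing linear projections as 1x1 convolutions rather than applying nn.Linear to the usual (B, S, C) layout; the layer and tensor sizes below are our own illustrative choices, not Apple’s code.

```python
import torch
import torch.nn as nn

batch, seq, d_model = 2, 128, 512

# Conventional layout: (batch, seq, channels) fed to nn.Linear.
x_bsc = torch.randn(batch, seq, d_model)
linear = nn.Linear(d_model, d_model)

# ANE-friendly layout: (batch, channels, 1, seq), with the same projection
# expressed as a 1x1 convolution.
x_bc1s = x_bsc.transpose(1, 2).unsqueeze(2)
conv = nn.Conv2d(d_model, d_model, kernel_size=1)

# Copy the weights across so the two formulations can be compared directly.
with torch.no_grad():
    conv.weight.copy_(linear.weight[:, :, None, None])
    conv.bias.copy_(linear.bias)

out_linear = linear(x_bsc)                          # (B, S, C)
out_conv = conv(x_bc1s).squeeze(2).transpose(1, 2)  # back to (B, S, C)
print(torch.allclose(out_linear, out_conv, atol=1e-5))  # True
```

The equivalence check at the end shows the change is purely one of data format: the same projection is computed, but in a layout that, per Apple’s write-up, maps more efficiently onto the ANE.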

What’s the real motive?

Apple, in general, is not known for its contributions to AI and ML, even though the company has invested heavily in these technologies.

As a company, Apple behaves like a cult; nobody knows what goes on inside its four walls. To most people, Apple is a consumer electronics firm, unlike tech giants such as Google or Microsoft. Google, for example, is seen as a leader in AI: it employs top AI talent and has released numerous research papers over the years. Google also owns DeepMind, another company leading AI research.

Apple is struggling to recruit top AI talent, and for good reason. “Apple with its top-five rank employer brand image is currently having difficulty recruiting top AI talent. In fact, in order to let potential recruits see some of the exciting machine-learning work that is occurring at Apple, it recently had to alter its incredibly secretive culture and to offer a publicly visible Apple Machine Learning Journal,” said Dr John Sullivan.

Over the last couple of years, Apple has increased its engagement with the AI/ML community.

In 2016, Apple announced it would allow its AI and ML researchers to publish and share their work. The following year, Apple’s first publicly issued academic paper won a Best Paper Award at the 2017 Conference on Computer Vision and Pattern Recognition (CVPR). Over the years, it has launched AI/ML tools to speed up machine learning on iPhones. For example, Apple started using deep learning for face detection in iOS 10. With the release of the Vision framework, developers can now use this technology and many other computer vision algorithms in their apps. “We faced significant challenges in developing the framework so that we could preserve user privacy and run efficiently on-device,” the company noted. Apple also launched the ‘Apple Machine Learning Journal’ website.

In 2020, the Cupertino-based tech giant announced a new residency programme for AI and ML experts. The latest move to open-source a reference PyTorch implementation for deploying the Transformer architecture on the Apple Neural Engine also signals a shift in Apple’s attitude towards open source.
