Apple finally embraces open source


Apple is open-sourcing a reference PyTorch implementation of the Transformer architecture to help developers deploy Transformer models on Apple devices. Google introduced the Transformer architecture in 2017, and it has since become the model of choice for natural language processing (NLP) problems.

The Transformer’s self-attention mechanism lets models focus on the most relevant parts of the input and reason over them more effectively. Generative Pre-trained Transformer 3 (GPT-3) and Bidirectional Encoder Representations from Transformers (BERT) are among the most popular Transformer models.
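As a rough illustration of the mechanism, here is a minimal single-head scaled dot-product attention written in plain Python (lists instead of tensors; the function names are our own illustrative choices, not from Apple’s implementation or any library API):

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        # How much this position "focuses" on each other position.
        weights = softmax(scores)
        # Weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two tokens, two dimensions; each token attends most strongly to itself.
Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
print(self_attention(Q, K, V))
```

With identical one-hot queries, keys, and values, each output row is exactly the attention-weight distribution for that token, which makes the “focus on certain parts of the input” behaviour easy to inspect.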


Apple now leverages the Transformer architecture in a growing number of its ML models. The architecture powers experiences such as panoptic segmentation in Camera with HyperDETR, on-device scene analysis in Photos, image captioning for accessibility, and machine translation, among others.

The Apple Neural Engine

Apple introduced its first Neural Engine in September 2017 as part of the Apple A11 Bionic chip. In 2018, it opened the Neural Engine to developers through the Core ML API, starting with the Apple A12.

In 2017, the Neural Engine was available only on the iPhone. It has since come to the iPad (starting with the A12 chip) and the Mac (starting with the M1 chip).

At the Apple Worldwide Developers Conference (WWDC) 2022, Apple introduced the M2 chip, whose 16-core Neural Engine delivers over 40 percent faster performance than its predecessor.


The Transformer architecture has impacted many fields, including NLP and computer vision. The reference PyTorch implementation is specifically optimised for the Apple Neural Engine (ANE), which is a group of specialised cores functioning as a neural processing unit (NPU) to accelerate AI and ML workloads.

According to Apple, the implementation will help developers minimise the impact of their ML inference workloads on app memory, responsiveness, and device battery life. The increasing adoption of on-device ML deployment will also go a long way in protecting user privacy since data for inference workloads remains on-device.

Apple has shared four important principles behind the reference implementation to help developers optimise their models for ANE execution.

Principle 1: Picking the Right Data Format

Principle 2: Chunking Large Intermediate Tensors

Principle 3: Minimising Memory Copies

Principle 4: Handling Bandwidth-Boundness
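To make the first two principles concrete, here is a shapes-only sketch in plain Python. Per Apple’s write-up, the reference implementation prefers a four-dimensional, channels-first (B, C, 1, S) layout for ANE execution and splits large intermediate tensors into smaller sequence chunks; the helper names below are illustrative assumptions, not Apple’s API:

```python
def to_ane_layout(shape_bsc):
    """Map a conventional (batch, seq, channels) shape to the
    ANE-friendly channels-first (B, C, 1, S) layout (Principle 1)."""
    b, s, c = shape_bsc
    return (b, c, 1, s)

def chunk_seq(shape_bc1s, max_seq):
    """Split the sequence axis into chunks of at most max_seq so that
    intermediate tensors stay small (Principle 2)."""
    b, c, one, s = shape_bc1s
    chunks = []
    start = 0
    while start < s:
        length = min(max_seq, s - start)
        chunks.append((b, c, one, length))
        start += length
    return chunks

# A 1024-token sequence with 768 channels, re-laid-out and chunked.
ane_shape = to_ane_layout((1, 1024, 768))
print(ane_shape)                  # -> (1, 768, 1, 1024)
print(chunk_seq(ane_shape, 512))  # -> two (1, 768, 1, 512) chunks
```

Keeping the contiguous axis aligned with how the ANE consumes data, and bounding the size of intermediates, is what lets the compiler avoid extra memory copies (Principle 3) and reduce memory-bandwidth pressure (Principle 4); the real implementation applies these transformations to live tensors, not just shapes.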

What’s the real motive?

Apple is not widely known for its contributions to AI and ML, even though the company has invested heavily in these technologies.

Apple is famously secretive; little of what happens inside its walls becomes public. To the average consumer, Apple is a consumer electronics firm, unlike tech giants such as Google or Microsoft. Google, for example, is seen as a leader in AI: top AI talent works for the company, and it has released numerous research papers over the years. Google also owns DeepMind, another company leading AI research.

Apple is struggling to recruit top AI talent, and for good reason. “Apple with its top-five rank employer brand image is currently having difficulty recruiting top AI talent. In fact, in order to let potential recruits see some of the exciting machine-learning work that is occurring at Apple, it recently had to alter its incredibly secretive culture and to offer a publicly visible Apple Machine Learning Journal,” said Dr John Sullivan.

Over the last couple of years, Apple has increased its engagement with the AI/ML community.

In 2016, Apple announced it would allow its AI and ML researchers to publish and share their work. The following year, Apple’s first publicly issued academic paper won a Best Paper Award at the 2017 Conference on Computer Vision and Pattern Recognition (CVPR). Over the years, the company has launched AI/ML tools to speed up machine learning on iPhones. For example, Apple started using deep learning for face detection in iOS 10; with the release of the Vision framework, developers can now use this technology and many other computer vision algorithms in their apps. “We faced significant challenges in developing the framework so that we could preserve user privacy and run efficiently on-device,” the company noted. Apple also launched the ‘Apple Machine Learning Journal’ website.

In 2020, the Cupertino-based tech giant announced a new residency programme for AI and ML experts. The latest move to open-source a reference PyTorch implementation for deploying the Transformer architecture on Apple Neural Engine also signals a shift in Apple’s attitude towards open source.

PS: The story was written using a keyboard.
Pritam Bordoloi

I have a keen interest in creative writing and artificial intelligence. As a journalist, I deep dive into the world of technology and analyse how it’s restructuring business models and reshaping society.