This year at the Apple Worldwide Developers Conference in California, the tech giant announced a number of tool upgrades for more advanced, personalised machine learning. One of them is Core ML 3, a framework for integrating machine learning models into applications.
In an earlier article, we discussed some of these tools and upgrades. Today, we will look at Core ML 3, Apple's machine learning framework, its new features, and how it differs from previous versions.
What Is Core ML?
Core ML is a machine learning framework that powers intelligent features such as QuickType and Siri. It lets developers implement machine learning techniques in their applications with just a few lines of code, enables advanced neural networks with support for over 100 layer types, and delivers maximum performance and efficiency on-device.
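To give a sense of what "a few lines of code" looks like in practice, here is a minimal Swift sketch of running an image classifier through Core ML and Vision. It assumes a model file (MobileNetV2 is used as an example; Xcode generates a Swift class for any .mlmodel added to the project):

```swift
import CoreML
import Vision

// Sketch: classify an image with a bundled Core ML model.
// Assumes MobileNetV2.mlmodel has been added to the Xcode project,
// so the MobileNetV2 class is auto-generated.
func classify(image: CGImage) throws {
    let model = try VNCoreMLModel(for: MobileNetV2().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
    // Vision handles scaling and colour conversion before inference.
    try VNImageRequestHandler(cgImage: image).perform([request])
}
```

Wrapping the model in `VNCoreMLModel` lets Vision take care of image preprocessing, which is why this pairing is the common pattern for image models.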
Meet Core ML 3
The updated version brings five important enhancements for developing machine learning features.
On-device training is one of the most interesting features Apple has added. The framework now supports both training and inference directly on the user's device. During a session at the conference, Anil Katti, a Core ML engineer, demonstrated how a model can be personalised while the device is in use: a paper-grading app in which a user creates customised sketches for emoji stickers that can be added while grading papers.
Thus, Core ML models can drive intelligent features such as search or object recognition in photos. With the new on-device training support in Core ML 3, models can be updated with user data directly on the device, making them relevant to the user's behaviour while keeping that data completely private.
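The on-device personalisation described above is exposed through Core ML 3's MLUpdateTask API. The sketch below assumes a model compiled as updatable and a placeholder batch of user-collected training examples:

```swift
import CoreML

// Sketch: personalise an updatable Core ML model on-device.
// `url` points at the compiled, updatable model; `data` is a batch of
// user-collected training examples (an MLBatchProvider built by the app).
func personalise(modelAt url: URL, with data: MLBatchProvider) throws {
    let task = try MLUpdateTask(forModelAt: url,
                                trainingData: data,
                                configuration: nil) { context in
        // context.model is the retrained model; write it back to disk
        // so future predictions use the personalised version.
        try? context.model.write(to: url)
    }
    task.resume()  // training runs asynchronously on the device
}
```

Because training happens entirely inside the app's sandbox, the user's examples never leave the device, which is the privacy property the article highlights.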
Support For Advanced Neural Networks
Core ML 3 lets a user run sophisticated machine learning models, such as advanced neural networks designed to understand media like images, video and sound, entirely on-device.
Core ML 3 includes a number of easy-to-use computer vision features, including face detection and tracking, text recognition, image saliency and classification, image similarity identification, and capture quality. Other computer vision features include improved landmark detection, rectangle detection, barcode detection, object tracking, and image registration. A user can also use the new Document Camera API to detect and capture documents with the camera.
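Text recognition, one of the computer vision features listed above, is available through Vision's new VNRecognizeTextRequest (iOS 13). A brief sketch:

```swift
import Vision

// Sketch: recognise printed text in an image with Vision's
// VNRecognizeTextRequest, new alongside Core ML 3 in iOS 13.
func recognizeText(in image: CGImage) throws {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results
                as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // topCandidates(1) returns the most confident transcription.
            if let candidate = observation.topCandidates(1).first {
                print(candidate.string)
            }
        }
    }
    request.recognitionLevel = .accurate  // trade speed for accuracy
    try VNImageRequestHandler(cgImage: image).perform([request])
}
```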
A user can utilise the Core ML framework with Create ML to train and deploy custom natural language processing (NLP) models. With the help of Core ML 3, one can analyse natural language text and deduce its language-specific metadata for a deeper understanding. New features include transfer learning for Create ML text models, word embeddings, sentiment classification, and a text catalogue.
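Two of these NLP capabilities, language identification and the new sentiment score, are exposed through the Natural Language framework. A short sketch (the sample sentence is an arbitrary example):

```swift
import NaturalLanguage

let text = "Core ML 3 makes on-device training remarkably easy."

// Identify the dominant language of the text.
let recognizer = NLLanguageRecognizer()
recognizer.processString(text)
if let language = recognizer.dominantLanguage {
    print("Language: \(language.rawValue)")
}

// Score sentiment, from -1.0 (negative) to 1.0 (positive), new in iOS 13.
let tagger = NLTagger(tagSchemes: [.sentimentScore])
tagger.string = text
let (tag, _) = tagger.tag(at: text.startIndex,
                          unit: .paragraph,
                          scheme: .sentimentScore)
print("Sentiment: \(tag?.rawValue ?? "unavailable")")
```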
Now, a user can take advantage of on-device speech recognition in 10 languages. Core ML 3 adds speech analysis features such as pronunciation information, streaming confidence, utterance detection, and acoustic features.
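On-device speech recognition is requested through the Speech framework by setting requiresOnDeviceRecognition (iOS 13). The locale below is an arbitrary example, and support must be checked at runtime:

```swift
import Speech

// Sketch: transcribe an audio file entirely on-device.
// Not every locale supports on-device recognition, so check first.
func transcribe(audioFileAt url: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else { return }

    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = true  // audio never leaves the device

    recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```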
Core ML 3 Vs Older Versions
The first version of Core ML was introduced in June 2017 with the launch of iOS 11, and last year Apple launched Core ML 2 at the same conference. Core ML is used to integrate machine learning models into an application: apps use Core ML APIs along with user data to make predictions and, now, to train machine learning models on the user's device.
Core ML 3 enables interactive personalisation on-device. The framework supports training convolutional and fully-connected neural network layers, with the ability to back-propagate through the model. It also supports loss functions such as categorical cross-entropy and mean squared error, along with optimisers such as stochastic gradient descent and Adam, and other training parameters.
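The training parameters mentioned above can be overridden at update time through MLModelConfiguration. A brief sketch, where the learning rate and epoch count are arbitrary illustrative values:

```swift
import CoreML

// Sketch: override training hyperparameters for an on-device update.
// These values are examples only; a real app would tune them per model.
let config = MLModelConfiguration()
config.parameters = [
    MLParameterKey.learningRate: 0.01,  // step size for the optimiser
    MLParameterKey.epochs: 5            // passes over the training batch
]
// `config` would then be passed to MLUpdateTask in place of nil.
```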