
Apple WWDC 2017 – everything announced around AI & Machine Learning

Even though Apple’s WWDC 2017 was all about the power-packed iOS 11 and key updates to “Apple’s heart and soul” – macOS – we gleaned the insights and updates on machine learning and artificial intelligence. One thing’s for sure: the Cupertino giant doesn’t want to lag in the AI race vis-à-vis Google, Amazon and Microsoft. At this year’s annual developer conference, AI made headway into most apps, from facial recognition in the Photos app to Intelligent Tracking Prevention in the Safari browser.

Here’s our round-up of all the announcements around AI/ML that matter:

Now, a Siri-powered watch face: Let’s start with watchOS 4, which is powered by Siri intelligence and automatically displays the information most relevant to the wearer. Driven by machine learning, the Siri watch face adapts to the wearer’s routines and surfaces information such as weather, traffic and the time. Over time, Siri will get to know the wearer better, constantly learning user preferences through deep learning techniques. Siri will be synced across all devices, become more context-aware and provide updates tailored to the user.

Intelligent Tracking Prevention: The desktop Safari browser just got smarter. Announcing the machine learning-powered feature, Craig Federighi, Apple’s senior vice president of Software Engineering, said onstage, “Safari uses machine learning to identify trackers, segregate the cross-site tracking data, put it away so now your privacy and your browsing history is your own.” It is a significant move given the proliferation of online trackers in the last few years, which leads to page-load lags and user privacy issues. An Apple blog post explains that Intelligent Tracking Prevention collects statistics on resource loads as well as user interactions such as taps, clicks and text entries. The statistics are put into buckets per top privately-controlled domain (TLD+1), and a machine learning model classifies which top privately-controlled domains have the ability to track the user cross-site, based on the collected statistics.
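Apple has not published the model’s features or thresholds, but the bucketing idea can be sketched with a toy heuristic: count how many distinct first-party sites load resources from each third-party domain, and flag domains seen across many sites. A minimal, purely illustrative Python sketch (the domain names and the three-site threshold are assumptions, not Apple’s):

```python
from collections import defaultdict

# Toy sketch of the idea behind Intelligent Tracking Prevention
# (illustrative only; not Apple's actual model or thresholds).
# Resource loads are bucketed by the third-party domain that served
# them, and a domain is flagged as a likely cross-site tracker when
# it appears as a third party on many distinct first-party sites.

def classify_trackers(resource_loads, min_sites=3):
    """resource_loads: iterable of (first_party_site, third_party_domain)."""
    sites_per_domain = defaultdict(set)
    for site, domain in resource_loads:
        if site != domain:  # only third-party loads count
            sites_per_domain[domain].add(site)
    return {d for d, sites in sites_per_domain.items() if len(sites) >= min_sites}

loads = [
    ("news.example", "ads.tracker.example"),
    ("shop.example", "ads.tracker.example"),
    ("blog.example", "ads.tracker.example"),
    ("news.example", "cdn.images.example"),
]
print(classify_trackers(loads))  # {'ads.tracker.example'}
```

The real classifier weighs richer statistics (user interactions, subresource types), but the per-domain bucketing shape is the same.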

Deep learning makes Siri’s voice more natural: Siri was undoubtedly the first conversational agent to go mainstream, and during the conference Federighi shared how deep learning has made Siri’s voice sound more natural. While Siri chirped about the sunny weather in three different ways, a male Siri also delighted the audience with his love for machine learning. Trivia aside, Siri’s smoother, more natural voice is powered by Mixture Density Networks, a type of deep learning model that drives Apple’s text-to-speech (TTS) engine.
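Apple has not detailed the architecture, but the defining trait of a Mixture Density Network is its output layer: instead of predicting a single value, the network emits the parameters of a Gaussian mixture (weights, means, standard deviations), and training minimizes the negative log-likelihood of the target under that mixture. A minimal sketch of that density computation, with made-up component values purely for illustration:

```python
import math

# Minimal sketch of a Mixture Density Network's output layer.
# A network would emit mixture weights (pi), means (mu) and standard
# deviations (sigma); the density of a target y is a weighted sum of
# Gaussians. Illustrative only; Apple's TTS internals are not public
# beyond the "Mixture Density Networks" description.

def gaussian(y, mu, sigma):
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mdn_density(y, pis, mus, sigmas):
    return sum(p * gaussian(y, m, s) for p, m, s in zip(pis, mus, sigmas))

def mdn_neg_log_likelihood(y, pis, mus, sigmas):
    # Training minimizes this quantity for each target sample.
    return -math.log(mdn_density(y, pis, mus, sigmas))

# Two-component mixture centered at 0.0 and 1.0 (toy values)
pis, mus, sigmas = [0.3, 0.7], [0.0, 1.0], [0.5, 0.5]
print(mdn_density(1.0, pis, mus, sigmas))
```

The payoff for speech synthesis is that the model captures a full distribution over acoustic parameters rather than a single averaged guess, which tends to blur.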

Apple’s new Metal 2 graphics API for machine learning: “Metal is not just about graphics,” declared Federighi, announcing that the Apple API will integrate machine learning capabilities such as Metal Performance Shaders, recurrent neural network kernels, binary convolution, dilated convolution, L2-norm pooling and dilated pooling.
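As a point of reference for what one of those kernels computes, here is dilated convolution sketched in plain Python on the CPU. Metal 2 runs the equivalent arithmetic on the GPU; this is an illustrative reference implementation, not the Metal API:

```python
# What a dilated convolution computes, sketched on the CPU for clarity.
# Dilation inserts gaps of (dilation - 1) samples between kernel taps,
# widening the receptive field without adding weights.

def dilated_conv1d(signal, kernel, dilation=1):
    """'Valid' 1-D convolution (cross-correlation form) with dilation."""
    span = (len(kernel) - 1) * dilation + 1   # receptive field of one output
    out = []
    for start in range(len(signal) - span + 1):
        out.append(sum(kernel[k] * signal[start + k * dilation]
                       for k in range(len(kernel))))
    return out

x = [1, 2, 3, 4, 5, 6]
k = [1, 0, -1]
print(dilated_conv1d(x, k, dilation=1))  # [-2, -2, -2, -2]
print(dilated_conv1d(x, k, dilation=2))  # [-4, -4]
```

With dilation 2, each output compares samples four positions apart instead of two, which is why dilated kernels are popular for audio and dense-prediction networks.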

Face recognition in Photos app: High Sierra, the next version of macOS, is packed with advanced face recognition capabilities that leverage advanced convolutional neural networks, enabling users to group and filter their photos based on who is in them. Another key feature is Memories. Said Federighi onstage, “When you are not taking your photos, you go to enjoy them in the Photos app. And one of the ways I love enjoying them is with the Memories feature, because Memories is able to scan my library; now I can do more using machine learning to identify things like sporting events, even weddings, anniversaries.” Machine learning has also made its way into the iPad for palm rejection, and is used automatically to extend battery life by predicting how the device will be used.

Core ML: Apple has always been secretive about its AI efforts and likes to keep its research developments guarded behind closed doors. Now, Apple is bringing its AI technology to developers who want to incorporate machine learning in their apps. “We are doing it with a set of new APIs. It starts with a Vision API, which has face tracking, face detection, landmarks, all of these features which we use inside our apps, and a Natural Language API that provides capabilities like tokenization and lemmatization, named entity recognition, all built on Core ML,” said Federighi.
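To make the Natural Language API’s vocabulary concrete, here is a toy, hand-rolled illustration of tokenization and lemmatization in Python. This is not Apple’s framework (which ships as Swift/Objective-C APIs backed by trained language models); the tiny lemma table is an assumption purely for the demo:

```python
import re

# Toy illustration of two concepts the Natural Language API exposes:
# tokenization (splitting text into word units) and lemmatization
# (reducing a word to its dictionary form). Hand-rolled sketch only;
# real frameworks use trained models rather than a lookup table.

LEMMAS = {"running": "run", "ran": "run", "apps": "app", "better": "good"}

def tokenize(text):
    """Lowercase and split on runs of letters (a crude tokenizer)."""
    return re.findall(r"[A-Za-z]+", text.lower())

def lemmatize(tokens):
    """Map each token to its lemma, falling back to the token itself."""
    return [LEMMAS.get(t, t) for t in tokens]

tokens = tokenize("Running better apps!")
print(tokens)             # ['running', 'better', 'apps']
print(lemmatize(tokens))  # ['run', 'good', 'app']
```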

Core ML provides high-performance implementations of deep neural networks, recurrent neural networks, convolutional neural networks, support vector machines, tree ensembles and linear models. “It allows you to take these models that you built with any of these popular third-party tools and, through a machine learning model converter, execute them with tremendous performance on device and keep all the data private,” announced Federighi.
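The convert-then-run-on-device workflow can be sketched conceptually. In practice the conversion is done with Apple’s coremltools, which turns a third-party model into an .mlmodel file; in this illustrative Python sketch, a hand-built linear model and a JSON string stand in for the trained model and the converted file:

```python
import json

# Conceptual sketch of the Core ML workflow: a model trained in a
# third-party tool is converted into a portable file, then executed
# locally so no user data leaves the device. JSON and a linear model
# stand in for the real .mlmodel format, purely to illustrate.

def convert(weights, bias):
    """'Convert' a trained linear model into a portable description."""
    return json.dumps({"type": "linear", "weights": weights, "bias": bias})

def predict(model_file, features):
    """Run the converted model on device: dot product plus bias."""
    model = json.loads(model_file)
    return sum(w * x for w, x in zip(model["weights"], features)) + model["bias"]

model_file = convert(weights=[0.5, -1.0], bias=2.0)
print(predict(model_file, [4.0, 1.0]))  # 3.0
```

The privacy point Federighi stresses falls out of this shape: once the model ships inside the app, inference needs no network call.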

Our Take

Industry experts believe Apple lagged behind the Valley biggies in implementing AI, but the Cupertino giant has since made great efforts through acquisitions and appointments. That said, AI certainly isn’t new to Apple, which launched Siri, an offshoot of a DARPA program; Siri was built into the OS and launched with much fanfare alongside the iPhone 4S in 2011. The company seems to be adopting a more open approach with the appointment of Carnegie Mellon University professor Ruslan Salakhutdinov as its first director of AI research. In December last year, Apple published its first AI research paper, and it is now on board the Partnership on AI, established last year to promote best practices in AI for the good of humanity.
