
Top 6 Smartphones That Flaunted AI In 2019

Ram Sagar

2019 has been a busy year for smartphone makers. They left no stone unturned in their attempts to market AI as their USP. The extent to which AI is actually used in these devices is still debatable. However, computational photography is one area that has changed dramatically with the advent of advanced AI.



With chip vendors and phone makers like Apple and Samsung optimising their hardware to suit the ever-growing needs and aesthetics of mobile users, running state-of-the-art deep learning models on smartphones is slowly becoming a common sight.

Today, devices built around Qualcomm and other top systems on a chip (SoCs) come with dedicated AI hardware designed to run machine learning workloads on embedded AI accelerators.



Here, we list the top AI smartphones that made noise this year:

Apple iPhone

Ever since the launch of the first iPhone back in 2007, Apple has committed itself to innovation, providing its customers with an experience that makes them stand out from the crowd. Though many of the features Apple flaunts can be found in cheaper phones, people still flock to buy new Apple products. This is because Apple has never compromised on quality, and to that end it has consistently deployed cutting-edge technology.

With the hardware having finally caught up with the computational rigour of the algorithms, Apple took its quality up a notch by implementing state-of-the-art machine learning algorithms.

To enable high-quality video recording and photo capture, Apple developed the A13 Bionic chip.

At its annual event this year, Apple unveiled three new smartphones: the iPhone 11, the iPhone 11 Pro and the iPhone 11 Pro Max. In the keynote, its engineers emphasised their ongoing work on machine learning and declared that Apple has the best machine learning platform on mobile.

The new A13 Bionic chip that powers the iPhone 11 is touted as Apple's fastest processor ever.

An Apple-designed 64-bit ARMv8.3-A six-core CPU, with two high-performance cores running at 2.65 GHz and four energy-efficient cores, makes the A13 a powerhouse for machine learning applications.

The CPU, GPU and Neural Engine are all optimised for different machine learning workloads, which keeps the workflow smooth while cutting down execution time.

Apple uses a technique called ‘Deep Fusion’ that combines images from all three lenses and runs a neural network in the background.
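The details of Deep Fusion are proprietary, but the broad idea behind multi-frame merging can be illustrated with a toy sketch: align a burst of frames and combine them to cut noise. The sketch below is only an illustration of that general idea, not Apple's method; the frame count and the random burst are placeholders, and the real pipeline additionally uses a neural network to select detail per pixel.

```python
import numpy as np

def merge_burst(frames):
    """Toy multi-frame merge: average an already-aligned burst to reduce noise.

    `frames` is a list of HxWx3 float arrays in [0, 1]. Real pipelines such as
    Deep Fusion align frames and weight them with a learned model; here we
    simply assume the burst is aligned and average it.
    """
    stack = np.stack(frames, axis=0)      # shape (N, H, W, 3)
    merged = stack.mean(axis=0)           # noise falls roughly as 1/sqrt(N)
    return np.clip(merged, 0.0, 1.0)

# Hypothetical usage: nine short-exposure frames from a burst.
burst = [np.random.rand(256, 256, 3) for _ in range(9)]
output = merge_burst(burst)
```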

The machine learning models in the iPhone 11’s cameras, combined with the chip’s speed, allow users to shoot 4K video at 60 fps with HDR enabled.

Google Pixel 4

Though Google was late to the party, its years of research allowed it to go toe to toe with the likes of Apple and other high-end smartphones that use deep learning.

Google continues to be one of the pioneers of ML research, and it is responsible for introducing models that took the accuracy of computer vision and NLP systems to a whole new level.

Their Pixel flagships are a testament to Google’s ingenuity.

Their latest, the Pixel 4, has one of the best smartphone cameras and enables high-quality astrophotography, thanks in large part to AI and a dedicated AI coprocessor, the Pixel Neural Core, which improves upon the imaging chip in the Pixel 3 and power-efficiently crunches trillions of operations per second.

Image: shot taken on a Pixel 4 (via the Google blog)

Night Sight, introduced in the Google Camera app, allows photographers to take high-quality handheld shots in dark environments that would otherwise yield grainy and severely underexposed images.

Google trained a convolutional neural network on over 100,000 images that were manually labelled by tracing the outlines of sky regions; the network identifies each pixel in a photograph as “sky” or “not sky.”

Sky detection also makes it possible to render features like the Milky Way more prominently.
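Google's own model and training data are not public, but per-pixel “sky”/“not sky” classification is a standard binary segmentation task. The following is a minimal Keras sketch of that idea; the tiny architecture, input size and the commented training call are assumptions for illustration, not Google's implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sky_segmenter(input_shape=(128, 128, 3)):
    """Minimal fully convolutional binary segmenter: one sigmoid per pixel
    (1 = sky, 0 = not sky). Toy architecture for illustration only."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)                              # downsample
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)                              # back to input resolution
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)    # per-pixel sky probability
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_sky_segmenter()
# Hypothetical training call on (image, mask) pairs, where each mask marks sky pixels:
# model.fit(train_images, train_sky_masks, epochs=10)
```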

Xiaomi

Xiaomi’s flagship model contains a high-performance octa-core processor combined with a third-generation AI engine aimed at boosting computing performance and providing an extraordinary gaming experience.

Xiaomi’s Mobile AI Compute Engine (or MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux and Windows devices.

According to Xiaomi, the AutoML model in the forthcoming MACE-Kit is close to industry-leading in dataset performance. As for MiNLP, the company’s natural language processing platform, it is now activated over 6 million times a day.


The MACE Model Zoo contains several common neural networks and models, which are built daily against a list of mobile phones.

MACE also supports the Qualcomm Hexagon NN offload framework, which runs models on the Hexagon DSP embedded in Snapdragon processors. The DSP helps create compelling multimedia experiences by improving the power dissipation and performance of audio, imaging, embedded vision, video and other computationally intensive applications.

Huawei

Huawei’s Neural Processing Unit (NPU) has been optimised for machine learning frameworks like Facebook’s Caffe2 and Google’s TensorFlow. One of the cores in the NPU’s architecture was designed to be 24 times more efficient than a general-purpose processor core for tasks like facial recognition.

By adopting an advanced multi-core CPU and GPU architecture, the makers have tried to pair broad AI computing power with cores that focus on specific tasks, creating an improved user experience.

Realme

Realme’s Helio P60 AI processor equips the system on chip with dedicated AI capabilities. The new octa-core CPU and high-performance GPU boost efficiency by 70% during heavy gaming, while the 12nm process technology saves 15% of power consumption with performance remaining strong.

Asus ROG Phone II

The ROG Phone II offers AI scene detection across 16 types: food, sky, greenfield, plant, ocean, sunset, snow, flower, stage, dog, cat, people, text, tripod, QR code and night view. The phone was originally designed to give gamers a smooth experience while playing adrenaline-pumping games.

However, Asus has also made sure the phone includes AI scene detection, which has become the norm in phones that boast high-quality photography.

Bottomline

Along with the above-mentioned phones, there has been a constant stream of flagship variants from OnePlus, Samsung and others. OnePlus, too, lists AI-powered scene detection in its specs.

The bottom line is that from Apple to yesteryear’s giants such as Nokia, the use of AI, both for obvious tasks like photography and for small optimisations in the background, has become the norm.

The next step would be for many makers to incorporate machine learning models that are now being redesigned for deployment on mobiles.

For instance, at the TensorFlow developer summit held earlier this year, the team showcased TensorFlow Lite for mobile devices. Along with this, they also unveiled two development boards, the SparkFun Edge and the Coral Dev Board, which use TensorFlow Lite to run machine learning tasks on edge devices.
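As a rough illustration of what on-device inference with TensorFlow Lite looks like, here is a minimal Python sketch using the standard tf.lite.Interpreter API. The model file name and the dummy input are placeholders; on a phone, the same converted model would typically be run through the TensorFlow Lite Android or iOS APIs, possibly delegated to the device’s NPU or DSP.

```python
import numpy as np
import tensorflow as tf

# Load a converted .tflite model (the file name is a placeholder).
interpreter = tf.lite.Interpreter(model_path="scene_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image with whatever shape the model expects.
input_shape = input_details[0]["shape"]
dummy_input = np.random.rand(*input_shape).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

# Read back the prediction, e.g. class scores for scene detection.
scores = interpreter.get_tensor(output_details[0]["index"])
print(scores)
```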

Platforms such as these aim to make smartphones the next best choice for running machine learning models. These developments suggest that in a couple of years, all mid-range and high-end chipsets will have enough power to run the vast majority of standard deep learning models developed by the research community and industry.

It is not only chipmakers; there is a lot coming from the software side as well. Given the pace at which the above developments are surfacing, it is safe to assume that the goal of making smartphones the next hub for deploying ML models will soon be a reality.
