Smartphone adoption is growing at an accelerated pace in the Indian subcontinent, creating a new market for smartphone manufacturers to capitalise on. The number of smartphone users in India grew by over 15.6% to reach 337 million in 2018, representing a unique opportunity for companies.
With the rise of 4G across India, driven by the low cost and wide availability of service providers like Jio, social media is booming. As the contest for popularity heats up, posting high-quality images has become the way to signal status on social media platforms. Indeed, nearly every new smartphone now ships with an AI-powered camera aimed at fulfilling this very purpose.
Putting an AI-powered chip in mobile phones is the newest trend manufacturers have adopted, with everyone racing to offer the latest and greatest AI capabilities. This trend among smartphone makers is now translating into a trickle-down benefit for the consumer, usually in the form of better photos without the need for better sensors or higher megapixel counts.
The photo quality of many flagship smartphones today can be attributed to AI being tightly integrated into their camera modules. On-device image processing capabilities have advanced to the point where it is possible to get near-professional-grade quality out of a smartphone camera.
Read on as we take a look into how four different manufacturers use AI to stay relevant in the mobile market.
Google is probably the most famous and best-equipped company to take over this vertical in smartphones. The search giant has historically had a heavy focus on AI-based applications and has brought that focus to its Pixel series of phones. The Pixel 2 featured AI image processing in a big way, containing a dedicated Image Processing Unit (IPU) for this very purpose. This chip, designed by Google, accelerates the processing of images captured by the Pixel's camera.
Since the shooter is HDR+ enabled, processing for the wider dynamic range requires a large amount of compute. The IPU, combined with the tight integration Google has with its components, can accelerate HDR+ by 5x at less than 1/10th the energy used normally. This also created a visible difference in the dynamic range of the pictures, raising overall quality. This image coprocessor, branded the Pixel Visual Core, is present in the Pixel 3 as well, where it applies a suite of AI-based processes to increase image quality.
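The core idea behind HDR+ is merging a burst of short, aligned exposures so noise averages out and shadow detail can be recovered without clipping highlights. The sketch below is a toy illustration of that burst-merge principle, not Google's actual pipeline: it assumes perfectly aligned grayscale frames and skips the tile alignment and outlier rejection a real implementation needs.

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned, underexposed frames.

    Averaging N frames cuts noise by roughly sqrt(N), which is the
    core idea that lets burst photography recover shadow detail
    without blowing out highlights. Real pipelines also align image
    tiles and reject outliers; this toy version assumes perfectly
    aligned input.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

def tonemap(img, gamma=2.2):
    """Simple gamma tone curve to brighten the merged result."""
    img = np.clip(img, 0.0, 1.0)
    return img ** (1.0 / gamma)
```

Averaging sixteen frames, for instance, cuts random sensor noise by roughly a factor of four, which is why burst merging can substitute for a physically larger sensor.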
The latest Pixel also comes with a feature known as Top Shot, which picks out the photo from a burst in which the subject has their eyes open and is smiling. Google also uses on-device deep learning to deliver near-lossless 2x digital zoom, a feature rarely seen in phone cameras.
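Conceptually, Top Shot reduces to scoring each frame in a burst and keeping the best one. The snippet below is a hypothetical sketch of that selection step: the `eyes_open`, `smile`, and `sharpness` signals and their weights are invented for illustration, standing in for the outputs of Google's on-device models.

```python
def pick_top_shot(frames):
    """Return the index of the frame with the best combined score.

    Each frame is a dict of hypothetical detector outputs in [0, 1].
    A real system would run on-device models to produce these
    signals; here they are assumed to be given. The weights are
    illustrative, favouring open eyes and a smile over sharpness.
    """
    def score(f):
        return 0.4 * f["eyes_open"] + 0.4 * f["smile"] + 0.2 * f["sharpness"]
    return max(range(len(frames)), key=lambda i: score(frames[i]))

# Example burst: the second frame has eyes open AND a smile.
burst = [
    {"eyes_open": 0.2, "smile": 0.9, "sharpness": 0.8},
    {"eyes_open": 0.95, "smile": 0.9, "sharpness": 0.7},
    {"eyes_open": 0.9, "smile": 0.1, "sharpness": 0.9},
]
```

The interesting design choice is that the camera captures frames before and after the shutter press, so the scorer can look both ways in time for the best moment.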
Huawei has also made a huge push towards AI with its Kirin line of processors, the system-on-chip (SoC) that handles both general processing and various AI-driven tasks. This approach began during development of its then-flagship Huawei P20 smartphone, whose Kirin SoC featured an inbuilt Neural Processing Unit (NPU). The NPU's job was to run what the company calls its 'Master AI' feature.
Master AI is a model trained to recognise over 500 objects and categorise them into 19 scenes. Reportedly, Huawei achieved this by training the model on over 10 million images. The AI detects the object and applies a 'scene' to the camera's input, which is then processed accordingly, optimising the final image.
A scene is simply a collection of camera settings such as saturation, brightness, ISO, sharpness and colour. For example, if the Master AI model recognises a flower in the frame, it will immediately focus on it and raise the saturation, while adding a bokeh effect to the background for greater impact. This means users can simply point and shoot, with professional-grade customisation applied automatically.
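Mechanically, a scene system like this can be thought of as a classifier label indexing into a table of setting presets. The sketch below shows that lookup step under assumptions of our own: the scene names, setting keys, and values are all hypothetical, not Huawei's actual presets.

```python
# Hypothetical scene presets in the spirit of Huawei's Master AI:
# a classifier labels the frame, and the label selects a bundle of
# camera settings. All names and values here are illustrative only.
SCENE_PRESETS = {
    "flower":   {"saturation": 1.3, "sharpness": 1.1, "bokeh": True},
    "food":     {"saturation": 1.2, "sharpness": 1.0, "bokeh": False},
    "night":    {"iso": 1600, "saturation": 1.0, "bokeh": False},
    "portrait": {"saturation": 1.1, "sharpness": 0.9, "bokeh": True},
}

def apply_scene(detected_label, defaults):
    """Overlay the preset for the detected scene onto default settings.

    An unrecognised label leaves the defaults untouched, so the
    camera degrades gracefully when the classifier is unsure.
    """
    settings = dict(defaults)
    settings.update(SCENE_PRESETS.get(detected_label, {}))
    return settings
```

Keeping the presets as data rather than code is what lets a vendor ship 19 scenes today and more tomorrow without touching the capture pipeline.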
Apple is no stranger to putting AI chips in its phones, having introduced the A11 Bionic chip to power its infrared-enabled Face ID technology. With the launch of the latest generation of iPhones, the XS and XR lineup, Apple has introduced a host of camera improvements that harness the power of the Neural Engine. The cameras also capitalise on the TrueDepth infrared sensor for portrait selfies and offer an adjustable depth of field for pictures.
According to Apple, this is done by "integrating the ISP, the neural engine and advanced algorithms". The bottom line is that the iPhone's camera has improved markedly, with the A12 Bionic chip playing a huge role in this advancement. The chip performs a multitude of operations: first recognising the subject's face, then landmarking it, and finally mapping the subject's depth in the frame. The result is a shot with a narrow depth of field and heavy focus on the subject.
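Once a depth map exists, synthetic shallow depth of field amounts to blurring each pixel in proportion to its distance from the subject's depth plane. The sketch below illustrates that final compositing step on a grayscale image; it is a minimal stand-in for Apple's pipeline, with a naive box blur in place of a proper lens-shaped kernel.

```python
import numpy as np

def box_blur(img, radius):
    """Naive 2D box blur over a grayscale image (illustrative only)."""
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def synthetic_bokeh(image, depth, subject_depth, strength=4.0):
    """Blend a sharp image with a blurred copy, weighted by depth.

    Pixels near subject_depth stay sharp; pixels farther away lean
    toward the blurred copy, approximating a narrow depth of field.
    """
    blurred = box_blur(image, radius=2)
    # Per-pixel weight in [0, 1]: 0 at the subject plane, 1 far away.
    w = np.clip(np.abs(depth - subject_depth) * strength, 0.0, 1.0)
    return (1.0 - w) * image + w * blurred
```

Because the blur is applied after capture from a depth map, the aperture can be "re-opened" in editing, which is exactly what the iPhone's adjustable depth-of-field slider exposes.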
All in all, Apple has carved out a niche for itself in the AI camera market: a dedicated neural processor that it claims is more powerful and capable than Google's offering, paving the way for further camera optimisations down the line.
In a movement that began around the same time as its competitors', Samsung joined the field with the AI-enabled camera features of the Galaxy Note 9. These features were driven by a standalone Image Processing Unit, similar to Google's approach. The Note 9 offered scene optimisation across more than 20 different scenes and vastly improved camera quality over the S9.
The trend continued with the launch of the S10, this time with tighter integration into the phone's hardware. This allowed Samsung to further improve the IPU-enabled scene selection, and to add HDR10+ video recording, an industry first in a smartphone. The phone also offers shot suggestions, combining multiple photos so that the user gets the best shot every time.