Samsung Creates On-Device Lightweight AI Technology That Is 8 Times Faster

South Korean tech giant Samsung Electronics has already begun making huge investments in emerging technologies. The mobile giant has set out to become a leading force in building and energising an ecosystem of innovative businesses for the digital economy.

Recently, researchers from the Samsung Advanced Institute of Technology (SAIT) announced the successful development of an On-Device AI lightweight technology. Existing deep learning models on servers typically use 32-bit networks, but this lightweight technology manages to make computations 8 times faster than those 32-bit deep learning networks. The system is said to require less hardware and less electricity, and to compute data directly on the device itself.



Behind The Technique

According to the researchers, reducing the bit-widths of the activations and weights of deep networks while preserving recognition accuracy makes them far more efficient on resource-limited devices such as mobile phones than existing solutions.

The researchers used a method called quantisation-interval-learning (QIL), which allows quantised networks to match the accuracy of full-precision (32-bit) networks at bit-widths as low as 4-bit, and minimises the accuracy degradation at even lower bit-widths such as 3-bit and 2-bit. The researchers stated that the 4-bit networks preserve the accuracy of the full-precision networks across various architectures, the 3-bit networks yield comparable accuracy to the full-precision networks, and the 2-bit networks suffer only minimal accuracy loss.
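To make the idea concrete, here is a minimal sketch of the core operation behind low-bit quantisation: clipping values to an interval and rounding them to a small set of discrete levels. This is an illustrative NumPy example, not Samsung's implementation; in QIL the clipping interval is a learnable parameter trained jointly with the network, whereas the `clip` value below is simply fixed by hand.

```python
import numpy as np

def quantize(w, bits, clip):
    """Symmetric uniform quantisation of w to 2**bits - 1 levels
    within the interval [-clip, clip]. Values outside the interval
    are saturated, which is what the learned interval in QIL controls."""
    levels = 2 ** bits - 1
    w = np.clip(w, -clip, clip)
    # Map the interval to [0, levels], round to the nearest integer
    # level, then map back to the original range.
    q = np.round((w + clip) / (2 * clip) * levels)
    return q / levels * (2 * clip) - clip

weights = np.array([-1.2, -0.3, 0.05, 0.4, 0.9])
print(quantize(weights, bits=4, clip=1.0))
```

With `bits=4` every weight is represented by one of just 15 values, so multiply-accumulate operations can run on much narrower, cheaper arithmetic units, which is where the claimed speed and power gains come from.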


Chang-Kyu Choi, Vice President and head of the Computer Vision Lab at SAIT, said, “Ultimately, in the future we will live in a world where all devices and sensor-based technologies are powered by AI.” He added, “Samsung’s On-Device AI technologies are lower-power, higher-speed solutions for deep learning that will pave the way to this future. They are set to expand the memory, processor and sensor market, as well as other next-generation system semiconductor markets.”

Deep learning is a core part of artificial intelligence. Existing algorithms, however, run at only moderate speed and carry considerable computational weight. This On-Device AI technology promises high-speed, low-power computation without the need for a cloud service.

The researchers claimed that this technology is 8 times faster than the current technology. According to them, the On-Device AI technology pairs its AI capabilities with an NPU that computes data directly on the device itself, unlike existing AI techniques, which require a cloud server for deployment.

Benefits of On-Device AI

Privacy Protection

The researchers claim that the On-Device AI technology will help users keep their personal biometric information, such as fingerprints, iris scans and face scans, used for device authentication safe on the device.

Low Latency

This new technology will allow users to process large amounts of data with minimal latency while consuming less electricity.

Low Power & Low Cost

According to the researchers, this technology operates on its own and provides quick, stable performance for use cases such as virtual reality and autonomous driving. It should also reduce the cost of building the cloud infrastructure currently used for AI operations.

Related Developments In AI

Earlier, in 2018, Samsung Electronics introduced a mobile processor called the Exynos 9820, whose neural processing unit (NPU) gives it AI capabilities without depending on an external cloud server, allowing it to perform AI-related functions seven times faster than its forerunner. This capability mirrors one of the core features of On-Device AI: the ability to compute large amounts of data at high speed without consuming excessive amounts of electricity.

This year in June, the tech giant also announced plans to strengthen its neural processing unit (NPU) capabilities to further extend the reach of its artificial intelligence (AI) solutions. The researchers plan to apply this algorithm not only to mobile System on Chip (SoC) designs but also to memory and sensor solutions.

In further research on the On-Device AI technology, the researchers will apply non-linear functions and Bayesian approaches for more accurate parameterisation.


Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
