
Samsung Creates On-Device Lightweight AI Technology That Is 8 Times Faster


South Korean tech giant Samsung Electronics has been making huge investments in emerging technologies. The mobile tech giant aims to become a leading force in building and energising the ecosystem of innovative businesses for the digital economy.

Recently, researchers from the Samsung Advanced Institute of Technology (SAIT) announced the successful development of On-Device AI lightweight technology. Deep learning on servers currently relies on 32-bit (full-precision) networks, but this lightweight technology performs computations 8 times faster than 32-bit deep learning networks. The system is said to require less hardware and less electricity, and it computes data directly on the device itself.
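The eight-fold figure tracks the drop in bit-width: a 4-bit representation is an eighth the size of a 32-bit one, and integer arithmetic at that width is correspondingly cheaper on an NPU. As a rough illustration only (this is a sketch, not Samsung's implementation; the matrix size, scale choice and packing scheme are assumptions for demonstration):

```python
import numpy as np

# Toy illustration of the 32-bit -> 4-bit reduction: uniformly quantise
# float32 weights to signed 4-bit integers and pack two values per byte.
# Storage shrinks by 8x; compute savings on integer hardware follow a
# similar ratio, though real speed-ups depend on the NPU.
w = np.random.randn(1024, 1024).astype(np.float32)

scale = np.abs(w).max() / 7                        # map onto the signed 4-bit range -8..7
q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)

nib = (q & 0x0F).astype(np.uint8).ravel()          # keep the low 4 bits of each weight
packed = nib[0::2] | (nib[1::2] << 4)              # two 4-bit weights per byte

print(f"float32 weights: {w.nbytes / 2**20:.2f} MiB")       # 4.00 MiB
print(f"packed 4-bit:    {packed.nbytes / 2**20:.2f} MiB")  # 0.50 MiB, i.e. 8x smaller
```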

Behind The Technique

According to the researchers, reducing the bit-widths of the activations and weights of deep networks while preserving data-recognition accuracy is far more efficient on resource-limited devices such as mobile phones than the existing solutions.

The researchers used a method called quantisation-interval-learning (QIL), which allows quantised networks to maintain the accuracy of full-precision (32-bit) networks at bit-widths as low as 4-bit, and minimises accuracy degradation under further bit-width reduction to 3-bit and 2-bit. The researchers stated that the 4-bit networks preserve the accuracy of the full-precision networks across various architectures, the 3-bit networks yield accuracy comparable to the full-precision networks, and the 2-bit networks suffer only minimal accuracy loss.
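The core idea of QIL is to make the quantisation interval itself a learnable parameter trained with the task loss: weights below the interval are pruned to zero, weights above it are clipped, and weights inside it are mapped to discrete levels. A minimal PyTorch sketch of that idea follows the interval parameterisation from the QIL formulation, but the initial values and the framing here are assumptions for illustration, not Samsung's published settings:

```python
import torch
import torch.nn as nn

class QILQuantizer(nn.Module):
    """Sketch of a QIL-style weight quantiser: the interval
    [center - distance, center + distance] is learnable and is trained
    jointly with the network via the task loss."""

    def __init__(self, bits=4, gamma=1.0):
        super().__init__()
        self.bits = bits
        self.gamma = gamma          # gamma > 1 bends the mapping inside the interval
        # Assumed initial values; in practice `distance` is kept positive.
        self.center = nn.Parameter(torch.tensor(0.3))
        self.distance = nn.Parameter(torch.tensor(0.2))

    def forward(self, w):
        # Transformer: prune |w| below the interval to 0, clip |w| above
        # it to 1, and map the rest (optionally bent by gamma) into [0, 1].
        alpha = 0.5 / self.distance
        beta = 0.5 - 0.5 * self.center / self.distance
        w_hat = (alpha * w.abs() + beta).clamp(0.0, 1.0).pow(self.gamma) * w.sign()

        # Discretiser: round to 2^(bits-1) - 1 uniform levels per sign.
        # The straight-through estimator lets gradients reach center and
        # distance during training despite the non-differentiable round.
        levels = 2 ** (self.bits - 1) - 1
        w_q = torch.round(w_hat * levels) / levels
        return w_hat + (w_q - w_hat).detach()

# Example: quantise a weight tensor to 4 bits (15 discrete levels in [-1, 1]).
quantizer = QILQuantizer(bits=4)
w4 = quantizer(torch.randn(64, 64))
```

In the full method, a separate quantiser with its own interval is learned for the weights and the activations of each layer; the sketch above shows only the weight path.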

Chang-Kyu Choi, Vice President and head of the Computer Vision Lab at SAIT, said, “Ultimately, in the future we will live in a world where all devices and sensor-based technologies are powered by AI.” He added, “Samsung’s On-Device AI technologies are lower-power, higher-speed solutions for deep learning that will pave the way to this future. They are set to expand the memory, processor and sensor market, as well as other next-generation system semiconductor markets.”

Deep learning is a core part of artificial intelligence. Existing algorithms are, at present, only moderately fast and computationally heavy, but this On-Device AI technology performs high-speed, low-power computations without the need for a cloud service.

The researchers claimed that this technology is 8 times faster than the current technology. According to the researchers, the On-Device AI technology pairs AI capabilities with an NPU that computes data directly on the device itself, unlike existing AI techniques that require a cloud server for deployment.

Benefits of On-Device AI

Privacy Protection

The researchers claim that the On-Device AI technology will help users keep their personal biometric information, such as fingerprints, iris scans and face scans used for device authentication, safe on the device.

Low Latency

This new technology will allow users to process large amounts of data with minimal latency while consuming less electricity.

Low Power & Low Cost

According to the researchers, the technology operates on its own and delivers quick, stable performance for use cases such as virtual reality and autonomous driving. Because it does not depend on remote servers, it also reduces the cost of the cloud infrastructure currently used for AI operations.

Related Developments In AI

Earlier, in 2018, Samsung Electronics introduced the Exynos 9820 mobile processor, whose neural processing unit (NPU) lets it perform AI-related functions seven times faster than its predecessor without depending on an external cloud server. This capability mirrors one of the core features of On-Device AI: computing large amounts of data at high speed without consuming excessive amounts of electricity.

In June this year, the tech giant also announced plans to strengthen its neural processing unit (NPU) capabilities to further extend the reach of its artificial intelligence (AI) solutions. The researchers plan to apply this algorithm not only to mobile System on Chip (SoC) designs but also to memory and sensor solutions.

In further research on the On-Device AI technology, the researchers will apply linear functions and Bayesian approaches for more accurate parameterisation.


Ambika Choudhury

A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.