The landscape of emerging technology is growing at an exponential rate, and enterprises are rushing to adopt the latest trends. From cars to healthcare, AI has proven its business applicability.
In this article, we discuss how leading tech giants are using AI-based inference chips as the next step in the evolution of mobile devices. Smartphone manufacturers are now integrating faster AI capabilities into their devices, from the user interface to the apps people use every day. Today's AI-powered phones rely on techniques such as natural language processing (NLP), computer vision and facial recognition.
In 2017, prominent chipmaker Intel introduced its Movidius™ Myriad™ X vision processing unit (VPU), advancing Intel's end-to-end portfolio of AI solutions to deliver more autonomous capabilities across a wide range of product categories, including drones, robotics, smart cameras and virtual reality. The chip has a dedicated Neural Compute Engine for accelerating deep learning inference at the edge and is designed to run deep neural networks at high speed and low power without loss of accuracy. It combines imaging, visual processing and deep learning inference in real time.
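One common way edge chips like these run networks at low power without much accuracy loss is low-precision (e.g. 8-bit) quantization of weights. The snippet below is a minimal, generic sketch of post-training symmetric quantization, not the Myriad X's actual scheme:

```python
# Generic sketch of symmetric 8-bit post-training quantization,
# the kind of trick edge inference chips use to cut memory and power.
# Illustration only -- not Intel's actual implementation.

def quantize(weights, num_bits=8):
    """Map float weights to signed integers with a per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers."""
    return [qi * scale for qi in q]

weights = [0.31, -1.27, 0.05, 0.9]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)          # integer weights, e.g. [31, -127, 5, 90]
print(max_err)    # reconstruction error stays tiny
```

The integers are what the accelerator stores and multiplies; the float scale is applied once at the end, which is why 8-bit inference can match full-precision accuracy closely on many networks.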
The Mountain View tech giant's tiny Edge TPU is designed to run AI inference with high accuracy at the edge. The chip complements Cloud TPU and Google Cloud services to provide an end-to-end, cloud-to-edge hardware and software infrastructure that facilitates the deployment of customers' AI-based solutions.
Qualcomm has launched two AI-powered systems on chip (SoCs), the QCS603 and QCS605, designed for mobile and IoT edge inferencing in computer vision applications. The 10nm QCS605 and QCS603 are engineered to deliver powerful computing for on-device camera processing and machine learning, with exceptional power and thermal efficiency, across a wide range of IoT applications. Target applications for these chips include industrial IoT, action and VR360 cameras, AI-powered smart home security, smart displays and enterprise surveillance cameras.
Qualcomm Technologies Inc. (QTI) developed the Qualcomm Snapdragon Neural Processing Engine (NPE) SDK to accelerate neural network processing on Snapdragon devices, allowing developers to easily choose the optimal core for a specific user experience.
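The idea behind such SDKs is that different workloads map best to different compute cores (CPU, GPU or DSP). The toy sketch below illustrates that selection logic only; all names here are hypothetical and do not reflect the actual SNPE API:

```python
# Toy illustration of per-workload compute-core selection, the idea
# behind SDKs like the Snapdragon NPE. All names are hypothetical.

PREFERENCES = {
    # workload profile -> ordered fallback list of compute cores
    "low_latency":   ["DSP", "GPU", "CPU"],
    "high_accuracy": ["GPU", "CPU"],
    "background":    ["CPU"],
}

def pick_runtime(profile, available):
    """Return the first preferred core the device actually has."""
    for core in PREFERENCES[profile]:
        if core in available:
            return core
    raise RuntimeError(f"no usable core for profile {profile!r}")

# A device without a DSP falls back to the GPU for low-latency work:
print(pick_runtime("low_latency", {"CPU", "GPU"}))  # GPU
```

The real SDK performs this mapping per network (and can even fall back at load time if a core is unavailable), but the ordered-preference pattern captures what "choosing the optimal core" means in practice.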
Last year, NVIDIA announced its next-generation graphics architecture, Turing. It is more than a traditional GPU, featuring dedicated accelerators for both AI tasks and ray tracing. Ray tracing is a graphics rendering technique that simulates how light bounces through a virtual scene the way it does in the physical world, producing highly realistic results.
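At the heart of ray tracing is testing where a ray of light intersects scene geometry. The minimal sketch below shows the classic ray-sphere intersection test, the kind of computation Turing's RT cores accelerate in hardware (this is an illustrative textbook formula, not NVIDIA's implementation):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t >= 0,
    assuming `direction` is a unit vector (so the quadratic's a == 1).
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0      # nearer of the two roots
    return t if t >= 0 else None

# Camera at the origin looking down +z at a unit sphere centred at z=5:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A renderer fires millions of such rays per frame and recurses on each hit to follow reflections and shadows, which is why dedicated hardware for the intersection test matters.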
Also, NVIDIA and Arm announced that they are partnering to bring deep learning inferencing to the billions of mobile, consumer electronics and IoT devices that will enter the global marketplace. The partnership will integrate the open-source NVIDIA Deep Learning Accelerator (NVDLA) architecture, a programmable deep learning accelerator, into Arm's Project Trillium platform for machine learning. It is supported by NVIDIA's suite of developer tools, including upcoming versions of TensorRT, NVIDIA's inference optimiser and runtime. The open-source design allows cutting-edge features to be added regularly, including contributions from the research community.
Also, the NVIDIA GeForce GTX 1650 is reported to be launching in March at a price of $229.
Huawei launched its new generation of hyper-fast mobile chip, the Kirin 970, which comes with an embedded neural processing unit (NPU) for AI. It enables cloud-based and on-device AI to run alongside each other. The Kirin 970 features ultra-fast connectivity, intelligent computing capability, HD audio-visual effects and long battery life.
With the release of this chipset, Huawei aims to enable broader use of AI technology in the application field and provide consumers with a never-before-seen AI experience right in the palm of their hands. The chip also supports carrier aggregation, helping users worldwide achieve higher Internet speeds.
Last year, fabless semiconductor company MediaTek launched its new-generation AI architecture, the Helio P90 system on chip (SoC), for a major AI processing boost. The chip is claimed to be 4X more powerful than earlier Helio chipsets, helping users perform intensive AI tasks effortlessly, with faster and more accurate results, while delivering longer battery life.
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box. Contact: email@example.com