The success of algorithms can be traced back to the hardware they run on. The arrival of customised System-on-Chip hardware on the AI scene has transformed many real-world applications. AI accelerator chips have tremendous implications for applying AI in domains with tight constraints on size, weight and power, both in embedded applications and in data centres.
In a survey supported by the Assistant Secretary of Defense for Research and Engineering under the Air Force, researchers from the MIT Lincoln Laboratory Supercomputing Center discussed the current state of machine learning hardware and what the future holds. Over the past few months, we have seen many releases from top chip makers like Nvidia and Intel, and other products have been announced but kept under wraps until later this year or next. In this article, we take a look at accelerator chips that, according to the survey, have been announced but have not yet published any performance and power numbers.
Cloud AI 100
Qualcomm has announced its Cloud AI 100 accelerator, and with its experience in developing communications and smartphone technologies, it has the potential to deliver a chip with high performance at a low power draw. The Qualcomm Cloud AI 100 uses advanced signal processing and cutting-edge power efficiency to support AI solutions across multiple environments, including the data centre, cloud edge, edge appliances and 5G infrastructure.
Blaize GSP
Last month, Blaize announced its Graph Streaming Processor (GSP). The Blaize embedded and accelerator platforms are built on the GSP architecture, which is designed for the demands of edge computing.

Blaize claims to have introduced a new class of silicon: a fully programmable GSP architecture that leverages task-level parallelism and streaming execution in an intuitive manner. Developers can take advantage of extremely low energy consumption, high performance and superior scalability.
With 16 GSP cores and 16 TOPS of AI inference performance within a tiny 7W power envelope, the GSP delivers up to 60x better system-level efficiency than GPUs and CPUs for edge AI applications.
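As a quick back-of-the-envelope check, those headline figures work out to roughly 2.3 TOPS per watt at the chip level (the 60x figure is Blaize's system-level efficiency claim, not this raw ratio):

```python
# Back-of-the-envelope efficiency from Blaize's published figures.
gsp_tops = 16.0      # claimed AI inference throughput (TOPS)
gsp_power_w = 7.0    # claimed power envelope (W)

print(f"GSP efficiency: {gsp_tops / gsp_power_w:.2f} TOPS/W")  # ~2.29 TOPS/W
```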
Intel’s Loihi
Neuromorphic research is an emerging field within AI hardware. The intuition is to develop techniques inspired by how real neurons in the brain function, hence the name neuromorphic. The low energy consumption and high-quality output of biological neurons have spurred researchers to develop Spiking Neural Networks (SNNs), but these SNNs need hardware of their own.
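To make the idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic unit most SNNs are built from. The parameter values are illustrative assumptions and do not describe any of the chips below:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates incoming current, decays ("leaks") each step, and emits a
# binary spike when it crosses a threshold. Values are illustrative only.

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Return a list of binary spikes (1 = spike, 0 = silent)."""
    v = 0.0                        # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i           # leaky integration of the input
        if v >= threshold:         # threshold crossing -> emit a spike
            spikes.append(1)
            v = 0.0                # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant weak input produces sparse, periodic spikes.
print(lif_neuron([0.3] * 20))
```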
Intel's Loihi, introduced in 2017, is a fifth-generation self-learning neuromorphic research test chip. Loihi's 128-core design is based on a specialised architecture optimised for SNN algorithms and is fabricated on 14nm process technology. Loihi supports the operation of SNNs, which do not need to be trained the way convolutional neural networks are. Know more about Loihi here.
Akida
BrainChip's Akida NSoC represents a new breed of neural processing device for edge AI devices and systems. Each Akida NSoC has effectively 1.2 million neurons and 10 billion synapses, which the company says represents orders of magnitude better efficiency than other neural processing devices on the market. Comparisons to leading DNN accelerators show an order of magnitude better images/second/watt on industry-standard benchmarks such as MobileNet, MobileNet-SSD and keyword spotting, while maintaining excellent accuracy.
aiCTX
SynSense (formerly aiCTX) is developing DynapCNN, a low-power, low-latency neuromorphic accelerator that performs single-sample inference at under a milliwatt and in under 10 ms. DYNAP-CNN is a fully configurable, digital, event-driven neuromorphic processor with 1M ReLU spiking neurons per chip for implementing Spiking Convolutional Neural Networks (SCNNs). It is also scalable, enabling deep networks with an unlimited number of layers to be implemented over multiple interconnected DYNAP-CNN chips.
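The event-driven idea behind an SCNN accelerator is that work is done per input spike rather than per frame. A toy sketch of event-driven convolution, with shapes and values chosen purely for illustration (this is not DYNAP-CNN's actual pipeline):

```python
import numpy as np

# Toy event-driven convolution: rather than convolving a dense frame, each
# input spike (an event at row r, column c) adds the kernel into the output
# map around that position, so work scales with the number of events rather
# than the frame size. Shapes and values are illustrative only.

H, W, K = 32, 32, 3                       # frame size and kernel size
kernel = np.ones((K, K)) / (K * K)        # a simple averaging kernel

events = [(5, 7), (20, 12), (5, 8)]       # sparse input spikes (row, col)

pad = K // 2
padded = np.zeros((H + 2 * pad, W + 2 * pad))
for r, c in events:
    # accumulate this spike's contribution to the neighbouring outputs
    padded[r:r + K, c:c + K] += kernel
output = padded[pad:-pad, pad:-pad]

print(f"{len(events)} events processed; output sum = {output.sum():.2f}")
```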
Anaflash
In July, ANAFLASH Inc. was awarded a National Science Foundation (NSF) Small Business Innovation Research (SBIR) grant of $750,000 to conduct R&D work on logic-compatible non-volatile neural network accelerators using an analogue compute-in-memory architecture. The Anaflash chip is an eFlash-based spiking neuromorphic chip that encodes 320 neurons with 68 interconnecting synapses per layer.
The company develops energy-efficient semiconductor solutions for battery-powered smart edge IoT devices. Its cost-effective, standard-logic-based technologies enable more reliable and scalable non-volatile computing. ANAFLASH aims to provide smart edge AI solutions for users who are concerned about battery lifetime and privacy protection.
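Compute-in-memory designs like this perform multiply-accumulate operations where the weights are stored instead of moving them to a separate processor. A minimal numerical sketch of the idea, with made-up sizes and values (this does not describe ANAFLASH's actual circuit):

```python
import numpy as np

# Illustrative model of analogue compute-in-memory: weights are stored as
# cell conductances in a crossbar array, input voltages are applied to the
# rows, and each column's output current is the dot product of the inputs
# with that column's weights (Ohm's law plus Kirchhoff's current law).
# Sizes and values are arbitrary examples, not ANAFLASH's design.

rng = np.random.default_rng(0)
weights = rng.uniform(0, 1, size=(8, 4))   # conductances: 8 inputs x 4 neurons
inputs = rng.uniform(0, 1, size=8)         # input voltages on the rows

# The whole matrix-vector product happens "in memory" in one analogue step.
column_currents = inputs @ weights          # one output current per neuron
print(column_currents)
```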
NeuronFlow
GrAI Matter Labs' NeuronFlow chip has 1,024 neurons per chip and can operate at 8-bit or 16-bit integer precision. NeuronFlow is a neuromorphic, many-core, dataflow architecture that exploits brain-inspired concepts to deliver a scalable, event-based processing engine for neural networks in Live AI applications. NeuronFlow's design is inspired by brain biology, but it is not necessarily biologically plausible. The main design goal is the exploitation of sparsity to dramatically reduce latency and power consumption, as required by sensor processing at the edge.
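The sparsity argument is simple: if most sensor inputs do not change between frames, an event-driven engine only needs to touch the neurons whose inputs changed. A toy sketch of that delta-based update (the sizes and threshold are illustrative assumptions, not NeuronFlow's actual mechanism):

```python
import numpy as np

# Toy event-driven update: instead of recomputing every output for each new
# frame, propagate only the inputs that changed since the previous frame.
# Sizes and threshold are illustrative only.

rng = np.random.default_rng(1)
weights = rng.normal(size=(64, 16))            # dense layer: 64 inputs -> 16 outputs

prev_frame = rng.normal(size=64)
outputs = prev_frame @ weights                 # full computation once

new_frame = prev_frame.copy()
new_frame[:3] += 0.5                           # only 3 of 64 inputs actually change

delta = new_frame - prev_frame
events = np.nonzero(np.abs(delta) > 1e-6)[0]   # indices of changed inputs

# Event-driven update: work proportional to the number of events, not inputs.
outputs += delta[events] @ weights[events]

print(f"{len(events)} of {len(new_frame)} inputs processed")
assert np.allclose(outputs, new_frame @ weights)
```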
Koniku
The startup Koniku is modelling circuitry after the brain by designing a co-processor built with biological neurons. The core of the device is a structured microelectrode array system (SMEAS) that they call a Konikore. The company has demonstrated that keeping neurons alive is a solvable engineering control problem: living neurons operate in a defined parameter space, and Koniku is developing hardware and algorithms that control the parameter space of their environment.
LightOn Aurora
We have heard about GPUs and TPUs; LightOn is building what it calls OPUs, or Optical Processing Units. The company is developing photonic AI chips at scale. OPUs integrate tightly with CPUs and GPUs to boost their respective performance, and they can be accessed through an open-source Python API called LightOnML and through a lower-level, proprietary API called LightOnOPU. Photonics is another field to watch in AI hardware going forward.
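At their core, OPUs perform very large random projections optically: the input modulates light that passes through a scattering medium, and a camera reads out what amounts to the squared magnitude of a fixed random matrix multiply. A minimal numerical stand-in for that operation (this simulates the concept and is not the actual LightOnML API):

```python
import numpy as np

# Numerical stand-in for what an optical processing unit (OPU) computes:
# y = |R x|^2, where R is a large, fixed random matrix realised physically
# by light scattering. Sizes are arbitrary; this simulates the concept and
# does not use the real LightOnML API.

rng = np.random.default_rng(42)
n_features, n_components = 784, 2_000

# Fixed complex random "transmission matrix" (the scattering medium).
R = (rng.normal(size=(n_components, n_features))
     + 1j * rng.normal(size=(n_components, n_features))) / np.sqrt(n_features)

x = rng.normal(size=n_features)      # one input sample
y = np.abs(R @ x) ** 2               # intensity measured by the camera

print(y.shape)                       # (2000,) nonlinear random features
```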
For more information regarding the state of AI hardware, check this paper.