Researchers and technologists around the globe are now striving to develop artificial general intelligence (AGI) with one common goal: building efficient models that behave more like humans.
China, meanwhile, has been pushing hard to accelerate AI and is investing heavily in the sector. According to reports, China's AI spending in the technology and communication industry grew by 71.6% in 2018 to reach $320 million, and is expected to grow at a CAGR of 25.9%, from $481.2 million in 2019 to $2,406.1 million by 2025.
Last month, a team of Chinese researchers introduced a new hybrid chip, called the Tianjic chip, to stimulate AGI development by paving the way towards more generalised hardware platforms. The chip is a hybrid platform that supports computer-science-oriented ML algorithms as well as neuroscience-inspired models and algorithms. To support the parallel processing of large networks, or of multiple networks concurrently, the chip adopts a many-core architecture with localised memory distributed across the cores for timely and seamless communication.
Fusing Neuroscience With ML
The ML approach to AGI involves explicit algorithms executed on computers. For instance, artificial neural networks are inspired by the cortex in terms of spatial complexity and are used in applications such as speech recognition, image classification and language processing.
The neuroscience-based approach to AGI, on the other hand, attempts to mimic the cerebral cortex, drawing on observations of the interaction between memory and computing, rich spatiotemporal dynamics, spike-based coding schemes and various learning rules. Both approaches solve problems efficiently within their own domains.
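The contrast between the two paradigms can be sketched in a few lines of Python. This is an illustrative toy, not the Tianjic programming model; the weights, threshold and leak values below are assumptions chosen for clarity.

```python
def dot(w, x):
    """Inner product of two equal-length sequences."""
    return sum(wi * xi for wi, xi in zip(w, x))

def ann_neuron(x, w, b):
    """Computer-science-style artificial neuron: weighted sum plus ReLU."""
    return max(0.0, dot(w, x) + b)

def lif_neuron(spike_train, w, threshold=1.0, leak=0.9):
    """Neuroscience-style leaky integrate-and-fire (LIF) neuron.

    Integrates weighted input spikes over time, leaks membrane potential,
    and fires (then resets) whenever the threshold is crossed.
    """
    v, out = 0.0, []
    for spikes in spike_train:              # one entry per timestep
        v = leak * v + dot(w, spikes)
        if v >= threshold:
            out.append(1)                   # emit a spike
            v = 0.0                         # reset membrane potential
        else:
            out.append(0)
    return out

w = [1.0, -0.5]
print(ann_neuron([0.5, 0.5], w, b=0.25))        # single continuous value: 0.5
print(lif_neuron([[1, 0], [1, 0], [0, 1]], w))  # spike train over time: [1, 1, 0]
```

The ANN neuron maps a static input to one continuous activation, while the LIF neuron produces a binary spike train that unfolds over timesteps; a hybrid platform like Tianjic aims to run both kinds of model on shared hardware.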
However, achieving AGI remains a distant goal, and to move closer to it, researchers are incorporating more biologically inspired models and algorithms into existing artificial neural networks.
Keeping both ML and neuroscience in mind, the researchers set out to build an AGI system with several features:
- support for vast and complex neural networks that can represent rich spatial, temporal and spatiotemporal relationships
- support for hierarchical, multi-granular and multi-domain network topologies
- support for a wide range of models, algorithms and coding schemes
- support for the intertwined cooperation of multiple specialised neural networks, designed for different tasks, in parallel processing
Fig: Hybrid Architecture Of Tianjic Chip (Source)
The Tianjic chip consists of 156 FCores, containing approximately 40,000 neurons and 10 million synapses. Fabricated with 28-nm process technology, the chip occupies a die area of 3.8 × 3.8 mm². Each FCore comprises axon, dendrite, soma and router blocks, along with a controller and other overheads. With its distributed on-chip memory and decentralised many-core architecture, the chip provides an internal memory bandwidth of more than 610 gigabytes per second (GB/s) and delivers an effective peak performance of 1.28 tera operations per second per watt (TOPS/W) in ANN mode when running at 300 MHz.
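A quick back-of-envelope calculation from these published totals gives the average resources per FCore. This is illustrative arithmetic only; the paper reports chip-wide totals, not a per-core specification.

```python
# Published Tianjic chip totals
fcores = 156
neurons = 40_000
synapses = 10_000_000

# Average resources per FCore (rounded to the nearest integer)
neurons_per_core = round(neurons / fcores)
synapses_per_core = round(synapses / fcores)

print(neurons_per_core)    # ~256 neurons per FCore
print(synapses_per_core)   # ~64103 synapses per FCore
```

So each FCore handles, on average, on the order of a few hundred neurons and tens of thousands of synapses, which is consistent with the chip's many-core, localised-memory design.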
Advantages Of The Tianjic Chip
- The Tianjic chip can deliver 1.6 to 102 times higher throughput and 12 to 104 times better power efficiency than a GPU, simply by forming a parallel on-chip memory hierarchy and organising the dataflow in a streaming fashion.
- It enables the concurrent deployment of multiple expert networks within one chip, including most types of SNNs and ANNs.
- It supports heterogeneous neural networks with a deep fusion of the two paradigms.
- The use of this chip enables the exploration of more biologically plausible cognitive models.
The Tianjic chip is able to support diverse neural network models: neuroscience-inspired networks such as SNNs and rate-based biologically inspired neural networks, as well as computer-science-oriented networks such as MLPs, CNNs and RNNs. In one of our earlier articles, we covered Xuantie 910, the powerful RISC-V processor chip from Chinese tech giant Alibaba.
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box. Contact: firstname.lastname@example.org