
MIT Researchers Develop New Silicon Chip To Boost Applications In AR/VR, 5G & Gaming

GRAND chip can effectively decode any redundancy code up to 128 bits in length, with only about a microsecond of latency.

Researchers at MIT, Maynooth University, and Boston University recently created the first silicon chip that uses a universal decoding algorithm called GRAND (Guessing Random Additive Noise Decoding), which can accurately decode any code, irrespective of its structure.

GRAND eliminates the need for multiple, computationally complex decoders. The chip enables increased efficiency that could benefit applications in augmented and virtual reality (AR/VR), 5G networks, gaming, and connected devices that process a high volume of data with minimal delay.

Backed by the Battelle Memorial Institute and Science Foundation Ireland, the research is expected to be presented at the joint European Solid-State Device Research and Circuits Conference (ESSCIRC-ESSDERC) held this week.

How Noise Hampers Data Transfer

Every piece of information that travels over the internet, from paragraphs in an email to 3D graphics in a virtual reality environment, can be altered by the noise it encounters along the way, say, electromagnetic interference from a microwave or Bluetooth device. Typically, the data are coded so that when they arrive at their destination, a decoding algorithm can undo the negative effects of those disturbances and retrieve the original data.

Traditionally, most error-correcting codes and decoding algorithms have been designed together. As a result, each code has a structure that corresponds to a distinct, highly complex decoding algorithm, which often requires dedicated hardware. GRAND removes the need for this collection of complex hardware components.

To understand how GRAND works, think of these codes as a redundant hash (a string of 1s and 0s) added to the end of the original data. The rules for creating that hash are stored in a specific codebook.
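To make the codebook idea concrete, here is a minimal, purely illustrative sketch in Python. The specific code used here (a few parity bits appended to a 4-bit message) is an assumption chosen for brevity, not the code actually used on the GRAND chip.

```python
from itertools import product

def hash_bits(msg):
    # Toy redundancy: three parity bits computed over overlapping subsets
    # of a 4-bit message (an illustrative choice, not the chip's code).
    return (msg[0] ^ msg[1] ^ msg[2],
            msg[1] ^ msg[2] ^ msg[3],
            msg[0] ^ msg[2] ^ msg[3])

def encode(msg):
    # Append the redundant hash to the original bits to form a 7-bit codeword.
    return tuple(msg) + hash_bits(msg)

# The codebook is the set of all valid codewords the decoder will check against.
CODEBOOK = {encode(msg) for msg in product((0, 1), repeat=4)}
```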

As the encoded data travel across the network, they are affected by noise, often generated by electronic devices, that disrupts the signal. When the coded data and the noise that affected them arrive at their destination, the decoding algorithm consults its codebook and uses the structure of the hash to guess what the stored information is.

GRAND works by guessing the noise that affected the message and using that noise pattern to deduce the original information. It generates a series of noise sequences in the order they are likely to occur, subtracts them from the received data, and checks whether the resulting codeword is in the codebook.
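Below is a minimal sketch of that guessing loop, assuming a binary channel in which noise patterns with fewer flipped bits are more likely (real GRAND orders the guesses by their actual likelihood). The `noise_patterns` and `grand_decode` names are illustrative, and the codebook is the toy one from the sketch above.

```python
from itertools import combinations

def noise_patterns(n):
    # Yield noise sequences from most to least likely under this simple model:
    # no flipped bits first, then single flips, then double flips, and so on.
    for weight in range(n + 1):
        for positions in combinations(range(n), weight):
            pattern = [0] * n
            for p in positions:
                pattern[p] = 1
            yield tuple(pattern)

def grand_decode(received, codebook):
    n = len(received)
    for pattern in noise_patterns(n):
        # "Subtract" the guessed noise (XOR for binary symbols) from the received bits.
        candidate = tuple(r ^ e for r, e in zip(received, pattern))
        if candidate in codebook:   # membership check against the codebook
            return candidate        # the first hit is the most likely codeword
    return None                     # no valid codeword found: decoding failure
```

For example, if the toy codeword `(1, 0, 1, 1, 0, 0, 1)` arrives with its last bit flipped, `grand_decode((1, 0, 1, 1, 0, 0, 0), CODEBOOK)` recovers the original codeword after a single-bit guess.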

This becomes possible because the noise has a particular structure that allows the algorithm to guess what it might be, even though the noise appears random. Muriel Médard, a researcher at MIT, said it is similar to troubleshooting. 

Giving the example of a mechanic’s shop, she said, “If someone brings their car into the shop, the mechanic does not start by mapping the entire car to blueprints. Instead, they start by asking — what is the most likely thing to go wrong? Maybe it just needs gas. If that does not work, what’s next? Maybe the battery is dead.” 

Inside the GRAND Chip

The GRAND chip uses a three-tiered structure: the first stage tries the simplest possible solutions, while the two subsequent stages handle longer, more complex noise patterns. Each stage operates independently, which increases the system’s throughput and saves power.
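As a rough software analogy of that staging (an assumption for illustration only, reusing the `noise_patterns` generator from the sketch above, not a description of the chip’s actual microarchitecture), the guesses could be partitioned by how many bit flips each pattern contains:

```python
def tiered_patterns(n):
    # Partition the guesses so a cheap first tier handles the most likely cases
    # and later tiers handle longer, more complex noise patterns.
    patterns = list(noise_patterns(n))
    tier1 = [p for p in patterns if sum(p) <= 1]   # no flips or a single flip
    tier2 = [p for p in patterns if sum(p) == 2]   # double flips
    tier3 = [p for p in patterns if sum(p) >= 3]   # everything heavier
    return tier1, tier2, tier3
```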

The device is also designed to switch seamlessly between two codebooks: while one is being used to crack codewords, the other loads a new codebook, and decoding then switches over without any downtime or delay.

In their experiments, the researchers found that the GRAND chip could effectively decode any redundancy code up to 128 bits in length, with only about a microsecond of latency.

The Journey 

Previously, MIT researchers had demonstrated the algorithm’s success, but with their latest work, they have managed to showcase the effectiveness and efficiency of GRAND in hardware. Médard said that developing hardware for the novel decoding algorithm required the researchers first to toss aside their preconceptions.

She said they could have reused things that were already being done, but they decided instead to rethink every single aspect from scratch. “It was a journey of reconsideration,” added Médard.

What’s next?

As GRAND uses codebooks for verification, the researchers believe that the chip not only works with legacy codes but can be used with codes that haven’t even been introduced yet. 

For instance, in the case of 5G implementation, telecom providers and regulators have struggled to find common ground on which codes should be used in the new network, and regulators often end up choosing traditional codes for 5G infrastructure in different scenarios. Leveraging GRAND could help eliminate the need for such rigid standardisation in the future, said Médard.

Furthermore, the researchers believe that their chip could open a new wave of innovation in coding. “I am hoping this will recast the discussion, so it is not so standards-oriented, enabling people to use codes that already exist and create new codes,” she added.

In the coming months, the researchers plan to tackle the problem of soft detection with a new version of the GRAND chip, since the received data in soft detection are less precise. They also plan to test the chip’s ability to crack longer, more complex codes and to further adjust the structure of the silicon chip to improve its energy efficiency.

Amit Raja Naik

Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.