The traditional principles of wireless signal processing are based on decades-old algorithms. These aging approaches have proven counterproductive, especially when it comes to deploying power-hungry 5G networks. Like many other domains, the telecom industry is now looking to AI and deep learning for solutions, and NVIDIA’s GPUs have found themselves at the heart of this transition.
Some tough telecom problems have no tractable mathematical formulation, but AI can learn models of them directly from data.
Traditional computational techniques hit roadblocks when complex scheduling problems must be solved in under 100 microseconds. In 2018, Yan Huang, a PhD student at Virginia Tech, presented GPF, a GPU-based proportional-fair (PF) scheduler that can meet the ∼100 μs timing requirement.
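The PF metric that a scheduler like GPF parallelises is simple to state: at each scheduling interval, serve the user maximising the ratio of instantaneous rate to smoothed average throughput, then update the averages. A minimal single-carrier sketch in Python (the function name and the smoothing factor `alpha` are illustrative, not taken from the paper; the real challenge GPF addresses is doing this across many resource blocks within 100 μs):

```python
import numpy as np

def pf_schedule(inst_rates, avg_throughput, alpha=0.1):
    """Pick the user maximising the proportional-fair metric
    r_i / R_i (instantaneous rate over smoothed average throughput),
    then update the averages with exponential smoothing."""
    metric = inst_rates / avg_throughput
    chosen = int(np.argmax(metric))
    served = np.zeros_like(inst_rates)
    served[chosen] = inst_rates[chosen]
    new_avg = (1 - alpha) * avg_throughput + alpha * served
    return chosen, new_avg

# Example: user 1 has a modest rate but a poor throughput history,
# so the PF metric favours it over the faster user 0.
rates = np.array([10.0, 8.0, 2.0])
avg = np.array([5.0, 1.0, 1.0])
user, avg = pf_schedule(rates, avg)
```

The fairness comes from the denominator: a user starved in past intervals has a small average throughput, which inflates its metric until it gets served.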
Using GPUs changed the way researchers looked at AI techniques for 5G applications. This is where NVIDIA comes into the picture.
How NVIDIA Stepped In
“Fusing 5G, supercomputing, and AI has enabled us to create a revolutionary communications platform supporting, someday, trillions of always-on, AI-enabled smart devices. Combining our world-leading capabilities, NVIDIA and Ericsson are helping invent this exciting future. 5G is set to turbocharge the intelligent edge revolution,” said Huang, CEO of NVIDIA at the Mobile World Congress last year.
NVIDIA has also debuted Aerial, a software development kit for accelerating vRANs. Partners Ericsson, Microsoft and Red Hat are working with NVIDIA to deliver GPU-powered 5G at the edge of the network.
NVIDIA Aerial provides GPU advantages for 5G with the following tools:
- cuVNF: provides optimised input/output (IO) and packet placement on the GPU whereby 5G packets are directly sent to GPU memory from GPUDirect capable network interface cards
- cuBB: provides a fully offloaded 5G signal-processing pipeline (5G L1), delivering high throughput and efficiency by keeping all physical-layer processing within the GPU’s high-performance memory
- cuPHY: the L1 5G PHY component within cuBB, implementing the compute-intensive physical-layer signal processing on the GPU
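The bullets above share one idea: once received samples land in GPU memory, every physical-layer step runs there, with no round-trips to host memory between stages. A toy NumPy sketch of two chained PHY steps, OFDM demodulation followed by QPSK demapping, shows the pipeline shape (this runs on the CPU for clarity and does not use cuPHY’s actual kernels or APIs; all names are illustrative):

```python
import numpy as np

def ofdm_demodulate(time_samples, fft_size=64):
    """Step 1: transform received time-domain samples to subcarrier symbols."""
    return np.fft.fft(time_samples.reshape(-1, fft_size), axis=1)

def qpsk_demap(symbols):
    """Step 2: hard-decision QPSK demapping (bit 0 from the real part,
    bit 1 from the imaginary part; negative component maps to bit 1)."""
    bits = np.empty(symbols.size * 2, dtype=np.uint8)
    bits[0::2] = (symbols.real < 0).ravel().astype(np.uint8)
    bits[1::2] = (symbols.imag < 0).ravel().astype(np.uint8)
    return bits

# Build one OFDM symbol carrying known QPSK points, then run the pipeline.
rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, 128).astype(np.uint8)
symbols = ((1 - 2 * tx_bits[0::2].astype(float))
           + 1j * (1 - 2 * tx_bits[1::2].astype(float))) / np.sqrt(2)
time_samples = np.fft.ifft(symbols.reshape(1, 64), axis=1)
rx_bits = qpsk_demap(ofdm_demodulate(time_samples))
```

In a GPU-resident pipeline, both stages would be kernels consuming and producing device buffers; the throughput win comes from never copying intermediate results back to the host.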
These vRANs will bring cellular network operators the kind of operational efficiencies that cloud service providers already enjoy. Carriers will program network functions in high-level software languages, easing the work of adding new capabilities and deploying capacity where and when needed.
Addressing one of 5G’s top challenges, researchers at Arizona State University showed a new method for directing millimeter wave beams, leveraging AI and the ray-tracing features in NVIDIA Turing GPUs.
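In its simplest form, millimeter-wave beam selection reduces to picking, from a codebook of candidate beams, the one whose steering vector best matches the channel. A hedged toy sketch below does this by exhaustive search over a uniform linear array codebook; the ASU work replaces this kind of brute-force search with a learned, ray-tracing-assisted predictor, and none of the names or parameters here are from their method:

```python
import numpy as np

def steering_vector(angle_rad, n_antennas=16):
    """Array response of a half-wavelength-spaced uniform linear array."""
    n = np.arange(n_antennas)
    return np.exp(1j * np.pi * n * np.sin(angle_rad)) / np.sqrt(n_antennas)

# Codebook of 33 candidate beams spanning -60..60 degrees,
# and a line-of-sight channel arriving from roughly 20 degrees.
angles = np.deg2rad(np.linspace(-60, 60, 33))
codebook = np.stack([steering_vector(a) for a in angles])
channel = steering_vector(np.deg2rad(20.0))

# Exhaustive search: beamforming gain |w^H h|^2 for each candidate beam.
gains = np.abs(codebook.conj() @ channel) ** 2
best = int(np.argmax(gains))
```

The cost of this search grows with codebook size, which is why learning to predict the best beam from context, rather than sweeping every candidate, matters at mmWave frequencies.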
And Professor Terng-Yin Hsu described a campus network at Taiwan’s National Chiao Tung University that ran a software-defined cellular base station on NVIDIA GPUs.
“We are very much at the beginning, especially in AI for vRAN,” said Stanczak. “In the end, I think we will use hybrid solutions that are driven both by data and domain knowledge.”
Compared to 4G LTE, 5G targets a much broader set of use cases with a much more complex air interface. “AI methods, such as machine learning, are promising solutions to tackle these challenges,” said Hou of Virginia Tech.
A lurking concern behind 5G’s promise of delivering vastly more data than today’s networks is that it could also consume far more energy. A significant challenge remains: maintaining the efficiency of 5G deployments while bringing energy consumption down.
Industry leaders believe AI approaches can deliver breakthroughs in performance and spectral efficiency, which matters because spectrum is costly.