In a move aimed at making AI more accessible to developers and researchers, AMD unveiled ROCm 5.7 last month, the latest iteration of its open software ecosystem for accelerated computing. The company has now extended PyTorch machine learning (ML) support to AMD RDNA 3-based graphics cards via ROCm 5.7.
This means that developers working with ML models and algorithms in PyTorch can use AMD ROCm 5.7 on Ubuntu Linux to harness the parallel computing capabilities of the AMD Radeon RX 7900 XTX and Radeon PRO W7900 GPUs, which come equipped with 192 dedicated AI accelerators.
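In practice, the ROCm build of PyTorch reuses the familiar `torch.cuda` API, so existing CUDA-targeted code can run on a supported Radeon GPU without changes. A minimal sketch, assuming a ROCm-enabled PyTorch wheel is installed on Ubuntu (the snippet falls back to CPU when no GPU is present):

```python
import torch

# ROCm builds of PyTorch expose AMD GPUs through the torch.cuda
# namespace, so the usual device-selection idiom works unchanged.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small matrix multiply; on a Radeon RX 7900 XTX / PRO W7900 this
# dispatches to the GPU's AI accelerators, otherwise it runs on CPU.
x = torch.randn(1024, 1024, device=device)
y = x @ x

print(device, y.shape)
```

The same pattern applies to full training loops: moving the model and tensors to `device` is all that is needed to target the Radeon hardware.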
These accelerators promise up to 2X higher AI performance per compute unit compared with the preceding generation. Running ML on the desktop gives users a local, private, and cost-effective way to support training and inference, reducing reliance on cloud-based solutions.
Read: AMD’s Attempt to Break NVIDIA’s CUDA
AMD is also trying to break NVIDIA's CUDA monopoly in the AI parallel computing segment. The company made a significant leap with its recent bet on Nod.ai, an open-source AI software firm known for developing a portfolio of tools and systems for boosting AI applications on AMD hardware.
The software bet has been underway at AMD for some time. In August, the company announced the acquisition of Mipsology, a French AI startup and long-standing AMD partner that, like Nod.ai, has been developing AI software for the chipmaker.