There is no doubt about NVIDIA's supremacy in the AI world with its chips and GPUs. However, many have predicted that this dominance may start to crumble in the near future, given the number of competitors mushrooming in the space. Besides, even its customers are trying to build up chip capabilities of their own.
One of the biggest buyers of NVIDIA chips, Microsoft, has also recently decided to fund another AI chip startup. d-Matrix recently raised $110 million in a Series B funding round led by Temasek, Playground Global, and Microsoft. CEO Sid Sheth told Reuters, “This is capital that understands what it takes to build a semiconductor business. They’ve done it in the past. This is capital that can stay with us for the long term.”
Things get even more interesting here. Corsair C8, the company’s compute platform, comes with a groundbreaking claim: that it can displace the NVIDIA H100, the industry-leading GPU, which is still in shortage according to several reports. And not by a small margin: d-Matrix claims a 9X increase in throughput compared to the H100, and 27X compared to the A100.
How are companies planning to displace NVIDIA?
As d-Matrix explains, NVIDIA GPUs are not particularly optimised for running inference tasks for LLMs. Even after models are trained, a lot of GPUs are needed to handle the AI workloads, and relying on these high-end GPUs leads to excessive energy consumption. Paying for H100s and A100s just for inference is excessive and redundant for companies, and they are increasingly looking for alternatives.
This is not Microsoft’s first attempt at building AI chips. According to reports, the company has been developing its Athena AI chips since 2019, to which Microsoft and OpenAI have had secret access all this while; the chips were even tested for running GPT-4. Though there is no direct evidence that these chips were made for training AI models, it seems likely that they were meant for inference tasks.
However, Microsoft is not the first to do this. Meta, Google, IBM, and Amazon have all been trying to build their own AI chips to reduce their reliance on NVIDIA. Interestingly, all of these companies claim that their chips outperform NVIDIA's, yet they continue to rely on it all the same.
Nvidia will not have a monopoly on large-scale training & inference forever.

True. Also, many other NN accelerator chips are also under development.

— Elon Musk (@elonmusk) June 7, 2023
According to Elon Musk, a lot of companies are working on AI chips, though none of them are trying to build a direct alternative to NVIDIA; instead, a new segment of startups is focused on developing chips for inference. Moreover, Tesla’s Dojo chip is also in the making. There is not enough reason to believe it would be a competitor to NVIDIA, as Musk has already hoarded thousands of NVIDIA GPUs for training his own AI model.
NVIDIA is playing a different game altogether; it doesn’t care
Ever since 2017, startups have been trying to topple NVIDIA, but have failed to make even a dent. Startups like Modular, Qyber, and MatX are the latest, but even their plans to dethrone NVIDIA's AI chips have been diverted to tackling the company on the software front, by building alternatives to NVIDIA's CUDA, the platform for AI computing.
On the other hand, TSMC has said that the shortage of high performance computing (HPC) GPUs might continue for another 1.5 years. This possibly indicates that NVIDIA might also shift its focus to building lower-powered GPUs, designed for inference and fine-tuning tasks instead of training models.
Moreover, Microsoft’s investment in d-Matrix hints that the company might be shifting its focus to providing its customers with easier and cheaper inference capabilities, and away from building new models. Even OpenAI’s recent comments about not announcing GPT-5 until November may suggest that it is done building new models for the moment.
Regardless of all this, NVIDIA has been offering inference capabilities through its DGX Cloud. Given the price point, and that no one is actually able to compete with it, NVIDIA might drop the price of its servers and continue its monopoly. Moreover, the upcoming L40 GPU is also positioned as a power-efficient inference GPU. It is clear that the trillion-dollar company knows what it is doing.
To quote NVIDIA chief Jensen Huang on the GPU shortage and the competition in Silicon Valley over who gets how much: “There’s more, come get them. Everybody should win.” But for the moment, NVIDIA is going to continue its winning streak, even if Microsoft tries to go around it. There is definitely more coming from NVIDIA; we just have to wait.