
GPUs vs TPUs in AI – The Real Battle

While ChatGPT and Bard fight for their tech giant overlords, GPUs and TPUs work overtime to keep them running
As ChatGPT and Bard slug it out, two behemoths work in the shadows to keep them running: NVIDIA's CUDA-powered GPUs (Graphics Processing Units) and Google's custom-built TPUs (Tensor Processing Units). In other words, it's no longer about ChatGPT vs Bard, but TPU vs GPU, and how effectively each can do matrix multiplication.

Why Models Should be Optimised

Training costs are one of the biggest barriers to creating a large model. AI compute is generally measured in compute/GPU hours, which represent the time it takes for a model to be trained. Another metric, the petaflop/s-day (pf-day), is also used: 1 pf-day corresponds to compute nodes performing 10^15 operations (a petaflop) per second for an entire day. For context, the largest version of GPT-3 consists of 175 billion parameters.
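To make the pf-day figure concrete, here is a minimal sketch of the conversion: total floating-point operations divided by one petaflop sustained for a full day. The cluster throughput and training duration below are hypothetical numbers chosen for illustration, not figures from the article.

```python
# Illustrative sketch: expressing a training run's total compute in
# petaflop/s-days (pf-days). All workload numbers here are made up.

PFLOP = 10**15           # one petaflop = 10^15 floating-point operations
SECONDS_PER_DAY = 86_400

def pf_days(total_flops: float) -> float:
    """Total training compute expressed in petaflop/s-days."""
    return total_flops / (PFLOP * SECONDS_PER_DAY)

# Hypothetical example: a cluster sustaining 2 PFLOP/s for 10 days.
sustained_ops_per_sec = 2 * PFLOP
training_seconds = 10 * SECONDS_PER_DAY
total_ops = sustained_ops_per_sec * training_seconds

print(f"{pf_days(total_ops):.1f} pf-days")  # -> 20.0 pf-days
```

Read the other way, the same arithmetic turns a published pf-day figure back into wall-clock time for a given cluster throughput, which is how training-cost estimates are usually compared.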

Anirudh VK