GPU Vs CPU: The Future Think-Tanks Of AI

Computer graphics hardware has improved at a stellar pace. It is not the 2000s anymore; computing gets progressively faster every year. Now this field has taken the plunge into Artificial Intelligence (AI), and there are contrasting opinions about why Graphics Processing Units (GPUs) are preferred over Central Processing Units (CPUs) for AI, or the other way round. This article explains why the choice makes a difference.

The GPU Foothold:

Nvidia, the most popular GPU and processor manufacturer, is already ahead with its parallel computing techniques and has made its way into Machine Learning (ML), specifically deep learning. Thanks to data scientists across the world, many areas of deep learning, such as back-propagation, Natural Language Processing (NLP) and Artificial Neural Networks (ANN), among others, are advancing steadily and are already catching up with traditional technologies.

Deep learning uses non-linear processing units arranged in ANNs for data retrieval and transformation, wherein the output generated by the preceding abstraction layer serves as the input to the successive abstraction layer. This way the processing work is split into many identical, independent operations. Although CPUs fare reasonably well here, they are no match for the processing power of their counterparts, the GPUs. The reason is massively parallel cores: Nvidia's GPUs pack more than 3,500 cores, while Intel's flagship CPUs top out at around 30, and that is also why graphics cards are becoming more expensive.
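
To make the layer-by-layer idea concrete, here is a minimal sketch (not from the original article, with made-up layer sizes and random weights) of a feed-forward pass written in NumPy. Each layer is one large matrix multiplication followed by a non-linearity, and every element of each multiplication can be computed independently, which is exactly the kind of work that spreads across thousands of GPU cores.

```python
import numpy as np

def relu(x):
    # A simple non-linear processing unit, applied element-wise.
    return np.maximum(0.0, x)

def forward(batch, weights):
    """Push a batch of inputs through a stack of layers.

    The output of each layer becomes the input of the next one,
    mirroring the 'preceding layer feeds the successive layer' idea.
    """
    activation = batch
    for w in weights:
        # One layer: a large matrix multiplication plus a non-linearity.
        activation = relu(activation @ w)
    return activation

# Hypothetical sizes: a batch of 64 inputs, three 512 x 512 layers.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((512, 512)) * 0.01 for _ in range(3)]
batch = rng.standard_normal((64, 512))
print(forward(batch, weights).shape)  # (64, 512)
```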

Features Comparison:

| Feature / Attribute | GPU | CPU |
|---|---|---|
| Computing capability | High | Low |
| Core complexity | Simple | Complex |
| Number of cores | 100 to 4,000 | 4 to 30 |
| Performance | Built for parallel computing; ideal for ML | Built for sequential operations |
| Graphics rendering | 1 to 2 milliseconds per image, or less | 1 to 5 seconds per image |
| Compute throughput | 1 to 5 teraflops | 100 to 500 gigaflops |
| Latest additions | Nvidia Titan V, Tesla series and GTX 1050 series (expected soon) | Intel Core i7-8700K series |

 

The Rise Of GPUs:

Back in the 90s, GPUs were designed specifically for, and largely limited to, desktop gaming. In fact, a dedicated GPU was entirely optional when buying a computer. The focus gradually shifted from gaming to high-resolution imagery, and around 2011 to AI as well. Advances in extremely low-power technology can also be cited as one of the reasons for the growth and development of GPUs.

For example, consider the Google Brain project, an early deep learning experiment started in 2011, which analysed millions of YouTube images to identify cats. The graphical workload, combined with the accompanying computational processing, was handled well by the project's machines, which made the experiment a success and garnered media attention. Later, in 2016, the team ran another experiment on encrypting communications, in which a set of AI systems was instructed to interact with one another using cryptography. The outcome was positive: the AI systems developed their own encryption and decryption schemes over the course of the experiment.

In light of these instances, graphical requirements kept rising over the following years, driven by ever-larger volumes of images and steadily growing resolutions. GPUs were therefore developed and manufactured on a large scale as demand for more graphical processing power emerged.

As of 2017, Nvidia's recent additions to its Tesla series of GPUs include the K40 and K20 processors, which the company says deliver up to 5 tera floating-point operations per second (teraflops) with a memory capacity of up to 12 GB. These processors come with more than 2,500 cores, which should be more than sufficient to tackle ML algorithms at an advanced level.
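
As a rough back-of-the-envelope illustration (not from the original article, and using a made-up layer size), the snippet below counts the multiply-add operations in one large matrix multiplication and divides by the quoted peak rates: 5 teraflops for the GPU versus 500 gigaflops, the upper end of the CPU column above. Real performance depends on memory bandwidth and utilisation, so this only bounds the gap.

```python
# Hypothetical workload: multiply a (4096 x 4096) activation matrix by a
# (4096 x 4096) weight matrix, a typical building block of a deep network.
m = n = k = 4096
flops = 2 * m * n * k          # one multiply and one add per output element

gpu_peak = 5e12                # 5 teraflops, the figure quoted for the Tesla series
cpu_peak = 500e9               # 500 gigaflops, the upper end of the CPU column above

print(f"Operations per layer : {flops:.2e}")
print(f"GPU time at peak     : {flops / gpu_peak * 1e3:.1f} ms")
print(f"CPU time at peak     : {flops / cpu_peak * 1e3:.1f} ms")
# At these peak rates the GPU handles the same layer roughly 10x faster,
# before accounting for the far greater parallelism available per chip.
```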

Now, when it comes to deep learning, the task involves complex and enormous mathematical computations. Developing a self-sufficient system modelled on the human brain is the core idea behind AI complemented with deep learning, and this is where GPUs come into play. They process not just graphical content but also text and numbers, turning the data they capture into the numerical form from which machines learn.
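
To make that concrete, here is a small illustrative sketch (with made-up data, not from the article) showing how both an image and a piece of text end up as plain numeric arrays, the common representation that a GPU can crunch in bulk:

```python
import numpy as np

# An "image" is just a grid of pixel intensities: height x width x colour channels.
image = np.random.rand(224, 224, 3).astype(np.float32)

# Text is mapped to numbers too, e.g. by giving every known word an integer ID.
vocabulary = {"gpu": 0, "cpu": 1, "deep": 2, "learning": 3}
sentence = "deep learning gpu"
token_ids = np.array([vocabulary[word] for word in sentence.split()])

# Both end up as numeric arrays, which is what the hardware actually processes.
print(image.shape, image.dtype)   # (224, 224, 3) float32
print(token_ids)                  # [2 3 0]
```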

How Did CPUs Fall Behind?

CPUs, on the other hand, are built primarily for general-purpose computation rather than graphics. For example, if you try to run a graphics-intensive video game on a machine without a GPU, performance will be sluggish or the game may not run at all, because the CPU is designed to handle standard workloads such as spreadsheet software and web browsing smoothly. CPUs also execute work largely in sequence, with only a handful of cores available for parallel processing, which makes them slow at the highly parallel arithmetic of neural networks. This is the main reason GPUs are preferred for AI development.
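
A minimal sketch of that difference, assuming PyTorch is installed and a CUDA-capable GPU is available (neither is mentioned in the article), is to time the same large matrix multiplication on the CPU and on the GPU:

```python
import time
import torch

def time_matmul(device: str, size: int = 4096, repeats: int = 10) -> float:
    """Time a square matrix multiplication on the given device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    torch.matmul(a, b)              # warm-up so one-off setup costs are not measured
    if device == "cuda":
        torch.cuda.synchronize()    # wait for the GPU to finish queued work
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

print(f"CPU: {time_matmul('cpu') * 1e3:.1f} ms per multiply")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda') * 1e3:.1f} ms per multiply")
```

On typical hardware the GPU figure comes out one to two orders of magnitude lower, which is the gap the comparison table above describes.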

Limitations Of GPUs:

The main drawback of GPUs is keeping the hardware aligned with what ML software demands; the hardware always has to keep pace with the software. Cloud service providers (Google Cloud and Amazon AWS, for example) and GPU manufacturers (Nvidia, AMD) are adapting to meet ML needs. Even so, the aggressive push towards better servers for data storage and faster-processing algorithms will always be present.

Conclusion:

There will always be a strong argument over which of GPUs and CPUs is the more feasible choice. In reality, processor and chipset manufacturers such as Intel and AMD couple GPU and CPU together for optimal memory management in devices. In the field of AI, that arrangement may not hold up, mainly due to performance issues. In the end, performance and speed are all that matter.
