GPU Vs CPU: The Future Think-Tanks Of AI

Computer graphics has improved at a remarkable pace. It is not the 2000s anymore; computing has grown faster with every passing year, and graphics hardware has now taken the plunge into Artificial Intelligence (AI). This has sparked contrasting opinions about why Graphics Processing Units (GPUs) are preferred over Central Processing Units (CPUs) for AI, or the other way round. This article explains why the choice makes a difference.

The GPU Foothold:

Nvidia, the most popular GPU manufacturer, is already ahead with its parallel computing techniques and has made its way into Machine Learning (ML), specifically deep learning. Thanks to data scientists across the world, many areas of deep learning, such as back-propagation, Natural Language Processing (NLP) and Artificial Neural Networks (ANNs), are advancing steadily and catching up with traditional technologies.


Deep learning uses non-linear processing units and ANNs for data retrieval and transformation, where the output of one abstraction layer serves as the input to the next. This structure spreads the processing load across many identical operations. CPUs can handle such work, but they are no match for the processing power of GPUs, which pack massive numbers of parallel cores: Nvidia's processors boast more than 3,500 cores, while Intel's flagship CPUs top out at around 30. That parallelism is also why graphics cards are becoming more expensive.
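The layer-to-layer data flow described above can be made concrete with a short sketch. Assuming PyTorch and a CUDA-capable card (neither is named in this article), the example below chains a few layers so that each layer's output becomes the next one's input, and runs the whole stack on the GPU when one is available:

import torch
import torch.nn as nn

# A small feed-forward ANN: the output of each layer is the input to the next.
model = nn.Sequential(
    nn.Linear(784, 256),   # first abstraction layer
    nn.ReLU(),             # non-linear processing unit
    nn.Linear(256, 64),    # consumes the previous layer's output
    nn.ReLU(),
    nn.Linear(64, 10),     # final layer
)

# Move the model onto the GPU's parallel cores if one is present, else stay on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

batch = torch.randn(512, 784, device=device)   # a batch of 512 inputs
outputs = model(batch)                          # each layer is one large matrix multiply
print(outputs.shape, "computed on", device)

Each Linear layer here is a matrix multiplication over the whole batch, which is exactly the kind of work that maps naturally onto thousands of GPU cores at once.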

Features Comparison:


Features/Attributes | GPU | CPU
Computing capability | High | Low
Core complexity | Simple | Composite
Number of cores | 100 to 4,000 | 4 to 30
Performance | Built for parallel computing, ideal for ML | Built for sequential operations
Graphics rendering | 1 to 2 milliseconds per image (or less) | 1 to 5 seconds per image
Core efficiency | 1 to 5 TFLOPS | 100 to 500 GFLOPS
Latest additions | Nvidia's Titan V, Tesla series and GTX 1050 series (expected soon) | Intel's Core i7-8700K series
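To see the parallel-versus-sequential gap from the table for yourself, a rough sketch like the one below (again assuming PyTorch and an Nvidia card, which this article does not specify) times the same large matrix multiplication on the CPU and on the GPU:

import time
import torch

def time_matmul(device, n=4096, repeats=5):
    # Time an n x n matrix multiplication on the given device.
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()        # finish setup before the clock starts
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()        # GPU kernels run asynchronously; wait for them
    return (time.perf_counter() - start) / repeats

cpu_time = time_matmul(torch.device("cpu"))
print(f"CPU: {cpu_time:.3f} s per multiply")

if torch.cuda.is_available():
    gpu_time = time_matmul(torch.device("cuda"))
    print(f"GPU: {gpu_time:.3f} s per multiply, about {cpu_time / gpu_time:.0f}x faster")

The exact ratio depends on the hardware, but the ordering shown in the table is easy to reproduce.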

 

The Rise Of GPUs:

Back in the 90s, GPUs were designed specifically for, and limited to, desktop gaming; they were an entirely optional component when buying a computer. The focus gradually shifted from gaming to high-resolution imagery and, around 2011, to AI. Advances in extremely low-power technology can also be cited as one of the reasons for the growth and development of GPUs.

For example, consider the Google Brain project, an early deep learning experiment started in 2011, which analysed millions of images from YouTube to identify cats. Commercial computers handled the combined graphical and computational load well enough to make the experiment a success and earn it media attention. In 2016, Google ran another experiment in which a set of AI systems was instructed to interact with one another using cryptography and image processing in order to encrypt their communications. The outcome was positive: the AI systems developed their own encryption and decryption schemes over the course of the process.

These instances show how graphical requirements kept rising in the following years, driven by ever-larger volumes of images and ever-higher resolutions. GPUs were therefore developed and manufactured on a large scale as demand for greater graphics processing power emerged.

As of 2017, Nvidia's upgrades to its Tesla series of GPUs, namely the K40 and K20 processors, are claimed to deliver up to 5 tera floating point operations per second (TFLOPS) with a combined memory capacity of up to 12 GB. These processors come with more than 2,500 cores, which should be more than sufficient to tackle advanced ML algorithms.
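Claims like these can be checked against whatever card is actually installed. The sketch below uses PyTorch's CUDA utilities (an assumption; the article names no tooling) to report the device's name, memory and multiprocessor count:

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)   # first visible GPU
    print("GPU:", props.name)
    print(f"Memory: {props.total_memory / 1e9:.1f} GB")
    print("Streaming multiprocessors:", props.multi_processor_count)
    print(f"Compute capability: {props.major}.{props.minor}")
    # CUDA cores per multiprocessor vary by architecture, so the driver reports
    # multiprocessors rather than a total core count.
else:
    print("No CUDA-capable GPU detected; running on the CPU only.")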

Deep learning involves enormous and complex mathematical computations; the core idea behind AI, complemented by deep learning, is to build a self-aware, self-sufficient system modelled on the human brain. This is where GPUs come into play: they can crunch not just graphical content but also the text and numerical data such systems are trained on, turning the captured data into the raw material for automating information in machines.

How Did CPUs Fall Behind?

CPUs, on the other hand, are associated primarily with general computation rather than graphics. Try running a graphics-intensive video game on a machine without a GPU and performance will be sluggish, or the game may not run at all, because the CPU is built to run standard workloads such as spreadsheets and web browsing smoothly, not massively parallel ones. CPUs also handle tasks sequentially, which makes them slow at processing neural networks and is a hindrance to parallel processing. This is the main reason GPUs are used for AI development.

Limitations Of GPUs:

The main drawback of GPUs is keeping the hardware aligned with ML software: the hardware must always keep pace with what the software demands. Cloud service providers (Google Cloud and Amazon AWS, for example) and GPU manufacturers (Nvidia, AMD) are adapting to meet ML needs, but the aggressive push for better servers for data storage and faster algorithms will always be there.

Conclusion:

There will always be a strong argument over the relative merits of GPUs and CPUs. In practice, processor and chipset manufacturers such as Intel and AMD couple GPU and CPU on the same device for optimal memory management. In the field of AI, though, that pairing may not hold up, mainly because of performance demands. In the end, performance and speed are all that matter.



In my opinion, there will be considerable disorder and disarray in the near future concerning the emerging fields of data and analytics. The proliferation of platforms such as ChatGPT or Bard has generated a lot of buzz. While some users are enthusiastic about the potential benefits of generative AI and its extensive use in business and daily life, others have raised concerns regarding the accuracy, ethics, and related issues.