Why would cloud companies like Microsoft, Google and Amazon be interested in designing their own chips? The latest news about Microsoft foraying into artificial intelligence chip design for the cloud comes close on the heels of IDC's prediction that the burgeoning AI cloud market is poised for tenfold growth. AI and cloud are intertwined: cloud computing has a direct impact on strengthening AI capabilities. It's not just Microsoft; Facebook also posted job openings for ASIC and FPGA chip designers in April this year. Google, meanwhile, is gearing up to release TPU 3.0, the third generation of its Tensor Processing Unit, which is set to be more than eight times more powerful than its predecessor and deliver more than 100 petaflops of machine learning hardware acceleration. Cloud giants are doubling down on their own chip efforts, focusing on specialised hardware to dominate the cloud infrastructure ecosystem.
As the cloud market gets more competitive, Satya Nadella-led Microsoft is betting heavily on AI and AI-driven hardware. This brings us to an interesting question: if cloud software giants AWS, Microsoft and Google develop custom silicon, where does that leave traditional chip makers?
In fact, Microsoft’s Dr Doug Burger revealed in a podcast that advances in AI and deep learning have placed new demands on current hardware and computer architectures, pushing technology companies into a post-CPU, post-von-Neumann computing world.
In this article, we take a look at how Microsoft has the financial and technical muscle to dive into custom silicon for the cloud. Given the intense competition, it also makes sense for the Redmond-based giant to build hardware for a new generation of AI applications; almost all major technology companies are trying to create customised AI hardware to handle their AI workloads.
Last year, Google released a paper that threw light on TPU design, architecture and performance benchmarks against NVIDIA’s K80 GPU and Intel’s Haswell CPU. In terms of performance, TPUs fared 15 to 30 times faster than those GPUs and CPUs. Besides higher performance, Google’s TPUs also met strict latency requirements.
Given these advancements, where does this leave chipmakers NVIDIA, Intel and AMD, which have invested millions of dollars in end-to-end platforms for autonomous driving, gaming and other applications? Another news report suggested that traditional chip giants, especially NVIDIA and Intel, will soon face stiff competition in chip design from cloud software giants.
Microsoft’s Commitment On ‘AI For Cloud’
FPGA Provides Faster AI Processing In Azure: Harry Shum, Microsoft’s executive vice president of AI and Research, hinted earlier this year that AI-driven hardware was on the cards for the company. He also said that Microsoft would soon be taking AI capabilities across the board. It is now common knowledge that Microsoft is building accelerators for deep learning based on Field Programmable Gate Arrays (FPGAs). The company has been betting big on FPGAs, which have been deployed across Azure servers, creating a cloud that can be reconfigured to optimise a diverse set of applications and functions. For example, Microsoft’s search engine Bing uses the Azure FPGA platform to drum up intelligent answers and auto-generated summaries. Last year, Microsoft previewed Project Brainwave, integrated with Azure Machine Learning, in a bid to make Azure the most efficient cloud computing platform for AI.
98 Percent Revenue Growth From Azure: The AI cloud has created SaaS business models for cloud companies, which charge per API call. It has also changed the game of scalability and democratised AI. Reports suggest that the AI software platforms market is expected to cross $8 billion in revenue, growing at a CAGR of 39 percent. Since Nadella took over the reins in 2014, Microsoft has bet big on the cloud as its growth engine, and the company’s commercial cloud revenue has grown sharply, with Azure growth pegged at 98 percent, according to reports.
Google Cloud Grows Bigger, Edges Out AWS And Microsoft: According to the newly published 2018 Anaconda State of Data Science report, Google Cloud clinched the top spot as the most popular cloud provider for data services, surpassing cloud giants AWS and Microsoft Azure. In terms of market size, Google Cloud is only the third-largest cloud provider; however, the Mountain View giant’s efforts with Google Cloud ML have helped the company edge out heavyweights AWS and Microsoft.
AI, The New Engine For Growth, Nadella Pitches: In his much-discussed March memo, which mentioned AI 18 times, Nadella hinted at an organisational reshuffle and announced two new engineering teams. As part of the reshuffle, the company announced a slew of leadership changes, with Scott Guthrie, executive vice president of Cloud and Enterprise, taking on an expanded role to lead the new Cloud + AI Platform team. Guthrie’s team is tasked with driving platform coherence across all layers of the tech stack, from the distributed computing fabric (cloud and edge) to AI, including infrastructure, runtimes, frameworks, tools and more.
Richa Bhatia is a seasoned journalist with six years of experience in reportage and news coverage, with stints at the Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old and loves writing about the next-gen technology that is shaping our world.