
Where is AWS in the AI Arms Race?

As Microsoft and Google fight over search, AWS is serving enterprise needs

The AI battle is heating up between Microsoft and Google, but AWS is conspicuously missing from the fray. The cloud services giant recently released Multimodal-CoT (mm-cot for short), a model that competes with OpenAI’s GPT-3.5 and Google’s LaMDA. mm-cot outperformed GPT-3.5 by 16% on the ScienceQA benchmark, using the chain-of-thought method to achieve more human-like reasoning capabilities.
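The chain-of-thought method mentioned above is, at its core, a prompting pattern: instead of asking for an answer directly, the prompt includes worked examples whose reasoning steps are spelled out, nudging the model to reason step by step. The sketch below is a hypothetical text-only illustration of that pattern; the function name, exemplar question, and wording are assumptions, not AWS’ implementation, and mm-cot additionally fuses vision features, which this sketch omits.

```python
# A minimal sketch of chain-of-thought (CoT) prompting, the technique
# mm-cot builds on. Each exemplar is a (question, rationale, answer)
# triple; the rationale is the intermediate reasoning the model is
# encouraged to imitate. The example content is illustrative and is
# not taken from the ScienceQA benchmark.

def build_cot_prompt(question: str, exemplars: list) -> str:
    """Assemble a few-shot chain-of-thought prompt string."""
    parts = []
    for q, rationale, answer in exemplars:
        parts.append(f"Q: {q}\nA: {rationale} So the answer is {answer}.")
    # The trailing cue invites the model to produce its own rationale.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

exemplars = [
    (
        "A farm has 3 pens with 4 sheep each. How many sheep are there?",
        "Each pen holds 4 sheep and there are 3 pens, giving 3 * 4 = 12.",
        "12",
    )
]

prompt = build_cot_prompt(
    "If a pack has 6 crayons and you buy 5 packs, how many crayons do you have?",
    exemplars,
)
print(prompt)
```

The same question sent without the worked exemplar would be a plain zero-shot prompt; the measurable gains reported on benchmarks like ScienceQA come from the model imitating the spelled-out rationale.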

AWS was one of the first companies to bring AI to the enterprise sector, launching its suite of AI services in 2016. Today, however, it has been overtaken by competitors Azure and GCP in the diversity of its AI cloud services. AI is clearly the future of enterprise tech, so let’s delve deeper into where AWS stands in the AI arms race.

Hardware Race

AWS began its foray into custom silicon with chips for AI inference and CPU-bound workloads. After acquiring chipmaker Annapurna Labs in January 2015 for $350 million, AWS launched Graviton in 2018 to bring ARM chips to the enterprise and serve customers’ needs better. 

David Brown, then VP of Amazon EC2, said in an interview during the Graviton launch, “A certain percentage of the processor is used by the virtualisation stack and a whole lot of other management software. What we noticed was apart from reducing the efficiency…we also weren’t able to get the sort of performance we believed our customers were looking for.”

Graviton was a huge success story for AWS, to the point where it warranted multiple iterations and upgrades. The chip is currently in its third generation, which provides up to 3x better performance for ML workloads than its predecessor, and is used by AI researchers and enterprises alike for MLOps in the cloud. 

Seeing the market for dedicated AI hardware, AWS got to work on a machine learning inference chip, resulting in the launch of Inferentia in late 2018. Now in its second iteration, Inferentia2 is an accelerator that speeds up inference for AI models and can scale out alongside other accelerators to accommodate the biggest models. 

Each chip delivers up to 190 teraflops of FP16 performance, comparable to NVIDIA’s recent datacenter GPUs, though well short of the flagship H100. Each accelerator also carries 32 GB of high-bandwidth memory, making it suitable for even the largest AI workloads. 

Even where its strategy seems lacking, Amazon’s customer-focussed approach is not to be dismissed, as the company can pivot quickly to serve market needs. This was seen in the launch of Amazon CodeWhisperer, a GitHub Copilot competitor released to keep developers within AWS’ ecosystem. 

Software & Research

AWS offers a host of ready-made AI services for enterprise workflows, covering areas such as computer vision, data extraction, language processing, business metrics, and coding. AWS’ SageMaker platform has also been instrumental in facilitating the rise of MLOps in the enterprise, allowing developers to build, deploy, and scale AI models on the cloud. 
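To give a flavour of the SageMaker deployment workflow described above, the sketch below assembles the parameters for a hosted real-time endpoint in the shape expected by boto3’s `create_endpoint_config` call. The model name, endpoint names, instance type, and helper function are illustrative assumptions, and the AWS calls themselves are left commented out since they require an account and credentials.

```python
# Sketch of deploying a trained model behind a SageMaker real-time
# endpoint. build_endpoint_config() only assembles the request
# parameters; the boto3 call that would submit them is shown commented
# out. Names like "demo-model" are placeholders, not real resources.

def build_endpoint_config(config_name: str, model_name: str,
                          instance_type: str = "ml.m5.large",
                          instance_count: int = 1) -> dict:
    """Return kwargs for sagemaker_client.create_endpoint_config()."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",    # single-variant deployment
                "ModelName": model_name,        # a model already registered in SageMaker
                "InstanceType": instance_type,  # hosting hardware
                "InitialInstanceCount": instance_count,
            }
        ],
    }

config = build_endpoint_config("demo-endpoint-config", "demo-model")

# With credentials in place, the deployment would then be two calls:
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_endpoint_config(**config)
# sm.create_endpoint(EndpointName="demo-endpoint",
#                    EndpointConfigName="demo-endpoint-config")
print(config["ProductionVariants"][0]["InstanceType"])
```

Scaling then becomes a matter of raising `InitialInstanceCount` or attaching auto-scaling to the variant, which is the “build, deploy, and scale” loop SageMaker is known for.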

Both Azure and GCP offer competing AI services in the same verticals as AWS, but Microsoft’s Azure OpenAI Service gives it an edge over its competitors due to the addition of GPT-3.5 and Codex. Google’s Vertex AI is also trying to wrest market share away from SageMaker for model deployment on the cloud. However, one thing AWS lacks is a dedicated in-house AI research team. 

While Amazon currently conducts research through Amazon Science, AI and ML are only one focus area for the team, which also researches robotics, quantum technology, economics, and more. In contrast, Microsoft relies on OpenAI’s research for cutting-edge AI solutions, and Google draws on Google Research and DeepMind to solve problems using AI. 

Research is one of the most integral parts of getting ahead of the competition in AI, a point both Microsoft and Google have proved time and again. To date, these tech giants publish more papers than Amazon, with Microsoft putting out over 30 AI-focused papers this year alone and Google over 40 in the same domain. Amazon, by contrast, has published only 16 research papers in machine learning, with a further eight in conversational AI. 

Even though AWS established a first-mover advantage in enterprise AI services, it will slowly fall behind if it does not keep ahead of the curve with research and better software offerings. While industry players are still reluctant to adopt AI and ML, as seen by this study, the window of opportunity to adopt the technology is quickly closing. This further increases the pressure on companies like AWS to provide better AI services. 

AWS’ approach to releasing its AI solutions is also worth noting. While OpenAI sells API access to its models and Google releases nothing, AWS conducts AI research solely to integrate it into more services on its platform. This could shift its research focus away from solving computer science’s biggest problems towards a more industry-focused path of creating new services. 

The failure to adapt to cutting-edge AI trends might not hurt AWS’ bottom line today, but it will make the company less competitive in the future. However, AWS seems to have a trick up its sleeve when it comes to AI dominance: custom hardware. 

While AWS’ lineup of AI services and research is falling behind other cloud providers like GCP and Azure, the combination of market leadership and custom hardware might give it the edge when it comes to hosting AI in the cloud. 

PS: The story was written using a keyboard.

Anirudh VK

I am an AI enthusiast and love keeping up with the latest events in the space. I love video games and pizza.