AWS re:Invent was All About Reinventing OpenAI

Also, Microsoft 

Illustration by Nikhil Kumar

AWS re:Invent felt like it was all about reinventing OpenAI's products. "Reinventing is in our DNA, and it continues to drive us every day," said AWS CEO Adam Selipsky as he opened his keynote. Sadly, much of AWS's reinventing amounted to taking shots at OpenAI and calling out its security flaws.

As Microsoft Azure, with OpenAI by its side, posted the biggest revenue-growth leap in the latest quarter, AWS is trying hard to shrug off the idea that it is lagging behind. At re:Invent, AWS made a slew of announcements spanning from the bottommost layer of AI infrastructure to the topmost layer of AI apps, similar to what Microsoft did at Ignite 2023.

Interestingly, the tech stack of AWS and Microsoft’s Azure for generative AI is very similar. 

Catching the ‘Q’ Train 

Last week, the letter 'Q' started trending on X out of nowhere, all thanks to OpenAI's latest model, Q*, which some AI experts believe could achieve AGI. Who knows if AWS took inspiration from that when it introduced Amazon Q, its newest generative AI assistant designed for work, which can be tailored to a business.

“You can easily chat, generate content, and take actions with Q,” said Selipsky. “It’s all informed by an understanding of your systems, your data repositories, and your operations,” he added, explaining that Amazon Q can be connected to the company’s information repositories, code, data, and enterprise systems.

Interestingly, the functionality of Amazon Q is almost identical to that of OpenAI's ChatGPT Enterprise and Microsoft's Copilot Studio. Copilot Studio, built on OpenAI's models, enables users to create standalone copilots and custom GPTs, add generative AI plugins, and author manual topics. It offers precise access controls, data management, user controls, and analytics.

Furthermore, AWS announced that Agents for Amazon Bedrock are now generally available to customers, a feature that looks eerily similar to OpenAI's custom GPTs.

Focus on Enhancing Inhouse Capabilities

AWS announced that it is building its own LLMs, apart from hosting models from other players like Anthropic, Stability AI, Cohere, AI21 Labs, and Meta.

It introduced three new models to the Titan family, namely Titan Text Lite, Titan Text Express, and the Titan Text Embeddings model. Meanwhile, Microsoft also announced an in-house-built open-source model called Phi-2.

Besides LLMs, both Microsoft and AWS have developed their own chips. AWS announced two new chips: AWS Graviton4 and AWS Trainium2.

On a similar note, Microsoft said it is building its very first custom in-house CPU series, the Microsoft Azure Cobalt CPU, an Arm-based processor tailored to run general-purpose compute workloads on the Microsoft Cloud. Microsoft has also introduced Maia, a dedicated chip for running AI workloads in the cloud.

Anthropic is OpenAI's Reinvention

Anthropic can be considered OpenAI's reinvention; interestingly, it originated from OpenAI. Notably, Anthropic chief Dario Amodei, who was invited to re:Invent, began his discussion by referencing his past experiences at OpenAI.

"Anthropic were a set of people who worked at OpenAI for several years. We developed ideas like GPT-3, reinforcement learning from human feedback, scaling laws for language models and some of the key ideas behind the current generative AI. Seven of us left and founded Anthropic," he said.

Anthropic differentiates itself from OpenAI by saying that its focus is on developing safe and beneficial AGI. Similarly, at re:Invent, AWS attempted to set itself apart from both Microsoft and OpenAI by advocating for responsible AI and announcing guardrails for Amazon Bedrock. The question remains: is it enough?

Meanwhile, OpenAI has reassured its customers about its commitment to data security and privacy for enterprise users. While AWS’s re:Invent was predominantly focused on AWS, there was a subtle sense that Microsoft and OpenAI had a presence at the event.

Siddharth Jindal

Siddharth is a media graduate who loves to explore tech through journalism and putting forward ideas worth pondering about in the era of artificial intelligence.