AWS re:Invent felt like it was all about reinventing OpenAI’s products. “Reinventing is in our DNA, and it continues to drive us every day,” said AWS CEO Adam Selipsky as he began his keynote. In practice, however, much of that reinventing amounted to taking shots at OpenAI and calling out its security flaws.
As Microsoft Azure, with OpenAI by its side, posted the biggest revenue growth in the latest quarter, AWS is trying hard to shrug off the idea that it is lagging behind. At re:Invent, AWS made a slew of announcements spanning from the bottommost layer of AI infrastructure to the topmost layer of AI apps, much as Microsoft did at Ignite 2023.
Interestingly, the tech stack of AWS and Microsoft’s Azure for generative AI is very similar.
Catching the ‘Q’ Train
Last week, the letter ‘Q’ started trending out of nowhere on X, all thanks to OpenAI’s latest model, Q*, which some AI experts believe could lead to AGI. Who knows if AWS took inspiration from that when it introduced Amazon Q, its newest generative AI assistant, designed for work and tailorable to a business.
“You can easily chat, generate content, and take actions with Q,” said Selipsky. “It’s all informed by an understanding of your systems, your data repositories, and your operations,” he added, explaining that Amazon Q can be connected to the company’s information repositories, code, data, and enterprise systems.
Interestingly, the functionality of Amazon Q is almost identical to that of OpenAI’s ChatGPT Enterprise and Microsoft’s Copilot Studio. Copilot Studio, built on OpenAI’s models, enables users to create standalone copilots and custom GPTs, add generative AI plugins, and author manual topics. It offers precise access controls, data management, user controls, and analytics.
Furthermore, AWS announced that Agents for Amazon Bedrock are now generally available to customers — a feature that looks eerily similar to custom GPTs.
Focus on Enhancing In-house Capabilities
AWS announced that it is building its own LLMs, apart from hosting models from other players like Anthropic, Stability AI, Cohere, AI21 Labs, and Meta.
It introduced three new models to the Titan family, namely Titan Text Lite, Titan Text Express, and the Titan Text Embeddings model. Meanwhile, Microsoft announced an in-house-built open-source model of its own, called Phi-2.
Besides LLMs, both Microsoft and AWS have developed their own silicon. Amazon Web Services (AWS) announced two new chips — AWS Graviton4, a general-purpose processor, and AWS Trainium2, built for AI training.
On a similar note, Microsoft said it is building its very first custom in-house CPU series, the Microsoft Azure Cobalt CPU — an Arm-based processor tailored to run general-purpose compute workloads on the Microsoft Cloud. Microsoft has also introduced Maia, a dedicated chip for running AI workloads in the cloud.
Anthropic is OpenAI’s Reinvention
Anthropic can be considered OpenAI’s reinvention. An interesting aspect is that it originated from OpenAI. Notably, Dario Amodei, chief of Anthropic, who was invited to re:Invent, began his discussion by referencing his past experiences at OpenAI.
“Anthropic were a set of people who worked at OpenAI for several years. We developed ideas like GPT-3, reinforcement learning from human feedback, scaling laws for language models and some of the key ideas behind the current generative AI. Seven of us left and founded Anthropic,” he said.
Anthropic differentiates itself from OpenAI by emphasizing its focus on developing safe and beneficial AGI. Similarly, AWS yesterday attempted to set itself apart from both Microsoft and OpenAI by advocating for responsible AI and announcing guardrails for Amazon Bedrock. The question remains: is it enough?
Meanwhile, OpenAI has reassured its customers of its commitment to data security and privacy for enterprise users. And while re:Invent was, naturally, all about AWS, there was a subtle sense that Microsoft and OpenAI loomed over the event.