Microsoft Launches Supercomputer To Train Large AI Models, In Partnership With OpenAI

Microsoft has announced that, in partnership with OpenAI, it has built one of the top five publicly disclosed supercomputers in the world to train extremely large artificial intelligence models.

The company announced the launch of the supercomputer at its virtual Build developers conference and said the infrastructure will be available on Azure.

According to a Microsoft AI blog post by Jennifer Langston, the supercomputer, hosted in Azure, will be used specifically to train OpenAI’s artificial intelligence models. The initiative marks a key milestone in the partnership between OpenAI and Microsoft to jointly create new supercomputing technologies in Azure.

OpenAI, led by CEO Sam Altman, was founded in 2015 and is working towards artificial general intelligence that can outperform humans. According to the OpenAI Charter, the company’s mission is “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.”

The charter further states: “We will attempt to build safe and beneficial AGI directly, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

Large-scale AI models are beginning to learn in new ways across text, images and video. Art by Craighton Berman.

According to the blog post, the new supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 graphics processing units (GPUs), and 400 gigabits per second of network connectivity for each GPU server. Compared with other machines on the TOP500 list of the world’s supercomputers, the firm claims, the new system ranks among the top five.

The company further stated that this is a first step towards building the next generation of very large artificial intelligence models and the advanced infrastructure needed to train them, and towards making them available as a platform for other organisations and developers to build upon.

Microsoft’s CTO Kevin Scott said, “The exciting thing about these models is the breadth of things they’re going to enable.”

He added, “This is about being able to do a hundred exciting things in NLP at once and a hundred exciting things in computer vision. And when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now.”

Microsoft and OpenAI have been working together since last year to commercialise new AI technologies, and the new supercomputer grew out of that partnership. As part of the collaboration, Microsoft has invested $1 billion in OpenAI, with the two firms jointly building Azure AI supercomputing technologies.

The company aims to make its large AI models, training optimisation tools and supercomputing resources available through Azure AI services and GitHub, so that developers, data scientists and business customers can easily leverage the power of artificial intelligence at scale, stated the blog post. The company is also offering separate AI capabilities to business customers that don’t need one of the most powerful computers on Earth.

“For customers who want to push their AI ambitions but who don’t require a dedicated supercomputer, Azure AI provides access to powerful computers with the same set of AI accelerators and networks that also power the supercomputer. Microsoft is also making available the tools to train large AI models on these clusters in a distributed and optimised way,” stated the blog post.
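To give a flavour of what training a model “in a distributed and optimised way” across many GPUs involves, the sketch below uses PyTorch’s DistributedDataParallel with a placeholder model and random batches. It is a generic illustration under stated assumptions, not Microsoft’s Azure tooling; the model, data and hyperparameters are stand-ins.

```python
# Minimal sketch of distributed data-parallel training with PyTorch.
# Illustration only: the model, batches and hyperparameters are placeholders,
# not Microsoft's actual Azure training stack.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    # Placeholder model; a real large language model would go here.
    model = torch.nn.Linear(1024, 1024).to(device)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()

    for step in range(100):
        # Placeholder batch; in practice a DistributedSampler shards real data.
        x = torch.randn(32, 1024, device=device)
        y = torch.randn(32, 1024, device=device)

        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()   # gradients are averaged across all GPUs here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with `torchrun --nproc_per_node=<num_gpus> train.py`, each process drives one GPU, and gradient synchronisation after every backward pass keeps all model replicas identical.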

The company further stated that it is making its Microsoft Turing AI models available to business customers, giving them access to the same family of models that Microsoft itself uses to power language understanding in its Microsoft 365 products.

The Microsoft Turing model for natural language generation, with 17 billion parameters, is now publicly available. According to the company, this new class of models learns differently from supervised learning models, which rely on meticulously labelled, human-generated data to teach an AI system to recognise a cat or determine whether the answer to a question makes sense; instead, it uses “self-supervised” learning.

In “self-supervised” learning, AI models can learn from large amounts of unlabeled data. For example, models can learn deep nuances of language by absorbing large volumes of text and predicting missing words and sentences. Art by Craighton Berman.

These AI models can teach themselves about language by examining billions of pages of publicly available documents on the internet: self-published books, instruction manuals, history lessons, human resources guidelines and more. An illustration of the “predicting missing words” objective follows below.
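The masked-word prediction described above can be seen in miniature with an off-the-shelf model. The sketch below uses the Hugging Face transformers library and the public bert-base-uncased checkpoint as a stand-in; it is not the Microsoft Turing model, only an illustration of the self-supervised idea.

```python
# Minimal sketch of "predicting missing words", the self-supervised objective
# described above. Uses a public BERT checkpoint via Hugging Face transformers
# as a stand-in; this is not Microsoft's Turing model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model was never given explicit labels; it learned to rank candidate
# words for the blank purely from large volumes of unlabeled text.
for prediction in fill_mask("The supercomputer was built to [MASK] large AI models."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

Each printed line is a candidate word for the masked position with the model’s confidence score, which is exactly the kind of signal a self-supervised language model extracts from unlabeled text during training.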

If trained properly, according to Microsoft, the model can also summarise a lengthy speech, moderate content in live gaming chats, find relevant passages across thousands of legal files, or even generate code by scouring GitHub.
