Meta AI’s MultiRay Attempts to Increase Efficiency of Large-scale AI Models

MultiRay is a universal model supporting multiple large-scale AI applications

Meta AI researchers recently released MultiRay, a new platform for running large-scale AI systems.

The new platform addresses a core inefficiency of current AI systems: each model is trained separately for a particular task, which, while generating high-quality output, incurs huge operational costs.

Meta's platform cuts those processing costs substantially by letting multiple applications share it. Its universal models are trained to perform functions across varying tasks and domains, producing results of better quality than those of specialised models.



Teams across Meta are using MultiRay to improve and iterate on ML models for wide-ranging applications. TextRay, the first of many such models, has been in production since 2020 and is used for several text-understanding applications, including detecting inauthentic content, identifying hate speech, and enhancing users' search experience.

Similarly, PostRay, the second model, combines elements of text and image understanding and is used for applications such as topic classification in Reels.

However, instead of combining multiple models for text, image, and video into one large model per application, MultiRay uses a large foundation model to compute a single representation of an input, which many task-specific models can then consume. This shared representation is deliberately large, so that it conveys enough information for all the downstream tasks.
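The pattern described above can be sketched in a few lines. This is an illustrative toy, not Meta's implementation: the function and class names (`embed`, `hate_speech_head`, `topic_head`) are hypothetical, and the "foundation model" is faked with character statistics so the sketch runs; the point is only that the expensive embedding is computed once and reused by several cheap task-specific heads.

```python
# Hypothetical sketch of the MultiRay pattern: one large, universal embedding
# is computed once per input, then shared by small task-specific models.
# All names here are illustrative, not Meta's actual API.

from typing import List

def embed(text: str) -> List[float]:
    """Stand-in for the large foundation model: returns a fixed-size
    embedding. Faked with byte statistics so the sketch is runnable."""
    vec = [0.0] * 8
    for i, b in enumerate(text.encode("utf-8")):
        vec[i % 8] += b / 255.0
    return vec

def hate_speech_head(embedding: List[float]) -> bool:
    # Tiny task-specific model: a linear score over the shared embedding.
    weights = [0.1, -0.2, 0.3, 0.0, 0.05, -0.1, 0.2, 0.1]
    return sum(w * x for w, x in zip(weights, embedding)) > 1.0

def topic_head(embedding: List[float]) -> str:
    # A second head reusing the *same* embedding; the big model never reruns.
    return "long-form" if sum(embedding) > 20.0 else "short-form"

e = embed("an example post")   # expensive step, done once per input
flagged = hate_speech_head(e)  # cheap
topic = topic_head(e)          # cheap, reuses e
```

The design trade-off is that the heads stay small and fast to retrain, while all the heavy computation is concentrated in the one shared `embed` call.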


The company-wide computation of the large foundation model is therefore centralised and executed on accelerators such as GPUs, with a cache used to avoid spending on recomputation as much as possible. At the moment, the platform powers over 125 use cases across Meta, supports up to 20 million queries per second, and serves 800 billion queries per day.
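The caching idea can be illustrated with a minimal, assumed sketch (not Meta's implementation): the costly foundation-model call sits behind a memoising cache, so repeated requests for the same input never trigger recomputation. Here Python's standard `functools.lru_cache` stands in for what would be a distributed cache in front of a GPU-backed service.

```python
# Minimal sketch of centralising the expensive embedding call behind a cache.
# In the real system this would be a distributed cache in front of GPU
# servers; here an in-process LRU cache illustrates the same saving.

from functools import lru_cache

CALLS = 0  # counts actual "foundation model" executions

@lru_cache(maxsize=100_000)
def cached_embed(text: str) -> tuple:
    """Centralised embedding endpoint; cache misses run the big model."""
    global CALLS
    CALLS += 1
    # Stand-in for the large model's forward pass.
    return tuple((b % 7) / 7.0 for b in text.encode("utf-8")[:8])

first = cached_embed("same post")   # cache miss: model runs
second = cached_embed("same post")  # cache hit: no recomputation
```

Because many of Meta's products repeatedly process the same posts and comments, even a simple cache like this turns a large share of the 20 million queries per second into lookups rather than model executions.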


Ayush Jain
Ayush is interested in knowing how technology shapes and defines our culture, and our understanding of the world. He believes in exploring reality at the intersections of technology and art, science, and politics.

