
Apple Finally Unveils MM1, a Multimodal Model for Text and Image Data 

"I have not seen this level of details from a big tech's whitepaper for a very, very long time. Apple's so back!"said Jim Fan.


Apple researchers have developed a family of large multimodal language models called MM1, which can process and generate both text and visual data, according to a research paper presented last week. The study, conducted at Apple’s research labs, aimed to build performant multimodal large language models (MLLMs) through careful ablation of architectural components, data sources, and training procedures.


The researchers found that image resolution and the capacity of the visual encoder had the highest impact on model performance, while the specific method of combining visual and text data mattered less. 

They also discovered that a careful mix of different data types was crucial, with interleaved image-text documents helping with few-shot learning, traditional captioned images boosting zero-shot performance, and including text-only data maintaining strong language understanding capabilities.
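The idea of a carefully weighted mix of data types can be sketched as weighted sampling over the three source categories the researchers describe. The weights below are purely illustrative assumptions; the paper's actual ratios are not quoted in this article.

```python
import random

# Hypothetical mixture weights -- illustrative only, not the ratios from the paper.
DATA_MIX = {
    "interleaved_image_text": 0.5,  # helps few-shot learning
    "captioned_images": 0.4,        # boosts zero-shot performance
    "text_only": 0.1,               # preserves language understanding
}

def sample_batch_sources(batch_size: int, seed: int = 0) -> list[str]:
    """Pick a data source for each example in a training batch, weighted by the mix."""
    rng = random.Random(seed)
    sources = list(DATA_MIX)
    weights = [DATA_MIX[s] for s in sources]
    return rng.choices(sources, weights=weights, k=batch_size)

batch = sample_batch_sources(8)
```

In practice a training pipeline would stream documents from each corpus in these proportions; the sketch only shows how per-batch source selection follows the mixture weights.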

Based on these insights, the team developed the MM1 model family, ranging from three billion to 30 billion parameters and including both dense and mixture-of-experts variants. After scaling up training, MM1 achieved state-of-the-art results on a range of multimodal benchmarks during pre-training.
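A mixture-of-experts variant replaces a dense layer with several expert sub-networks and a router that sends each token to only a few of them. The sketch below shows generic top-1 routing as an illustration of the technique; it is not Apple's actual router design, which the article does not detail.

```python
import math

# Generic top-1 mixture-of-experts routing sketch (not MM1's actual design).
def softmax(scores):
    """Numerically stable softmax over a list of routing scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, gate, experts):
    """Route one token vector to the highest-scoring expert.

    token:   list[float], the token's activations
    gate:    one weight vector per expert, giving a routing score each
    experts: callables mapping a token vector to an output vector
    """
    scores = [sum(t * w for t, w in zip(token, col)) for col in gate]
    probs = softmax(scores)
    best = max(range(len(experts)), key=probs.__getitem__)
    # Scale the chosen expert's output by its gate probability.
    return [probs[best] * v for v in experts[best](token)], best

# Toy usage: two experts that transform the token differently.
experts = [lambda t: [2.0 * v for v in t], lambda t: [-1.0 * v for v in t]]
gate = [[1.0, 0.0], [0.0, 1.0]]  # expert 0 keys on dim 0, expert 1 on dim 1
out, chosen = moe_forward([3.0, 1.0], gate, experts)
```

The appeal of MoE at the 30-billion-parameter scale is that only the selected expert runs per token, so total parameter count grows without a proportional increase in per-token compute.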

Following further instruction tuning on a curated dataset of one million examples, the final MM1 models demonstrated competitive performance across 12 multimodal tasks, such as visual question answering and captioning. Notably, MM1 could perform multi-image reasoning and few-shot learning, critical capabilities enabled by the team’s careful multimodal pre-training approach.

This paper builds upon previous research into areas like CLIP for learning visual representations from natural language supervision, and autoregressive models like GPT for text generation. However, it is one of the first detailed studies focused specifically on large-scale multimodal pre-training.

The researchers hope their insights will accelerate progress, as Apple is reportedly in talks to integrate Google’s Gemini generative AI models into upcoming iPhone software. 


K L Krithika

K L Krithika is a tech journalist at AIM. Apart from writing tech news, she enjoys reading sci-fi and pondering impossible technologies, trying not to confuse them with reality.