Following the recent developments in large language models built on the Transformer, an attention-based architecture developed by Google in 2017, Microsoft released its research paper, "Language Is Not All You Need: Aligning Perception with Language Models". The paper introduces a multimodal large language model (MLLM) called Kosmos-1.
The paper discusses the importance of integrating language, action, multimodal perception, and world modelling as a step towards artificial general intelligence (AGI). The research evaluates Kosmos-1 in various settings, such as zero-shot, few-shot, and multimodal chain-of-thought prompting, on several tasks without any fine-tuning or gradient updates.
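For intuition, here is a minimal sketch of how zero-shot and few-shot (in-context) prompting differ for a multimodal model. The prompt format and image placeholders below are illustrative assumptions, not Kosmos-1's actual interface; the key point is that the few-shot case prepends solved examples while the model's weights stay frozen.

```python
# Sketch: building zero-shot vs. few-shot prompts for a multimodal LLM.
# Images appear as placeholder tokens here; in an MLLM the prompt is an
# interleaved sequence of image embeddings and text.

def zero_shot_prompt(image, question):
    # No demonstrations: the model must answer from the task alone.
    return [image, f"Question: {question} Answer:"]

def few_shot_prompt(demos, image, question):
    # In-context learning: solved demonstrations are prepended, but the
    # model's weights are never updated (no fine-tuning, no gradients).
    prompt = []
    for demo_image, demo_question, demo_answer in demos:
        prompt += [demo_image, f"Question: {demo_question} Answer: {demo_answer}"]
    return prompt + [image, f"Question: {question} Answer:"]

demos = [("<img_1>", "What colour is the car?", "Red")]
print(few_shot_prompt(demos, "<img_2>", "How many wheels does it have?"))
```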
Check out the research paper here.
The model shows promising capabilities across a variety of tasks by perceiving general modalities, including language understanding and generation, OCR-free NLP, perception-language tasks such as visual question answering, and vision tasks.

The Microsoft research team also evaluated the model on a Raven IQ test dataset to analyse and diagnose the non-verbal reasoning capabilities of MLLMs.
Below is an example of multimodal chain-of-thought prompting. This enables the model to handle complex questions and reasoning tasks by first generating a rationale and then using it to produce the final answer.
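As a rough sketch of that two-stage flow: the `generate` function below is a hypothetical stand-in for the model's decoding call, and the prompt wording is illustrative, not quoted from the paper.

```python
# A minimal sketch of two-stage multimodal chain-of-thought prompting:
# first elicit a rationale, then condition the final answer on it.

def generate(prompt):
    # Stand-in for an MLLM decoding step over an interleaved
    # image-and-text prompt; a real model would return generated text.
    return "<model output for: " + " ".join(map(str, prompt)) + ">"

def multimodal_cot(image, question):
    # Stage 1: elicit a rationale grounded in the image before answering.
    rationale = generate([image, "Describe the picture and reason step by step."])
    # Stage 2: condition the final answer on both the image and the rationale.
    answer = generate([image, rationale, f"Question: {question} Answer:"])
    return rationale, answer

rationale, answer = multimodal_cot("<img>", "How many apples are on the table?")
print(rationale)
print(answer)
```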
The team believes that moving from LLMs to MLLMs opens up new capabilities and opportunities across language and multimodal tasks.
Though the repository hasn’t been populated with code yet, you can check out the GitHub repository here for future updates.