Researchers from Google and Google DeepMind have developed Med-Gemini, a new family of highly capable multimodal AI models specialised for medicine. The paper, published yesterday, builds upon the Gemini 1.0 and 1.5 models, which demonstrated breakthrough capabilities in language, multimodal understanding, and long-context reasoning.
The paper stated that, “Med-Gemini inherits Gemini’s foundational capabilities in language and conversations, multimodal understanding, and long-context reasoning.”
The model brings new possibilities for AI in medicine, such as assisting with complex diagnostic challenges, engaging in multimodal medical dialogue, and processing lengthy electronic health records.
The researchers specialised the Gemini models for medicine using techniques like self-training with web search integration, multimodal fine-tuning, and customised encoders.
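The self-training step can be illustrated with a minimal sketch. The details below are assumptions for illustration, not the paper's implementation: a `model` callable that returns a reasoning trace plus a final answer, a `search` callable that retrieves evidence, and a simple filter that keeps only traces reaching a known correct answer. Traces are generated both with and without retrieved context, so the fine-tuned model learns to reason in either setting.

```python
def build_self_training_set(questions, model, search, answer_key):
    """Generate reasoning traces with and without search results, then keep
    only those that reach the known correct answer (a filtering heuristic).

    model(question, context) -> (reasoning_trace, final_answer)  # assumed API
    search(question) -> retrieved evidence string                # assumed API
    """
    dataset = []
    for q in questions:
        for context in ("", search(q)):  # once without, once with evidence
            reasoning, answer = model(q, context)
            if answer == answer_key[q]:  # filter out traces with wrong answers
                dataset.append({"question": q, "context": context, "target": reasoning})
    return dataset
```

The resulting `dataset` would then serve as fine-tuning examples, teaching the model both unaided reasoning and reasoning grounded in search results.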
To evaluate Med-Gemini’s performance, the researchers tested the models on a comprehensive suite of 25 tasks across 14 medical benchmarks. The results were impressive, with Med-Gemini establishing new state-of-the-art performance on 10 benchmarks. On the MedQA benchmark, which assesses medical question-answering abilities, Med-Gemini achieved an accuracy of 91.1%, surpassing the previous best by 4.6%. On multimodal tasks, the models outperformed GPT-4 by an average relative margin of 44.5%.
Beyond benchmarks, Med-Gemini demonstrates potential for real-world utility. The models outperformed human experts on tasks such as medical text summarisation and referral letter generation. Additionally, Med-Gemini showcased impressive long-context processing abilities on challenging tasks like needle-in-a-haystack retrieval from extensive health records.
“The unique nature of medical data and the critical need for safety demand specialised prompting, fine-tuning, or potentially both along with careful alignment of these models,” the paper explained.
“For language-based tasks, we enhance the models’ ability to use web search through self-training and introduce an inference time uncertainty-guided search strategy within an agent framework. This combination enables the model to provide more factually accurate, reliable, and nuanced results for complex clinical reasoning tasks.”
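The inference-time, uncertainty-guided search strategy can be sketched roughly as follows. The code is a simplified illustration under assumed interfaces (a `model` callable that samples an answer and a `search` callable that retrieves web evidence), and it uses vote disagreement among sampled answers as the uncertainty signal; the paper's actual agent framework is more involved.

```python
from collections import Counter

def vote_entropy(answers):
    """Normalised disagreement among sampled answers: 0.0 means unanimous."""
    counts = Counter(answers)
    top = counts.most_common(1)[0][1]
    return 1.0 - top / len(answers)

def uncertainty_guided_answer(question, model, search,
                              threshold=0.3, samples=5, max_rounds=3):
    """Sample several answers; if they disagree too much, retrieve web
    evidence into the context and try again, up to max_rounds."""
    context = ""
    for _ in range(max_rounds):
        answers = [model(question, context) for _ in range(samples)]
        if vote_entropy(answers) <= threshold:
            return Counter(answers).most_common(1)[0][0]  # confident majority
        context += "\n" + search(question)  # augment with retrieved evidence
    return Counter(answers).most_common(1)[0][0]  # fall back to majority vote
```

The design intuition is that search is invoked only when the model's own samples conflict, so retrieval cost is spent exactly on the questions where the model is least certain.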
Med-Gemini’s multimodal capabilities allow the models to process and analyse a wide range of medical data, including text, images, videos, and even raw sensor signals such as electrocardiograms (ECGs).
The researchers demonstrate Med-Gemini’s ability to engage in multimodal medical dialogues, where the models can request additional information, such as images, when needed and provide explanations for their reasoning. These capabilities highlight the potential for AI to support more natural and comprehensive interactions between healthcare providers and patients.
Google has been a pioneer in applying AI to healthcare, with earlier models in the field such as Med-PaLM 2, Flan-PaLM, and AlphaFold.