
5 AI Models of 2023 that will Transform the Medical Landscape 

These cutting-edge technologies are revolutionizing patient care, diagnostics, and treatment, and are paving the way for a healthier tomorrow. Check out our comprehensive guide to the top 5 AI models that are making waves in the medical industry.


Large language models have enabled us to use AI to solve many real-world problems. However, using AI in the medical field is a different ball game altogether, as it requires us to prioritize safety, equity, and fairness. In 2023, various foundational models in the medical domain remained at the forefront of these advancements. 

Here are the top five models that promise a significant impact on healthcare practices.

Med-PaLM 2 

Med-PaLM is a large language model designed to answer medical questions with high accuracy. It has been specifically built and evaluated for the medical domain, using medical exams, research, and consumer queries. The latest version, Med-PaLM 2, was unveiled at Google Health’s annual event in March 2023.

This model has an impressive accuracy rate of 86.5% on USMLE-style questions and can provide comprehensive and accurate answers to consumer health questions. Limited testing of Med-PaLM 2 will be conducted soon to explore potential use cases and gather feedback.
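Med-PaLM 2 itself is not publicly callable, but the accuracy figure above is simply the fraction of multiple-choice items answered correctly. A minimal scoring sketch, with hypothetical question IDs and answer choices:

```python
# Hypothetical grading loop for USMLE-style multiple-choice questions.
# The question IDs, answer key, and predictions below are illustrative toy
# data; they do not come from Med-PaLM 2 or any real exam.

def score_mcq(predictions: dict, answer_key: dict) -> float:
    """Return the fraction of questions answered correctly."""
    correct = sum(1 for qid, choice in predictions.items()
                  if answer_key.get(qid) == choice)
    return correct / len(answer_key)

# Toy data: the model picks one of choices A-E per question.
answer_key = {"q1": "C", "q2": "A", "q3": "E", "q4": "B"}
predictions = {"q1": "C", "q2": "A", "q3": "D", "q4": "B"}

accuracy = score_mcq(predictions, answer_key)
print(f"accuracy = {accuracy:.1%}")  # 3 of 4 correct -> 75.0%
```

The reported 86.5% is this same ratio computed over a USMLE-style benchmark set.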

AlphaFold 2.3

AlphaFold, a cutting-edge AI system developed by DeepMind, predicts protein structures computationally with unparalleled accuracy and speed. In collaboration with EMBL’s European Bioinformatics Institute (EMBL-EBI), DeepMind has made more than 200 million AlphaFold-generated protein structure predictions openly accessible to the scientific community worldwide. 

These predictions cover almost all known cataloged proteins, offering the potential to significantly expand our knowledge of biology. AlphaFold is an AI-based protein-folding solution recognized by the Critical Assessment of Protein Structure Prediction (CASP) community, which challenges teams to predict the structures of proteins with known 3D shapes from their amino acid sequences alone.
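The openly accessible predictions are served from the AlphaFold Protein Structure Database, where each entry is keyed by a UniProt accession. A minimal sketch of building the download URL for a predicted structure; the exact URL pattern (versioned `model_v4` PDB files) is an assumption based on the public database and may change between releases:

```python
# Build the download URL for an AlphaFold-predicted structure in the public
# AlphaFold Protein Structure Database (hosted with EMBL-EBI). The URL
# pattern is an assumption and may change in future database releases.

def alphafold_pdb_url(uniprot_accession: str, version: int = 4) -> str:
    """Return the expected URL of the predicted PDB file for a UniProt entry."""
    return (f"https://alphafold.ebi.ac.uk/files/"
            f"AF-{uniprot_accession}-F1-model_v{version}.pdb")

# Example: human hemoglobin subunit beta (UniProt accession P68871).
url = alphafold_pdb_url("P68871")
print(url)
```

A client could then fetch the file with any HTTP library, e.g. `urllib.request.urlretrieve(url, "P68871.pdb")`.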

Bioformer

Pretrained language models like Bidirectional Encoder Representations from Transformers (BERT) have shown impressive results in natural language processing (NLP) tasks. Recently, BERT has been adapted for the biomedical domain. However, these models have a high number of parameters, making them computationally expensive for large-scale NLP applications. 

The creators of Bioformer hypothesized that reducing BERT’s number of parameters would not significantly affect its performance; they therefore developed Bioformer, a compact BERT model specifically designed for biomedical text mining. Bioformer uses a biomedical vocabulary and was pre-trained from scratch on PubMed abstracts and PubMed Central full-text articles. 

The creators trained two Bioformer models, Bioformer8L and Bioformer16L, which reduced the model size by 60% compared to BERT-Base.
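The size reduction follows directly from using a smaller hidden dimension and fewer layers. A back-of-the-envelope estimator makes this concrete; the configuration numbers below (vocabulary sizes, hidden size 384 vs 768, layer counts) are approximate assumptions for illustration, not exact published counts:

```python
# Rough parameter-count estimate for BERT-style encoders, showing why a
# smaller hidden size and fewer layers shrink the model. Configuration
# values are approximate assumptions, not exact published figures.

def approx_bert_params(vocab_size: int, hidden: int, layers: int) -> int:
    embeddings = vocab_size * hidden     # token embedding table
    per_layer = 12 * hidden * hidden     # attention + FFN weights (approx.)
    return embeddings + layers * per_layer

bert_base    = approx_bert_params(vocab_size=30_522, hidden=768, layers=12)
bioformer8l  = approx_bert_params(vocab_size=32_768, hidden=384, layers=8)
bioformer16l = approx_bert_params(vocab_size=32_768, hidden=384, layers=16)

print(f"BERT-Base    ~{bert_base / 1e6:.0f}M")
print(f"Bioformer8L  ~{bioformer8l / 1e6:.0f}M")
print(f"Bioformer16L ~{bioformer16l / 1e6:.0f}M")
print(f"16L reduction ~{1 - bioformer16l / bert_base:.0%}")
```

Under these assumptions the 16-layer variant lands at roughly a 60% reduction versus BERT-Base, in line with the figure above.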

RoseTTAFold All-Atom

RoseTTAFold is an accurate deep-learning program that models protein structures. It was designed for biomolecules made entirely of amino acids. In 2023, the new upgrade called RoseTTAFold All-Atom was introduced. With this upgrade, the program can model full biological assemblies that contain different types of molecules, including proteins, DNA, RNA, small molecules, metals, and other bonded atoms, including covalent modifications of proteins.

This upgrade is significant because proteins usually interact with other non-protein compounds to function correctly. With RoseTTAFold All-Atom, scientists can model how proteins and small-molecule drugs interact. This capability may be beneficial for drug discovery research.

ChatGLM-6B

Training and deploying a dialogue model in-house has long been considered infeasible for hospitals, which has hindered the adoption of LLMs in the medical industry. To address this, the developers collected databases of Chinese medical dialogues with the help of ChatGPT and used several techniques to train an easy-to-deploy LLM. Notably, they were able to fine-tune ChatGLM-6B on a single A100 80G GPU in just 13 hours, making a healthcare-purpose LLM very affordable. 

The fine-tuned ChatGLM-6B generates answers that are aligned with human preference. Furthermore, the developers used low-rank adaptation (LoRA) to fine-tune ChatGLM with only 7 million trainable parameters; this fine-tuning on the full Chinese medical dialogue dataset took 8 hours on an A100 GPU.
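The 7-million figure is easy to sanity-check: LoRA freezes each weight matrix and learns two low-rank factors instead, so an adapted in→out projection adds only rank × (in + out) trainable parameters. A minimal sketch of that arithmetic; the ChatGLM-6B dimensions below (hidden size 4096, a fused QKV projection, 28 layers, rank 16) are assumptions chosen for illustration:

```python
# Back-of-the-envelope count of LoRA trainable parameters. LoRA replaces the
# update to each frozen weight with two low-rank factors A (rank x in) and
# B (out x rank), so an adapted in->out layer adds rank * (in + out)
# trainable parameters. The ChatGLM-6B-like dimensions are assumptions.

def lora_trainable_params(in_dim: int, out_dim: int,
                          rank: int, n_layers: int) -> int:
    """Trainable parameters when one in->out matrix per layer is adapted."""
    return n_layers * rank * (in_dim + out_dim)

hidden = 4096                 # assumed hidden size
qkv_out = 3 * hidden          # fused query/key/value projection width
trainable = lora_trainable_params(hidden, qkv_out, rank=16, n_layers=28)

print(f"~{trainable / 1e6:.1f}M trainable parameters")  # ~7.3M vs ~6B frozen
```

Under these assumptions the count lands near 7 million, a tiny fraction of the 6 billion frozen weights, which is what makes single-GPU fine-tuning practical.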


Tannista Basak

Tannista is a tech journalist with a keen interest in AI, machine learning, and data science. A journalism graduate, she is also interested in art, photography, and filmmaking.