MIT Releases New Framework For Machines To Work Like Radiologists

To improve the interpretive abilities of machine learning algorithms, scientists are tapping an underused resource: the radiology reports that accompany medical images.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) in the United States are employing an underused resource to help machine learning algorithms analyse medical images better: the radiology reports that accompany those images.

According to MIT News, accurately evaluating an X-ray or other medical image is critical to a patient’s health and may even save a life. Because such an evaluation depends on the availability of a trained radiologist, a speedy response is not always possible.

Ruizhi “Ray” Liao, a postdoctoral researcher at MIT’s CSAIL, said, “Our goal is to train machines that are capable of recreating what radiologists do every day.”

While the concept of using computers to interpret images is not new, the MIT-led team is drawing on a previously underused resource — the vast body of radiology reports that accompany medical images, written by radiologists in routine clinical practice — to enhance the interpretive capabilities of machine learning algorithms. Additionally, the team is leveraging a notion from information theory called mutual information — a statistical measure of the interdependence of two distinct variables — to bolster their approach’s success.
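For reference (this is the standard textbook definition, not a formula quoted from the MIT paper), the mutual information between two variables $X$ and $Y$ is

$$
I(X;Y) = \sum_{x,\,y} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)},
$$

which is zero when the two variables are independent and grows as each becomes more predictive of the other. Here, the two variables are the learned representations of the image and of the accompanying report.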

The following is how it works: 

  • To begin, a neural network is trained to assess the severity of a disease, such as pulmonary oedema, by being shown a large number of X-ray images of patients’ lungs, each accompanied by a doctor’s severity rating. 
  • That information is encoded as a set of numbers. A separate neural network represents the text of the radiology report with a different set of numbers. 
  • The information from images and text is then integrated by a third neural network in a coordinated approach that maximises the mutual information between the two datasets (a minimal illustrative sketch of this setup follows below).
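To make the coordinated training concrete, here is a minimal, hypothetical sketch in PyTorch. It assumes an InfoNCE-style contrastive loss as the mutual-information objective and uses toy encoders and random data; the class names, architectures, and dimensions are illustrative assumptions, not the design used in the MIT work.

```python
# Illustrative sketch only: assumes an InfoNCE-style contrastive bound on mutual
# information between image and report embeddings. Encoders and data are toys.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Maps an X-ray image to an embedding vector (stand-in for the image network)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class TextEncoder(nn.Module):
    """Maps a tokenised radiology report to an embedding vector (stand-in for the text network)."""
    def __init__(self, vocab_size=5000, embed_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 64)
        self.rnn = nn.GRU(64, embed_dim, batch_first=True)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))
        return h.squeeze(0)

def info_nce(img_emb, txt_emb, temperature=0.1):
    """Contrastive (InfoNCE) loss: minimising it maximises a lower bound on the
    mutual information between paired image and report embeddings."""
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)
    logits = img_emb @ txt_emb.t() / temperature   # similarity of every image to every report
    targets = torch.arange(img_emb.size(0))        # the matching report is the "positive"
    return F.cross_entropy(logits, targets)

# Toy training step on random data, just to show how the pieces fit together.
img_net, txt_net = ImageEncoder(), TextEncoder()
opt = torch.optim.Adam(list(img_net.parameters()) + list(txt_net.parameters()), lr=1e-4)

images = torch.randn(8, 1, 64, 64)          # batch of 8 single-channel "X-rays"
reports = torch.randint(0, 5000, (8, 40))   # batch of 8 tokenised "reports"

loss = info_nce(img_net(images), txt_net(reports))
opt.zero_grad()
loss.backward()
opt.step()
```

The point the sketch tries to capture is that both encoders are updated with a single objective, so the image embeddings and the report embeddings are pushed to become predictive of each other, which is what a high mutual information between the two means in practice.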

Polina Golland, a principal investigator at CSAIL, stated, “When the mutual information between images and text is high, images are highly predictive of the text, and the text is highly predictive of the images.”

The work was supported by the National Institutes of Health’s National Institute of Biomedical Imaging and Bioengineering, Wistron, the MIT-IBM Watson AI Lab, the MIT Deshpande Center for Technological Innovation, the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (J-Clinic), and the MIT Lincoln Lab.

Dr. Nivash Jeevanandam
Nivash holds a doctorate in information technology and has been a research associate at a university and a development engineer in the IT industry. Data science and machine learning excite him.
