MIT Releases New Framework For Machines To Work As Radiologists

  • To improve the interpretive abilities of machine learning algorithms, scientists are exploring an underused resource: the radiology reports that accompany medical images.

Researchers at the MIT Computer Science & Artificial Intelligence Lab (CSAIL), United States, are employing an underused resource — the radiology reports included with medical images — to help machine learning algorithms analyse those images more effectively.

According to MIT News, accurately evaluating an X-ray or other medical image is critical to a patient’s health and may even save a life. But because such an evaluation depends on the availability of a trained radiologist, a speedy response is not always possible.


Ruizhi “Ray” Liao, a postdoctoral researcher at MIT’s CSAIL, said, “Our goal is to train machines capable of recreating what radiologists do on a daily basis.”

While the concept of using computers to interpret images is not new, the MIT-led team is drawing on a previously underused resource — the vast body of radiology reports that accompany medical images, written by radiologists in routine clinical practice — to enhance the interpretive capabilities of machine learning algorithms. The team also leverages a notion from information theory called mutual information — a statistical measure of the interdependence of two variables — to bolster the approach’s success.
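For intuition, mutual information between two discrete variables can be computed directly from their joint probability table. The sketch below is illustrative only (the toy tables are not from the paper): it shows that perfectly dependent variables carry one full bit of shared information, while independent variables share none.

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint probability table p(x, y),
    using I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    px = [sum(row) for row in joint]            # marginal p(x): sum over columns
    py = [sum(col) for col in zip(*joint)]      # marginal p(y): sum over rows
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:                         # 0 * log(0) is taken as 0
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

# Perfectly dependent variables: knowing X pins down Y -> 1 bit of shared information.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))   # 1.0
# Independent variables: the joint factorises into the marginals -> 0 bits.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```

In the team’s setting, the two “variables” are the learned representations of an image and of its report: high mutual information means each one is strongly predictive of the other.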

The following is how it works: 

  • First, a neural network is trained to estimate the severity of a disease, such as pulmonary oedema, by presenting it with a large number of X-ray images of patients’ lungs, each paired with a doctor’s severity rating.
  • The network encodes that information as a vector of numbers. A second neural network represents the text, encoding the report’s information as a different vector of numbers.
  • A third neural network then integrates the information from images and text in a coordinated way that maximises the mutual information between the two datasets.
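The steps above can be sketched in code. The article does not give the team’s exact objective, so the snippet below uses a common stand-in: a contrastive (InfoNCE-style) loss, which is a standard lower bound on the mutual information between two encoders’ outputs. The linear “encoders”, array shapes, and toy data are all illustrative assumptions, not the paper’s architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy linear 'encoder' standing in for a neural network:
    project the input to an embedding and unit-normalise it."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce_loss(img_emb, txt_emb, temperature=0.1):
    """Contrastive (InfoNCE) loss: minimising it maximises a lower bound on
    the mutual information between the two embedding sets. Matching
    image/report pairs sit on the diagonal of the similarity matrix."""
    sim = img_emb @ txt_emb.T / temperature            # all pairwise similarities
    sim -= sim.max(axis=1, keepdims=True)              # numerical stability
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))              # pull true pairs together

# Toy paired "images" and "reports": 8 samples sharing a latent structure.
latent = rng.normal(size=(8, 4))
images = latent + 0.1 * rng.normal(size=(8, 4))
texts = latent + 0.1 * rng.normal(size=(8, 4))

W_img, W_txt = rng.normal(size=(4, 16)), rng.normal(size=(4, 16))
loss = info_nce_loss(encode(images, W_img), encode(texts, W_txt))
print(f"InfoNCE loss: {loss:.3f}")
```

In a real system the encoder weights would be trained by gradient descent to drive this loss down, aligning each image’s embedding with its own report rather than anyone else’s.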

Polina Golland, a principal investigator at CSAIL, stated that “When the mutual information between images and text is high, images are highly predictive of the text, and the text is highly predictive of the images.”

The work was supported by the National Institutes of Health’s National Institute of Biomedical Imaging and Bioengineering, Wistron, the MIT-IBM Watson AI Lab, the MIT Deshpande Center for Technological Innovation, the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (J-Clinic), and the MIT Lincoln Lab.

Copyright Analytics India Magazine Pvt Ltd
