A typical MRI scan can last 40 to 60 minutes, as clinicians have to gather sufficient data for a diagnostic examination. Last year, researchers from Facebook AI and NYU Langone Health released fastMRI, a new way to use AI to accelerate the MRI scanning process. But how good are these AI-based methods compared to traditional ones?
Researchers at the University of Oslo have raised concerns about the widespread use of deep learning for image reconstruction. MRI, for instance, is based on sampling the Fourier transform, whereas CT is based on sampling the Radon transform. These are rather different models, yet instability persists for both sampling modalities when deep learning is used. According to the Oslo researchers, there are several potential forms of instability in image reconstruction:
- Instabilities with respect to certain tiny perturbations,
- Instabilities with respect to small structural changes and
- Instabilities with respect to changes in the number of samples.
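To make the sampling setup concrete, here is a minimal sketch of the undersampled Fourier measurement model that accelerated MRI is built on: the scanner observes only a masked subset of the image's Fourier coefficients. All names and sizes below are illustrative, not from the paper's code.

```python
import numpy as np

# Illustrative MRI forward model: y = M * F(x), where F is the 2D Fourier
# transform and M is a binary subsampling mask. A real scanner acquires
# k-space lines; a random mask is used here only for simplicity.

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))      # stand-in for an MR image

kspace = np.fft.fft2(image)                # fully sampled Fourier data

# Keep roughly 25% of the Fourier coefficients (4x acceleration).
mask = rng.random(kspace.shape) < 0.25
measurement = mask * kspace                # undersampled k-space data

print(f"fraction of k-space sampled: {mask.mean():.2f}")
```

Reconstruction methods, classical or learned, must then invert this incomplete measurement, which is exactly where the instabilities listed above can arise.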
Neural networks usually have to be retrained for each subsampling pattern. Counterintuitively, adding more samples can cause the quality of the image reconstruction to deteriorate, so retraining has to be performed for every specific subsampling pattern, subsampling ratio, and image dimension used. To evaluate AI’s potential in clinical imaging, especially MRI, researchers at Stanford compared trained and untrained neural networks against current non-AI-based image reconstruction methods. Trained networks rely on high-quality examples and learn reconstruction in a supervised manner. In contrast, untrained networks represent cutting-edge advances in unsupervised AI that do not require any training data at all.
In this work, the researchers studied robustness in the context of accelerated multi-coil MRI reconstruction because this is one of the most popular applications of compressive sensing and an important medical imaging technology.
Convolutional neural networks are trained either to map the measurement directly to an artifact-free image, or to map a coarse least-squares reconstruction of the under-sampled measurement to an artifact-free image. The best-performing methods in the fastMRI competition are all trained networks and yield significant improvements over classical methods. Traditional compressed-sensing (CS) methods remain popular in MRI reconstruction and are used in clinical practice. Untrained networks are also powerful for compressive sensing, and simple convolutional architectures such as the Deep Decoder work well in practice.
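The "coarse least-squares reconstruction" that trained CNNs typically take as input can be sketched in a few lines: zero-fill the missing k-space samples and apply the inverse Fourier transform. This is only a hedged stand-in for the pipelines used in the fastMRI work, with illustrative names and sizes.

```python
import numpy as np

# Zero-filled reconstruction: apply the adjoint of the sampling operator.
# For an orthonormal Fourier basis this coincides with the minimum-norm
# least-squares solution; a CNN would then remove its aliasing artifacts.

rng = np.random.default_rng(1)
image = rng.standard_normal((64, 64))      # stand-in for an MR image

kspace = np.fft.fft2(image)
mask = rng.random(kspace.shape) < 0.25     # ~4x undersampling
undersampled = mask * kspace

zero_filled = np.fft.ifft2(undersampled).real

# The residual error reflects the artifacts a trained network learns to remove.
err = np.linalg.norm(zero_filled - image) / np.linalg.norm(image)
print(f"relative error of zero-filled reconstruction: {err:.2f}")
```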
For the experiments, the researchers picked ten randomly chosen proton-density-weighted knee MRI scans from the fastMRI validation set. For each of those images, a small perturbation was added to its measurement. The results showed that both trained and untrained methods are sensitive to small adversarial perturbations.
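The perturbation experiment above can be sketched as follows: perturb the measurement slightly and compare the two reconstructions. Here a linear zero-filled reconstruction stands in for the trained and untrained networks studied in the paper (which are nonlinear and can amplify such perturbations far more); everything below is an illustrative assumption, not the paper's code.

```python
import numpy as np

# Sensitivity probe: how much does the reconstruction change when a small
# perturbation is added to the undersampled measurement?

rng = np.random.default_rng(2)
image = rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.25

def measure(x):
    """Undersampled Fourier measurement y = M * F(x)."""
    return mask * np.fft.fft2(x)

def reconstruct(y):
    """Zero-filled reconstruction, a linear stand-in for a learned method."""
    return np.fft.ifft2(y).real

y = measure(image)

# Perturbation scaled to 1% of the measurement's norm, on sampled entries only.
delta = rng.standard_normal(y.shape) + 1j * rng.standard_normal(y.shape)
delta *= mask
delta *= 0.01 * np.linalg.norm(y) / np.linalg.norm(delta)

shift = np.linalg.norm(reconstruct(y + delta) - reconstruct(y))
ratio = shift / np.linalg.norm(delta)
print(f"output change per unit input perturbation: {ratio:.3f}")
```

For this linear map the ratio is bounded, whereas the paper's adversarial search looks for the worst-case perturbation direction for each (possibly nonlinear) reconstruction method.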
For the next experiment, which checked for dataset shift, the researchers tested on the Stanford dataset, collecting all 18 available knee volumes. “Our main finding is that all reconstruction methods perform worse on the new MRI samples, but by a similar amount. Moreover we find that challenging images are naturally difficult to reconstruct, since both trained and untrained methods are equally prone to this shift,” wrote the researchers.
The paper studied robustness (i) against small adversarial perturbations, (ii) to distribution shifts, and (iii) in recovering fine details, for three families of MRI reconstruction methods: trained deep networks, untrained deep networks, and classical sparsity-based approaches. The findings are as follows:
- Both deep-learning-based and classical sparsity-based image reconstruction methods are sensitive to small, adversarially selected perturbations.
- The performance ranking of the methods is typically preserved even under distribution shifts.
- Reconstruction accuracy is correlated with small-feature recovery; hence trained neural networks, which achieve the highest accuracy, also recover fine details of an image best.
- The deep learning methods that perform best on reconstruction accuracy are also best under realistic distribution shifts and at small-feature recovery, and the researchers could not find them to be more sensitive to adversarial perturbations.
Read more here.