
MIT researchers combine deep learning and physics to fix motion-corrupted MRI scans



Compared with other imaging modalities like X-rays or CT scans, MRI provides high-quality soft-tissue contrast. Unfortunately, MRI is highly sensitive to motion: even the smallest movements produce image artifacts. These artifacts can obscure critical details from the physician, putting patients at risk of misdiagnosis or inappropriate treatment. But researchers at MIT have developed a deep learning model capable of motion correction in brain MRI.

“Motion is a common problem in MRI,” explains Nalini Singh, an Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic)-affiliated PhD student in the Harvard-MIT Program in Health Sciences and Technology (HST) and lead author of the paper. “It’s a pretty slow imaging modality.”

MRI sessions can take anywhere from a few minutes to an hour, depending on the type of images required. Even during the shortest scans, small movements can have dramatic effects on the resulting image. Unlike camera imaging, where motion typically manifests as a localized blur, motion in MRI often produces artifacts that corrupt the whole image. Patients may be anesthetized or asked to limit deep breathing to minimize motion. However, these measures often cannot be taken in populations particularly susceptible to motion, including children and patients with psychiatric disorders.
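The global nature of these artifacts can be sketched with a toy simulation (this is an illustration of the underlying physics, not the researchers' method): an MRI scanner measures the image's spatial-frequency content (k-space) line by line over time, so if the patient moves partway through the scan, the early and late lines describe two different head positions, and the inconsistency spreads ghosting across the entire field of view.

```python
import numpy as np

def simulate_midscan_shift(image, shift_pixels):
    """Reconstruct an image whose second half of k-space rows was
    'acquired' after the object underwent a rigid translation."""
    kspace = np.fft.fft2(image)
    moved_kspace = np.fft.fft2(np.roll(image, shift_pixels, axis=0))
    corrupted = kspace.copy()
    half = kspace.shape[0] // 2
    corrupted[half:, :] = moved_kspace[half:, :]  # later rows see the moved object
    return np.abs(np.fft.ifft2(corrupted))

# Small synthetic "head": a bright square in an empty field of view.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

corrupted = simulate_midscan_shift(img, shift_pixels=3)
error = np.abs(corrupted - img)
# Unlike a camera's localized blur, the artifact is not confined to
# the object: ghosting appears in rows far from the square.
```

Here `error` is nonzero even in the top rows of the image, well away from the square that actually moved, which is why a single twitch can spoil an entire scan.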

The paper, titled “Data Consistent Deep Rigid MRI Motion Correction,” was recently awarded best oral presentation at the Medical Imaging with Deep Learning (MIDL) conference in Nashville, Tennessee. The method computationally constructs a motion-free image from motion-corrupted data without changing anything about the scanning procedure. “Our aim was to combine physics-based modeling and deep learning to get the best of both worlds,” Singh says.

The importance of this combined approach lies in ensuring consistency between the output image and the actual measurements of what is being depicted. Otherwise, the model creates “hallucinations”: images that appear realistic but are physically and spatially inaccurate, potentially worsening diagnostic outcomes.
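The data-consistency idea can be sketched in a few lines (a hypothetical toy, not the authors' code): any candidate motion-free image, pushed through the known physics of the scan, which here means applying the estimated motion and then the Fourier encoding, must reproduce the k-space data the scanner actually recorded. A large residual flags the candidate as a hallucination rather than a physically plausible reconstruction.

```python
import numpy as np

def forward_model(image, shifts):
    """Simulate acquisition: each k-space row is measured while the
    object sits at a (possibly different) rigid translation."""
    rows = []
    for r, s in enumerate(shifts):
        moved = np.roll(image, s, axis=0)      # rigid motion at readout r
        rows.append(np.fft.fft2(moved)[r, :])  # measure one k-space row
    return np.array(rows)

def data_consistency_residual(candidate, measured_kspace, shifts):
    """Mismatch between what the candidate image predicts under the
    scan physics and what was measured; small = physically consistent."""
    predicted = forward_model(candidate, shifts)
    return np.linalg.norm(predicted - measured_kspace)

# Toy scan: the object shifts by 2 pixels halfway through acquisition.
truth = np.zeros((32, 32))
truth[10:20, 10:20] = 1.0
shifts = [0] * 16 + [2] * 16
measured = forward_model(truth, shifts)

# The true image explains the data; a realistic-looking but wrong
# ("hallucinated") image does not.
good = data_consistency_residual(truth, measured, shifts)
fake = np.zeros((32, 32))
fake[8:18, 12:22] = 1.0
bad = data_consistency_residual(fake, measured, shifts)
```

In this sketch `good` is essentially zero while `bad` is large, which is the property that lets a physics-aware reconstruction reject outputs that merely look plausible.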

Procuring an MRI free of motion artifacts, particularly from patients with neurological disorders that cause involuntary movement, such as Alzheimer’s or Parkinson’s disease, would benefit more than just patient outcomes. A study from the University of Washington Department of Radiology estimated that motion affects 15 percent of brain MRIs, and that motion requiring repeated scans or imaging sessions to obtain diagnostic-quality images costs approximately $115,000 per scanner annually.

According to Singh, future work could explore more sophisticated types of head motion as well as motion in other body parts. For instance, fetal MRI suffers from rapid, unpredictable motion that cannot be modeled only by simple translations and rotations. 

“This line of work from Singh and company is the next step in MRI motion correction. Not only is it excellent research work, but I believe these methods will be used in all kinds of clinical cases: children and older folks who can’t sit still in the scanner, pathologies which induce motion, studies of moving tissue, even healthy patients will move in the magnet,” says Daniel Moyer, an assistant professor at Vanderbilt University. “In the future, I think that it likely will be standard practice to process images with something directly descended from this research.”

Co-authors of this paper include Nalini Singh, Neel Dey, Malte Hoffmann, Bruce Fischl, Elfar Adalsteinsson, Robert Frost, Adrian Dalca and Polina Golland. This research was supported in part by GE Healthcare and by computational hardware provided by the Massachusetts Life Sciences Center. The research team thanks Steve Cauley for helpful discussions. Additional support was provided by NIH NIBIB, NIA, NIMH, NINDS, the Blueprint for Neuroscience Research, part of the multi-institutional Human Connectome Project, the BRAIN Initiative Cell Census Network, and a Google PhD Fellowship.
