Motion Correction Strategies
Acq/Recon/Analysis
Monday, 17 May 2021
Digital Poster program numbers: 1352 - 1370 and 1371 - 1390

Oral Session - Motion Correction Strategies
Acq/Recon/Analysis
Monday, 17 May 2021 16:00 - 18:00
  • Motion-corrected 3D-EPTI with 4D navigator for fast and robust whole-brain quantitative imaging
    Zijing Dong1,2, Fuyixue Wang1,3, Jie Xiang4, and Kawin Setsompop5,6
    1Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States, 2Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, United States, 3Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, United States, 4Tsinghua University, Beijing, China, 5Department of Radiology, Stanford University, Stanford, CA, United States, 6Department of Electrical Engineering, Stanford University, Stanford, CA, United States
    A motion-correction method is developed for 3D-EPTI with 4D-navigator acquisition that achieves accurate estimation of 3D motion and B0-inhomogeneity changes, allowing effective correction for fast and motion-robust quantitative imaging with negligible cost in scan efficiency. 
    Figure 5. Results of in-vivo experiment. (a) The estimated motion parameters using the 4D navigator compared with the reference estimation using high-resolution volumes. (b) Comparison of the estimated B0 changes relative to the first TR at two different head positions using the proposed navigator and reference acquisition. (c) Reconstructed quantitative parameters using 3D-EPTI data without motion (first row), with motion but without correction (middle row), and with motion correction (bottom row).
    Figure 1. (a) Illustration of the IR-GE 3D-EPTI acquisition with 4D navigator. In each TR, a 4D block is acquired at each TI. 20 TIs are acquired for IR-GE with golden-angle radial-block sampling, followed by 4 additional TIs for navigation that use the same sampling in every TR. The optimized spatiotemporal encoding used for the block-wise 4D acquisition is shown in (b). 3D motion parameters and B0-inhomogeneity changes across the whole brain can be estimated from the reconstructed multi-echo navigators.
  • MERLIN: Motion Insensitive Silent Neuroimaging
    Emil Ljungberg1, Tobias Wood1, Ana Beatriz Solana2, Steven C.R. Williams1, Gareth J. Barker1, and Florian Wiesinger1,2
    1Neuroimaging, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, United Kingdom, 2ASL Europe, GE Healthcare, Munich, Germany
    We demonstrate silent motion corrected neuroimaging using a ZTE sequence together with an interleaved spiral phyllotaxis k-space trajectory. A silent and motion insensitive MRI protocol can reduce failed or refused scans, and thus save time and money in both clinical and research settings.
    Figure 4: Comparison of image quality without motion correction, with MERLIN motion correction (MOCO), and without motion. A clear improvement in image quality is observed with MOCO.
    Figure 3: Animated figure showing the estimated translations (Δxyz) and rotations (αx, αy, αz), which show a pattern similar to the motion paradigm described in Figure 2A. The gray section at the beginning indicates dummy and low-resolution spokes used to fill the dead-time gap.
  • Visualizing the cerebellar cortical layers with prospective motion correction
    Nikos Priovoulos1, Mads Andersen2, Vincent O Boer3, and Wietske van der Zwaag1
    1Spinoza Center, Amsterdam, Netherlands, 2Philips Healthcare, Copenhagen, Denmark, 3Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital, Hvidovre, Denmark
    Cerebellar cortical layers can be reproducibly visualized at 7 Tesla by interleaving a susceptibility-weighted FLASH sequence with fat-navigator-based prospective motion correction in MRI-naïve individuals.
    Figure 4: Magnitude (A) and phase (B) sagittal slices of the cerebellum. C-D, zoomed views. Note the white stripes in D (examples with black arrows), next to the WM, as reported before in [2]. These stripes were consistent across successive slices (E,G,I: magnitude, F,H,J: phase) and are likely to be the granular cerebellar layer. Note they fell within the cerebellar gray matter, as seen in the magnitude image (blue bar in I,J).
    Figure 2: Sagittal slices of the cerebellum (magnitude image; 0.2x0.2mm2 in-plane resolution). Each row is a participant; left column: PMC off, right column: PMC on. The white boxes show zoomed views of the cerebellum. AES (average edge strength) is a measure of sharpness; FD (framewise displacement) is a measure of shot-to-shot motion as determined post hoc from the fat-navigators. Note the marked improvements in image quality when using PMC.
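    A note on the FD metric quoted in Figure 2: framewise displacement is commonly computed from the six rigid-body parameters by summing the absolute frame-to-frame differences, with rotations converted to arc length on an assumed head radius (the 50 mm default below follows the common Power et al. convention and is an assumption, not necessarily the value used here). A minimal sketch in Python:

      import numpy as np

      def framewise_displacement(params, head_radius_mm=50.0):
          """Framewise displacement from rigid-body motion parameters.

          params: (N, 6) array with columns [tx, ty, tz, rx, ry, rz],
                  translations in mm and rotations in radians.
          Returns an (N-1,) array of FD values in mm.
          """
          diffs = np.abs(np.diff(params, axis=0))
          diffs[:, 3:] *= head_radius_mm   # rotation differences -> arc length in mm
          return diffs.sum(axis=1)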
  • Motion Estimation for Brain Imaging at Ultra-High Field Using Pilot-Tone: Comparison with DISORDER Motion Compensation
    Tom Wilkinson1,2, Felipe Godinez1,2, Yannick Brackenier1,2, Raphael Tomi-Tricot1,2,3, Lucilio Cordero-Grande1,2,4, Philippa Bridgen1,2, Sharon Giles1,2, Joseph V Hajnal1,2, and Shaihan J Malik1,2
    1Biomedical Engineering Department, School of Biomedical Engineering and Imaging Sciences, Kings College London, London, United Kingdom, 2Centre for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, Kings College London, London, United Kingdom, 3MR Research Collaborations, Siemens Healthcare Limited, Frimley, United Kingdom, 4Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid and CIBER-BNN, Madrid, Spain
    A ‘pilot-tone’ for 7T head MRI was constructed by broadcasting RF into the scanner room during data acquisition. This signal was shown to enable motion estimation; the resulting estimates were then compared with those obtained from the DISORDER joint motion estimation and reconstruction method.
    Figure 2: Hybrid k-space (left) containing the pilot-tone signal, and amplitude (middle) and phase (right) traces with motion encoded in each coil. Each colored line is a channel. The data shown are from the DISORDER set.
    Figure 5: DISORDER reconstructed images. Top row: Un-corrected; Middle row: Pilot-tone motion corrected; Bottom row: Pilot-tone motion corrected + corrected for pose-dependent fields.
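    As a generic illustration of how a pilot-tone trace is typically read out of hybrid k-space (not the authors' pipeline; the tone-bin index is assumed known from the broadcast frequency):

      import numpy as np

      def extract_pilot_tone(raw, tone_bin):
          """Pilot-tone amplitude and phase traces from raw readouts.

          raw      : complex array (n_readouts, n_coils, n_samples)
          tone_bin : FFT bin index containing the broadcast tone
          Returns amplitude and phase, each of shape (n_readouts, n_coils).
          """
          spectra = np.fft.fft(raw, axis=-1)
          tone = spectra[:, :, tone_bin]
          return np.abs(tone), np.angle(tone)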
  • PET/MR respiratory motion gating for free
    Florian Wiesinger1, Timothy Deller2, Floris Jansen2, Jose de Arcos Rodriguez1, Ronny R Buechel3, Philipp A Kaufmann3, and Edwin EGW ter Voert3
    1GE Healthcare, Munich, Germany, 2GE Healthcare, Waukesha, WI, United States, 3Department of Nuclear Medicine, University Hospital Zurich, Zurich, Switzerland
    The presented method provides an accurate respiratory waveform from list-mode PET data. This can then be used for retrospective motion gating of the acquired MR and/or PET data. The method comes for free, without requiring an extra motion sensor or complicating the PET/MR imaging workflow.
    Figure 1 (animated): High temporal-framerate PET reconstruction (Δt=1s) in the transient phase ~8mins after the injection of the Ammonia PET tracer. Corresponding respiratory waveforms (bottom) obtained using either Principal Component Analysis (PCA, black) or a pencil beam navigator over the lung-liver interface (red). This patient demonstrates a deep, regular diaphragmatic breathing pattern.
    Figure 2 (animated): ZTE lung images corresponding to the Ammonia PET tracer patient shown in Figure 1. Because of the deep diaphragmatic breathing, the uncorrected (averaged, left) images show strong motion blurring (especially at the lung-liver interface). Soft-gated respiratory binning (7 phases, 2nd column) resolves the diaphragmatic breathing cycle into 7 phases. Most of the data are acquired in the end-expiratory phase (3rd column), which also provides the sharpest image. Its Maximum Intensity Projection (MIP, right) depicts the vascular anatomy and lung lesions in fine detail.
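    A rough sketch of the PCA step named in Figure 1 (illustrative only, not the authors' implementation): reshape the dynamic frames to time x voxels, remove the temporal mean, and take the leading principal component as the respiratory surrogate.

      import numpy as np

      def respiratory_waveform_pca(frames):
          """Leading principal component of a dynamic image series.

          frames : (n_frames, nx, ny, nz) reconstructed high-framerate volumes
          Returns an (n_frames,) surrogate respiratory waveform (arbitrary sign and units).
          """
          X = frames.reshape(frames.shape[0], -1).astype(float)
          X -= X.mean(axis=0, keepdims=True)        # remove the temporal mean per voxel
          U, s, _ = np.linalg.svd(X, full_matrices=False)
          return U[:, 0] * s[0]                     # temporal scores of the first component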
  • Separable motion estimation and correction for 2D TSE imaging using a rapid 3D volumetric scout acquisition
    Daniel Polak1,2, Daniel Nicolas Splitthoff1, Berkin Bilgic2,3,4, Lawrence L. Wald2,3,4, Kawin Setsompop5, and Stephen F. Cauley2,3,4
    1Siemens Healthcare GmbH, Erlangen, Germany, 2Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States, 3Department of Radiology, Harvard Medical School, Boston, MA, United States, 4Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States, 5Department of Radiology, Stanford, Stanford, CA, United States
    A rapid 3D volumetric scout scan facilitates efficient estimation of 3D motion within 2D TSE imaging data. Using a 3D volumetric image reconstruction, our approach achieves robust artifact reduction and was evaluated in-vivo for representative motion trajectories.
    Figure 1: Alternating optimization is computationally demanding as repeated updates of the motion vector θ and image estimate x are needed. SAMER speeds up the optimization by utilizing a scout scan as an image prior xsc. This avoids the need for repeated updates of x during the motion estimation. In this work, SAMER is extended to 2D TSE imaging and motion parameters are estimated from a low-resolution 3D SPACE scout. This is feasible as each shot of the imaging scan (green circles) has common frequency overlap with the low-resolution scout (dashed orange box).
    Figure 4: In Acq. 1 (mostly in-plane rotation), SAMER mitigated most motion artifacts, which allowed fine anatomical structures, such as a blood vessel, to be recovered (yellow arrow in zoom-in). The image quality improvement is also reflected by a decrease in data-consistency error (DC). In Acq. 2 (through-plane rotation), SAMER also yielded robust artifact reduction; however, the zoom-in shows a small loss of spatial resolution (red arrow).
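    A toy 2D sketch of the scout-driven idea summarized in Figure 1 (not the SAMER implementation): with the scout image held fixed as the prior, each shot's motion parameters are found by minimizing the data-consistency error of a rigid forward model against that shot's k-space lines. The simple SENSE-style forward model and all names below are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import rotate
      from scipy.optimize import minimize

      def forward_shot(x_scout, coils, theta, lines):
          """Rigid 2D forward model: rotate/shift the scout, apply coils, sample PE lines."""
          angle_deg, dx, dy = theta
          moved = (rotate(x_scout.real, angle_deg, reshape=False, order=1)
                   + 1j * rotate(x_scout.imag, angle_deg, reshape=False, order=1))
          k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(coils * moved[None],
                                                           axes=(-2, -1)), axes=(-2, -1)),
                              axes=(-2, -1))
          ny, nx = x_scout.shape
          ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
          kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
          k = k * np.exp(-2j * np.pi * (kx * dx + ky * dy))   # in-plane shift as a phase ramp
          return k[:, lines, :]                               # keep only this shot's PE lines

      def estimate_shot_motion(x_scout, coils, y_shot, lines):
          """Search rotation (deg) and shifts (pixels) that minimize the DC error."""
          def dc_error(theta):
              return np.sum(np.abs(forward_shot(x_scout, coils, theta, lines) - y_shot) ** 2)
          return minimize(dc_error, x0=np.zeros(3), method="Nelder-Mead").x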
  • Automated motion correction of multi-slice fetal brain MRI using a deep recursive framework
    Wen Shi1,2,3, Jiwei Sun1, Yamin Li3, Cong Sun4, Tianshu Zheng1, Yi Zhang1, Guangbin Wang4, and Dan Wu1
    1Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China, 2Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, United States, 3School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China, 4Department of Radiology, Shandong Medical Imaging Research Institute, Cheeloo College of Medicine, Shandong University, Jinan, China
    We proposed an automated motion correction method for multi-slice fetal brain MRI using a deep recursive framework, which showed superior performance compared to state-of-the-art learning-based motion estimation algorithms in fetal imaging.
    Figure 2. Automated motion estimation and reconstruction pipeline. The network consists of $$$N$$$ iterations between motion estimation blocks and a registration-based 3D reconstruction block. (a) Motion estimation block. (b) The proposed deep recursive framework, including motion estimation blocks from three orthogonal orientations and a rigid slice-to-volume registration-based 3D reconstruction transformer.
    Figure 3. The performance of different motion estimation algorithms on the simulated motion-corrupted data. (a) Mean absolute error (MAE) and root mean square error (RMSE) metrics of rotation angle. (b) One typical case (31-week GA fetus in the testing set) of the motion correction using deep recursive framework (DeepRF). DeepPE: Deep Pose Estimation, DeepPMT: Deep Predictive Motion Tracking.
  • LAPNet: Deep-learning based non-rigid motion estimation in k-space from highly undersampled respiratory and cardiac resolved acquisitions
    Thomas Küstner1,2, Jiazhen Pan3, Haikun Qi2, Gastao Cruz2, Kerstin Hammernik3,4, Christopher Gilliam5, Thierry Blu6, Sergios Gatidis1, Daniel Rueckert3,4, René Botnar2, and Claudia Prieto2
    1Department of Radiology, Medical Image and Data Analysis (MIDAS), University Hospital of Tübingen, Tübingen, Germany, 2School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom, 3AI in Medicine and Healthcare, Klinikum rechts der Isar, Technical University of Munich, München, Germany, 4Department of Computing, Imperial College London, London, United Kingdom, 5RMIT, University of Melbourne, Melbourne, Australia, 6Chinese University of Hong Kong, Hong Kong, Hong Kong
    A novel deep learning non-rigid registration in k-space inspired by optical flow is proposed. For highly accelerated acquisitions of respiratory and cardiac motion, this enables aliasing-free motion estimation which shows superior accuracy to conventional image-based registrations.
    Fig. 1: Proposed LAPNet to perform non-rigid registration in k-space. Moving νm and reference νr k-spaces are tapered to a smaller support W. The bundle of k-space patches is processed by a succession of convolutional filters (kernel sizes and channels are stated) to estimate the in-plane flows u1,u2 at the central voxel location determined by the tapering T/window W of size 33x33x33. The overall 3D deformation field u is obtained from a sliding window over all voxels in orthogonal directions.
    Fig. 2: Respiratory non-rigid motion estimation in a patient with a liver metastasis in segment VIII. Motion displacement is estimated by the proposed LAPNet in k-space in comparison to image-based non-rigid registration by FlowNet-S (neural network) and NiftyReg (cubic B-spline). Estimated flow displacements are depicted in coronal and sagittal orientation. Undersampling was performed prospectively with a 3D Cartesian random undersampling for 8x and 30x acceleration.
  • Nonrigid Motion-corrected Reconstruction Using Image-space Gridding for Free-breathing Cardiac MRI
    Kwang Eun Jang1,2, Mario O. Malavé1, Dwight G. Nishimura1, and Shreyas S. Vasanawala3
    1Magnetic Resonance Systems Research Lab (MRSRL), Department of Electrical Engineering, Stanford University, Stanford, CA, United States, 2Department of Bioengineering, Stanford University, Stanford, CA, United States, 3Department of Radiology, Stanford University, Stanford, CA, United States
    We propose image-space gridding that resamples images onto arbitrary grids, which provides a pair of operators that represents the forward and adjoint of a nonrigid transform. This allows existing nonrigid image registration techniques to be incorporated into model-based reconstructions. 

    Figure 4. Reconstructed slices near LAD and RCA. The proposed method exhibited improved sharpness not only near the coronary arteries (orange arrows) but also in non-cardiac regions (green arrows) compared to 3D translational motion-corrected reconstruction.

    Figure 1. Overview. (a) For each heartbeat, we collect segmented cones interleaves that sparsely cover the entire k-space along with another set of cones interleaves for the 3D iNAVs. (b) 3D translational motion is estimated using the iNAVs. Segmented cones data from individual heartbeats are binned to reconstruct image-based self navigators. (c) Nonrigid motion is estimated using an existing image registration method. (d) The nonrigid motion-corrected reconstruction is achieved by solving an optimization problem using image-space gridding operators.
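    The key property exploited above is that resampling onto an arbitrary grid is a linear operation, so its transpose supplies the adjoint needed inside a model-based reconstruction. A minimal 2D sketch under that assumption (bilinear weights rather than the authors' gridding kernel; warp gives the source coordinate of every output voxel):

      import numpy as np
      from scipy.sparse import coo_matrix

      def resampling_operator(warp, shape):
          """Sparse bilinear resampler D: (D @ x.ravel()) samples image x at the warped
          coordinates, and D.T is the corresponding adjoint operator.

          warp  : (2, ny, nx) source coordinates (row, col) for every output voxel
          shape : (ny, nx) shape of the source image
          """
          ny, nx = shape
          out_idx = np.tile(np.arange(ny * nx), 4)
          r, c = warp[0].ravel(), warp[1].ravel()
          r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
          dr, dc = r - r0, c - c0
          weights, src_idx = [], []
          for oy, ox, w in [(0, 0, (1 - dr) * (1 - dc)), (0, 1, (1 - dr) * dc),
                            (1, 0, dr * (1 - dc)), (1, 1, dr * dc)]:
              rr = np.clip(r0 + oy, 0, ny - 1)
              cc = np.clip(c0 + ox, 0, nx - 1)
              src_idx.append(rr * nx + cc)
              weights.append(w)
          D = coo_matrix((np.concatenate(weights), (out_idx, np.concatenate(src_idx))),
                         shape=(ny * nx, ny * nx)).tocsr()
          return D   # forward: D @ x.ravel(); adjoint: D.T @ y.ravel()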
  • Forward-Fourier Motion-Corrected Reconstruction for Free-Breathing Liver DCE-MRI
    Sihao Chen1, Cihat Eldeniz1, Weijie Gan1, Ulugbek Kamilov1, Tyler Fraum1, and Hongyu An1
    1Washington University in St. Louis, Saint Louis, MO, United States
    We proposed a forward-Fourier motion-corrected DCE incorporating motion derived from a Phase2Phase network reconstructed 4D MRI. This approach allows for free-breathing continuous motion-free DCE with high temporal resolution on severely undersampled data.
    Figure 3. Temporal dynamics of all 17 DCE contrasts reconstructed using FF MoCo P2P MVF on the patient shown in Figure 1-2. Each contrast was reconstructed using 20 seconds of non-overlapping data.
    Figure 2. Example contrast 1 pre-injection 3D MoCo images reconstructed using Spatial MoCo and FF MoCo with MCNUFFT, CS and P2P MVF.
Digital Poster Session - Motion: Methods
Acq/Recon/Analysis
Monday, 17 May 2021 17:00 - 18:00
  • DeepResp: Deep Neural Network for respiration-induced artifact correction in 2D multi-slice GRE
    Hongjun An1, Hyeong-Geol Shin1, Sooyeon Ji1, Woojin Jung1, Sehong Oh2, Dongmyung Shin1, Juhyung Park1, and Jongho Lee1
    1Department of Electrical and computer Engineering, Seoul National University, Seoul, Korea, Republic of, 2Division of Biomedical Engineering, Hankuk University of Foreign Studies, Gyeonggi-do, Korea, Republic of
    A new deep-learning method for correcting respiration-induced artifacts is presented. The method extracts the B0 fluctuation from a GRE image, providing a reliable correction. The results show successful correction of the artifacts.
    Figure 1. Overview of DeepResp. (a) Overview of DeepResp. DeepResp is designed to extract the phase error from a GRE image. (b) Architecture of a two-stage neural network in DeepResp. The first stage extracts the differential values of the phase errors. The second stage accumulates the differential values, generating the phase errors.
    Figure 2. In-vivo results of DeepResp in deep breathing. (a) Two slices (first and third rows) and their zoomed-in images (second and fourth rows) are shown. Artifacts (red and yellow arrows) are clearly reduced in DeepResp- and navigator-corrected images. The phase errors show very high correlations between the results of DeepResp (red line) and the navigator (black line). (b) Quantitative metrics of NRMSE report improvements in all subjects (red: DeepResp-corrected; blue: uncorrected images)
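    For readers unfamiliar with the artifact model: the respiration-induced B0 fluctuation adds one phase value per phase-encoding line, and once those phases are known (here estimated by DeepResp, or by a navigator), correction amounts to removing them in k-space. A minimal sketch, assuming single-slice Cartesian k-space with phase encoding along axis 0:

      import numpy as np

      def apply_line_phase(kspace, phase_per_line):
          """Multiply each phase-encoding line by exp(i*phase); pass -phase to correct.

          kspace         : complex (n_pe, n_fe) k-space of one slice/echo
          phase_per_line : (n_pe,) phase error in radians per PE line
          """
          return kspace * np.exp(1j * np.asarray(phase_per_line))[:, None]

      # once the per-line phases are estimated:
      # k_corrected = apply_line_phase(k_corrupted, -estimated_phase)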
  • Retrospective motion correction for Fast Spin Echo based on conditional GAN with entropy loss
    Qingjia Bao1, Yalei Chen2, Pingan Li2, Kewen Liu2, Zhao Li3, Xiaojun Li2, Fang Chen3, and Chaoyang Liu3
    1Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot, Israel, 2School of Information Engineering, Wuhan University of Technology, Wuhan, China, 3State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Center for Magnetic Resonance, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan, China
    We propose a new end-to-end motion correction method for FSE sequences, based on a conditional generative adversarial network (GAN) and minimization of MRI image entropy.
    FIGURE 1 (a) The overall architecture of the proposed cGAN-based method. (b) The discriminator framework, consisting of 5 cascaded convolution layers. (c) The generator framework, containing 5 encoder blocks, 5 decoder blocks, and 7 cascaded ResBlocks. (d) The entropy curve, showing that the entropy increases as the motion amplitude increases.
    FIGURE 3 Mild and strong motion correction results of various methods on a multi-shot FSE sequence. The first row shows the mild-motion results and the third row the strong-motion results. The columns from left to right show reference images, motion-affected images, and motion-corrected images using TV Denoiser, DnCNN, UNET, and our method, respectively. The second and fourth rows present the absolute error maps corresponding to the first and third rows. For each motion-corrected image, quantitative metrics (PSNR, SSIM, MSE) are given in comparison to the motion-free reference.
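    The entropy referred to in (d) is typically the Shannon entropy of the normalized image magnitude, which grows as motion ghosting spreads signal energy across the image; a minimal version of one common formulation (an assumption, not necessarily the authors' exact loss):

      import numpy as np

      def image_entropy(img, eps=1e-12):
          """Shannon entropy of the normalized magnitude image (lower = sharper)."""
          p = np.abs(img).ravel().astype(float)
          p = p / (p.sum() + eps)
          return float(-(p * np.log(p + eps)).sum())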
  • Motion correction in MRI with large movements using deep learning and a novel hybrid loss function
    Lei Zhang1, Xiaoke Wang1, Michael Rawson2, Radu Balan3, Edward H. Herskovits1, Linda Chang1, Ze Wang1, and Thomas Ernst1
    1Department of Diagnostic Radiology & Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, United States, 2Department of Mathematics, University of Maryland, College Park, MD, United States, 3Department of Mathematics and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD, United States
    We developed a novel deep learning approach for correction of large movements in brain MRI. The proposed method improved image quality compared with the motion corrupted images in terms of a quantitative metric and visual assessment by experienced readers.
    Rotation plus translation results. The first row of each subfigure contains clean image, corrupted image, motion correction result of L1, motion correction result of L1+TV, and motion correction result of L1+TV ft, respectively. The second row of each subfigure shows motion trajectory, residual image between corrupted image and clean image, residual image between motion correction result of L1 and clean image, residual image between motion correction result of L1+TV and clean image, and residual image between motion correction result of L1+TV ft and clean image, respectively.
    (a) SSIM (mean ± std) of the motion-corrupted images, L1, L1+TV, and L1+TV-ft, respectively. (b) The proposed method successfully reduced the effect of motion. SSIM against the reference image is plotted as a measure of image quality (0 is the lowest, 1 is the highest). The quality of the MC-Net prediction is similar for input images with only rotational motion and those with both rotational and translational motion.
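    For completeness, a minimal numpy version of the hybrid loss family compared above (L1 data fidelity plus total variation of the prediction); the weight lam is an illustrative assumption, not the authors' value:

      import numpy as np

      def l1_tv_loss(pred, target, lam=1e-3):
          """L1 fidelity plus anisotropic total variation of the prediction (2D images)."""
          l1 = np.mean(np.abs(pred - target))
          tv = (np.mean(np.abs(np.diff(pred, axis=0)))
                + np.mean(np.abs(np.diff(pred, axis=1))))
          return l1 + lam * tv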
  • Fast abdominal T2 weighted PROPELLER using deep learning-based acceleration of parallel imaging
    Motohide Kawamura1, Daiki Tamada1, Masahiro Hamasaki2, Kazuyuki Sato2, Tetsuya Wakayama3, Satoshi Funayama1, Hiroyuki Morisaka1, and Hiroshi Onishi1
    1Department of Radiology, University of Yamanashi, Chuo, Japan, 2Division of Radiology, University of Yamanashi Hospital, Chuo, Japan, 3MR Collaboration and Development, GE Healthcare, Hino, Japan
    We developed a framework for fast PROPELLER abdominal T2W imaging. Deep learning enabled a higher parallel imaging factor than SENSE reconstruction, suggesting that PROPELLER scan times can be reduced.
    Reconstructed images with CG-SENSE and the proposed method, and ground-truth images. Compared to CG-SENSE, noise is reduced by the proposed method. (A) CG-SENSE, (B) the proposed method, (C) ground truth. (D)-(F) Magnified images of the solid boxes in (A)-(C), respectively.
    Schemas of parallel imaging (PI) reconstruction of blade images for our accelerated PROPELLER. The neural network achieves a higher PI factor than the standard SENSE reconstruction, enabling faster acquisition.
  • Deep Learning-Based Respiratory Navigator Echo (DLnav) for Robust Free-Breathing Abdominal MRI
    Yuji Iwadate1, Atsushi Nozaki1, Shigeo Okuda2, Tetsuya Wakayama1, and Masahiro Jinzaki2
    1Global MR Applications and Workflow, GE Healthcare Japan, Hino, Japan, 2Department of Radiology, Keio University School of Medicine, Tokyo, Japan
    We propose a deep learning-based respiratory navigator (DLnav) technique that uses a convolutional neural network. DLnav resulted in good synchronization with the actual respiratory motion and reduced motion-induced blurring.
    FIG. 1: Schematic description of DLnav processing.
    FIG. 3: Respiratory waveforms and signal-to-noise ratio of respiratory-like frequencies (SNRR) calculated with the conventional (green) and DLnav (yellow) methods. a, b: Respiratory waveforms overlaid on the navigator signals with the tracker at the normal position (a) and the tracker on the heart (b). c: SNRR with the tracker at the normal position (Normal) and the tracker positioned on the heart (Heart). DLnav showed good synchronization with the actual respiratory motion even with the suboptimal tracker position on the heart.
  • Deep Learning-Based Rigid-Body Motion Correction in MRI using Multichannel Data
    Miriam Hewlett1,2, Ivailo E Petrov2, and Maria Drangova1,2
    1Medical Biophysics, Western University, London, ON, Canada, 2Robarts Research Institute, London, ON, Canada
    Motion correction in single-channel images prior to coil combination improved performance compared to motion correction on coil-combined images. Simultaneous motion correction of all multichannel data produced the worst result, likely due to the model's limited capacity.
    Figure 3. Mean absolute error (MAE, mean and standard deviation) and structural similarity index (SSIM, mean and standard deviation) comparing the ground truth results to the uncorrected images (black), as well as images corrected with the combined (yellow), single-channel (blue), and multichannel (red) models. All differences are significant (p < 0.05).
    Figure 4. Example images for each contrast; T2-weighted (top), T1-weighted (middle), and FLAIR (bottom). On the left are the ground truth images, and on the right those containing simulated motion artefacts. The centre three images are those corrected with the combined, single-channel, and multichannel models (from left to right).
  • Prospective motion assessment within multi-shot imaging using coil mixing of the data consistency error and deep learning
    Julian Hossbach1,2,3, Daniel Nicolas Splitthoff3, Bryan Clifford4, Daniel Polak3, Stephan F. Cauley5, and Andreas Maier1
    1Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany, 2Erlangen Graduate School in Advanced Optical Technologies, Erlangen, Germany, 3Siemens Healthcare GmbH, Erlangen, Germany, 4Siemens Medical Solutions, Boston, MA, United States, 5Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, United States
    Using a small motion-free scout, our method can prospectively detect and assess patient motion. A neural network is trained to score the motion based on a coil-mixing error matrix. We show that this score can be used to remove or reacquire the N most affected echo trains (ET) to improve image quality.
    Fig. 5: Reconstruction of the motion corrupted k-space by removing or replacing the highest ranked ET by the NN (RMSE: red; SSIM yellow). The curve in the plot below shows the ground truth motion; the numbers represent the ranking of the motion severity obtained by the NN. For simplicity only rotation is simulated in this example.
    Fig. 3: Structure of the Neural network with respective output sizes (left) and the visualization of the MS (right).
  • Retrospective motion compensation for spiral brain imaging with a deep convolutional neural network
    Quan Dou1, Zhixing Wang1, Xue Feng1, John P. Mugler2, and Craig H. Meyer1
    1Biomedical Engineering, University of Virginia, Charlottesville, VA, United States, 2Radiology & Medical Imaging, University of Virginia, Charlottesville, VA, United States
    A deep convolutional neural network was implemented to retrospectively compensate for motion in spiral imaging. The network was trained on images with simulated motion artifacts and tested on both simulated and in vivo data. The image quality was improved after the motion correction.
    Figure 1. Motion simulation strategy for spiral sampling (A), and network architecture adopted in this study (B).
    Figure 2. Representative motion compensation results on simulated data, from subjects 1 (A), 2 (B), and 3 (C).
  • Rigid motion artifact correction in multi-echo GRE using navigator detection and Parallel imaging reconstruction with Deep Learning
    Seul Lee1, Jae-Hun Lee1, Soozy Jung1, and Dong-Hyun Kim1
    1Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea, Republic of
    We propose a rigid motion artifact correction framework, which eliminates the motion-corrupted phase encoding lines detected by navigator echoes and reconstructs motion-compensated images using parallel imaging with deep learning. 
    Figure 1. Proposed rigid motion correction process.
    Figure 4. The resultant mGRE images corrected from real motion-corrupted images.
  • Motion Correction and Registration Networks for Multi-Contrast Brain MRI
    Jongyeon Lee1, Byungjai Kim1, Wonil Lee1, and HyunWook Park1
    1Korean Advanced Institute of Science and Technology, Daejeon, Korea, Republic of
    Our proposed motion correction method for multi-contrast brain MRI utilizes the registration network for fast image alignment and the multi-output network for motion correction of multi-contrast MR images. The proposed framework successfully reduces motion artifacts of all contrasts.
    Figure 1: a) An overview of the proposed method. The input images can contain motion-free images and motion-corrupted images. The output images are the motion-corrected images if the corresponding input images are motion-corrupted, whereas the outputs are almost the same as the input images if the corresponding input images are motion-free. b) The detailed framework of the proposed network with the network hyperparameters.
    Figure 3: Example images of the baseline experiment for a) the synthesized test data and b) the real motion data in T1w, T2w, and FLAIR correction cases. SSIM and NRMSE scores are written on the motion-corrected images. For the real motion test, the red and blue boxes highlight the performance of the proposed method.
  • Learning-based automatic field-of-view positioning for fetal-brain MRI
    Malte Hoffmann1,2, Daniel C Moyer3, Lawrence Zhang3, Polina Golland3, Borjan Gagoski1,4, P Ellen Grant1,4, and André JW van der Kouwe1,2
    1Department of Radiology, Harvard Medical School, Boston, MA, United States, 2Department of Radiology, Massachusetts General Hospital, Boston, MA, United States, 3Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, United States, 4Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, MA, United States
    Acquiring standard sagittal, coronal and axial planes with fetal brain MRI is challenging as frequent pose changes result in arbitrary orientations. We present a machine learning driven system that automatically prescribes standard orthogonal planes and demonstrate its utility in-vivo.
    Figure 5. In-vivo evaluation in fetuses at 32 and 31 weeks' gestation. The fetal brain is arbitrarily oriented relative to the planes of the MRI scanner. The network detects the brain, left (red) and right (green) eyes from the full-uterus scout, oriented along the device axes (sagittal to the mother). From these landmarks, we derive the head pose. The target sequence automatically acquires standard sagittal, coronal and axial views of the brain. We repeat the scout before each anatomical acquisition. The second fetus required several attempts due to substantial subject motion.
    Figure 2. Automatic field-of-view prescription. First, the operator acquires a rapid full-uterus scout. Second, an external laptop receives the scout via TCP and hosts the network that detects the fetal brain and eyes. Third, we construct an anatomical basis from these landmarks. Finally, the target sequence receives the anatomical frame and acquires sagittal, coronal or axial slices of the brain according to the operator’s selection. All communications are automatic, and the procedure can be repeated as needed to acquire different views or to respond to fetal head-pose changes.
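    A geometric illustration of the third step in Figure 2 (not the authors' code): an anatomical frame can be constructed from the detected brain centroid and the two eye centroids using cross products. All names below are assumptions.

      import numpy as np

      def anatomical_frame(brain, eye_left, eye_right):
          """Right-handed orthonormal basis from three landmarks (scanner coordinates, mm).

          Returns a 3x3 matrix whose columns approximate the left-right,
          posterior-anterior and inferior-superior axes of the fetal head.
          """
          lr = eye_right - eye_left                    # left -> right axis
          lr = lr / np.linalg.norm(lr)
          pa = (eye_left + eye_right) / 2.0 - brain    # brain centre -> mid-eye point
          pa = pa - lr * np.dot(pa, lr)                # orthogonalize against lr
          pa = pa / np.linalg.norm(pa)
          si = np.cross(lr, pa)                        # completes the right-handed frame
          return np.stack([lr, pa, si], axis=1)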
  • Computer vision object tracking for MRI motion estimation
    Stefan Wampl1, Tito Körner1, Martin Meyerspeer1, Marcos Wolf1, Maxim Zaitsev1, and Albrecht Ingo Schmid1
    1High Field MR Center, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
    The capabilities of established computer-vision object-tracking algorithms on MR images are demonstrated. The image-based object trackers available in OpenCV are well suited to retrospective and prospective motion correction methods.
    Object tracking on (a) a coronal and (b) a parasagittal image series of the abdomen and thoracic borders (3T, TrueFISP, FOV 384x384 mm2, 256x256 matrix, TR/TE 360/1.46) with bounding boxes independently tracking different landmarks on the images (diaphragm, liver, spleen, kidney). (c) Object tracking performed on a CINE acquisition of the heart. The target is missed during end-systole (red boxes) when the image features change, but the tracker re-establishes the position in diastole.
    Tracking of the heart on a series of fast low-resolution image navigators (sagittal, 256 repetitions, every 8th displayed) as appropriate for prospective motion correction of cardiac MR spectroscopy. The bounding box follows the heart well during several respiratory cycles, with heavy breathing starting from repetition 100. The detected in-plane motion, as indicated by the series of arrows on top, is exaggerated by a factor of 3 for visualization purposes.
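    A minimal usage sketch of the OpenCV tracking API on a magnitude image series (the CSRT tracker and the 8-bit conversion are illustrative choices, not necessarily the authors' settings; CSRT requires the opencv-contrib build):

      import cv2
      import numpy as np

      def track_roi(frames, init_box):
          """Track a bounding box through a series of 2D magnitude images.

          frames   : iterable of 2D float arrays (e.g. image navigators)
          init_box : (x, y, w, h) in pixels on the first frame
          Returns one (x, y, w, h) tuple per frame (None where tracking failed).
          """
          def to_bgr(im):
              im32 = np.asarray(im, dtype=np.float32)
              im8 = cv2.normalize(im32, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
              return cv2.cvtColor(im8, cv2.COLOR_GRAY2BGR)

          frames = iter(frames)
          tracker = cv2.TrackerCSRT_create()
          tracker.init(to_bgr(next(frames)), init_box)
          boxes = [init_box]
          for im in frames:
              ok, box = tracker.update(to_bgr(im))
              boxes.append(tuple(int(v) for v in box) if ok else None)
          return boxes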
  • Listening in on the Pilot Tone: A Simulation Study
    Mario Bacher1,2, Barbara Dornberger2, Jan Bollenbeck2, Matthias Stuber1, and Peter Speier2
    1Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland, 2Siemens Healthcare Magnetic Resonance, Erlangen, Germany
    Electromagnetic simulations of the Pilot Tone navigator are presented which can be used to better understand the complex location dependent characteristics of the received motion signals.
    Figure 1: Geometry of the virtual phantom: The volume “Field sensor volume” was voxelized at $$$0.8\,mm$$$ isotropic resolution. Voxel size outside this volume was increased automatically where possible. The generator loop is labeled as a white circle. The blue plane represents the analysis plane $$$\Sigma_{ant}$$$. Red and green circles show the location of virtual coils $$$L_{R1}$$$ and $$$L_{R2}$$$ used in the further analysis. Boundary volume was set automatically to $$$(1366\times1217\times1250)\,mm^3$$$ using uniaxially perfectly matched layer (UPML) condition.
    Figure 2: Cardiac volume ground truth (a) and simulated received time-signals $$$u[t_n]$$$ in the virtual coils $$$L_{R1}$$$ (b) and $$$L_{R2}$$$ (c) plotted as modulation depth, i.e. change relative to the mean signal in percent. The approximate position of ECG R-peak is shown in orange in (a). Black arrows in (a) show periods of contraction (systole) and expansion (diastole) in the cardiac volume curve.
  • Simulating the use of active magnetic markers for motion correction using NMR field probes on a 7T scanner
    Laura Bortolotti1 and Richard Bowtell1
    1Sir Peter Mansfield Imaging Centre, University of Nottingham, Nottingham, United Kingdom
    Using simulations, we show that changes in head pose can be accurately estimated from measurements of the fields at 16 field probe positions due to currents pulsed in small coils attached to the head. 
    The figure shows the simulated experimental set-up. Two coils are fixed onto the end-pieces of a pair of glasses. Each solenoidal coil was composed of 100 turns of 0.25mm- diameter wire arranged in 10 layers radially, each formed from 10 turns. The left and right coils are oriented along the y- and z-axes, respectively. The z-component of the magnetic field from the coils is monitored using 16 field probes spanning 21 cm axially and 22 cm azimuthally.
    Coil positions (top) and z-component of the magnetic field4 on a cylindrical surface (bottom, BC) (0.3A coil current). Probe positions are highlighted using circles (top) or line crossings (bottom). The field at the initial head position is shown (left), along with the field changes produced by head pose changes during head nodding (middle) and head shaking (right) (motion parameters shown in Figure 3). Complex patterns of field change give high sensitivity to differences in motion.
  • Perceptual motion scoring: An algorithm for automated detection and grading of MRI motion artifacts
    Rafael Brada1, Michael Rotman1, Sangtae Ahn2, and Christopher J. Hardy2
    1GE Research, Herzliya, Israel, 2GE Research, Niskayuna, NY, United States
    Using the k-space data from two coil-array elements, a motion-artifact severity score that matches human perception can be calculated. The motion score was tested on nine T1-FLAIR FSE brain series against human observers’ scores, obtaining an R2 value of 0.91.
    Figure 4. a) Correlation between the calculated motion score and the user score for the development set. b) Correlation between the calculated motion score and the user score for the test set, with an R2 value of 0.91
    Figure 5. Sample images taken from each of the nine series in the test set. U = user-assigned score; A = algorithm-assigned score.
  • Radial Navigator (radNAV) for Rapid GRE (Turbo-FLASH) Sequence
    Zhe Wu1, Lars Kasper1, and Kamil Uludag1,2,3
    1Techna Institute, University Health Network, Toronto, ON, Canada, 2Koerner Scientist in MR Imaging, University Health Network, Toronto, ON, Canada, 3Center for Neuroscience Imaging Research, Institute for Basic Science and Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Korea, Republic of
    Radial navigator (radNAV) is easy to implement on most kinds of sequences with minimal increase in TE/TR and without the need for non-Cartesian gradients or coil sensitivity extrapolation.
    Figure 3 Three slices from axial, coronal and sagittal planes to demonstrate the motion artifacts and the effect of motion correction using radNAV. After correction, the least-squares error decreased from 16.96% to 12.44% and the structural similarity index (SSIM) increased from 83.75% to 89.28%. Red arrows indicate the motion artifacts before/after the correction.
    Figure 1 A: Sequence diagram for turbo-FLASH with radNAV; B and C: k-space trajectories for one navigator frame. B: sequentially ordered spokes, with 15 (black dots) and 30 (red circles) spokes evenly distributed in 3D space per temporal period. C: 15 (black dots) and 30 (red circles) 3D golden-angle spokes. Changing the number of spokes per navigator frame completely alters the k-space trajectory for sequential ordering (B), whereas the 3D golden-angle spokes (C) overlap for different spoke numbers, suggesting that a sliding-window method can be used to increase the temporal resolution of the navigator frames.
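    The overlap property noted for (C) follows from the two-golden-means construction of 3D radial spoke directions (Chan et al.); a minimal generator, which may differ from the authors' exact ordering:

      import numpy as np

      def golden_angle_spokes_3d(n_spokes):
          """Unit direction vectors for 3D golden-angle radial spokes.

          Uses the 2D golden means phi1≈0.4656 and phi2≈0.6823 so that any
          contiguous subset of spokes covers k-space roughly uniformly.
          """
          phi1, phi2 = 0.4656, 0.6823
          m = np.arange(n_spokes)
          kz = np.mod(m * phi1, 1.0)                  # cos(polar angle), in [0, 1)
          azimuth = 2.0 * np.pi * np.mod(m * phi2, 1.0)
          sin_polar = np.sqrt(1.0 - kz ** 2)
          return np.stack([sin_polar * np.cos(azimuth),
                           sin_polar * np.sin(azimuth),
                           kz], axis=1)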
  • Incremental motion correction (iMoCo) for fast retrospective image reconstruction with reduced motion artifacts
    Anuj Sharma1, Samir D Sharma1, and Andrew J Wheaton1
    1Magnetic Resonance, Canon Medical Research USA, Inc., Mayfield Village, OH, United States
    We propose a method for retrospective rigid-body motion correction that uses a subset of imaging data to create a reference image. We demonstrate that the proposed method reduces the reconstruction time by a factor of 2 over the conventional method.
    Figure 1: (a) Joint correction alternates between image and motion estimation for all shots. (b) Incremental correction estimates the initial image using shots with similar motion. Then motion for each shot is sequentially estimated followed by image update. (c) Initial image used in incremental correction has much lower artifacts than the initial image used in joint correction. The final estimate in incremental correction provides the motion corrected image.
    Figure 2: Results from the simulation study. (a) iMoCo was able to recover the image without significant residual artifacts. (b) Images from joint correction improved as the number of iterations was increased; however, residual artifacts were seen (arrows) even after 80 iterations because the solver had not converged, as indicated by the data-consistency error plot in (c).
  • SMS-EPI real-time motion correction by receiver phase compensation and coil sensitivity interpolation
    Bo Li1, Ningzhi Li2, Ze Wang1, and Thomas Ernst1
    1Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, Baltimore, MD, United States, 2U.S. Food Drug Administration, Silver Spring, MD, United States
    By utilizing spatial sensitivity interpolation, the split slice-GRAPPA and SENSE techniques are able to separate slices from collapsed multi-slice images during real-time motion correction for SMS-EPI sequences.
    Fig. 3. SMS-EPI reconstruction with SSG and SENSE using the original coil sensitivity maps (oCSM) and updated coil sensitivity maps (uCSM) at maximum motion velocity (a), and the corresponding motion parameters for a single TR period (b). Images reconstructed without uCSM exhibit aliasing artifacts, as indicated in zoomed views (red boxes). Conversely, reconstructions with uCSM show clearer inner structure and boundaries as well as far fewer aliasing artifacts.
    Fig. 4. SSG and SENSE reconstructions of stationary phantom (x/y/z-translation -2.4/6.4/21.5mm; x/y/z-rotation 10.5°/5.2°/-0.5°). Residual aliasing artifacts appear in most slices of the SSG with oCSM (white arrows), whereas the SSG with uCSM show reduced artifacts and clearer inner structure and boundaries. SENSE images with oCSM show aliasing artifacts (red boxes), while uCSM improves delineation of structures and eliminates slice-aliasing artifacts.
  • Self-navigating 3D-EPI Sequence for Prospective Motion Correction
    Samuel Getaneh Bayih1, Ernesta Meintjes1, Marcin Jankiewicz1, and Andre van der Kouwe 2,3
    1MRT/UCT Medical Imaging Research Unit, Department of Human Biology, University of Cape Town, Cape Town, South Africa, 2Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States, 3Department of Radiology, Harvard Medical School, Boston, MA, United States
    Volumetric self-navigators constructed from a subset of partitions during repeated 3D-EPI acquisitions are able to accurately detect and correct motion in real time, without additional pulses or hardware, enabling motion-robust 3D-EPI functional MRI.
    Figure 1: Self-navigating 3D-EPI sequence
    Figure 2: Volumetric images acquired (a) before motion occurred (reference volume), (b) when motion occurred, and (c) the next volume after motion occurred.
Digital Poster Session - Motion: Brain & Body
Acq/Recon/Analysis
Monday, 17 May 2021 17:00 - 18:00
  • Prospective motion-corrected three-dimensional multiparametric mapping of the brain
    Shohei Fujita1,2, Naoyuki Takei3, Akifumi Hagiwara1, Issei Fukunaga1, Dan Rettmann4, Suchandrima Banerjee5, Ken-Pin Hwang6, Shiori Amemiya2, Koji Kamagata1, Osamu Abe2, and Shigeki Aoki1
    1Department of Radiology, Juntendo University, Tokyo, Japan, 2Department of Radiology, The University of Tokyo, Tokyo, Japan, 3MR Applications and Workflow, GE Healthcare, Tokyo, Japan, 4MR Applications and Workflow, GE Healthcare, Rochester, MN, United States, 5MR Applications and Workflow, GE Healthcare, Menlo Park, CA, United States, 6Department of Radiology, MD Anderson Cancer Center, Houston, TX, United States
    High linearity of T1 and T2 values in a phantom was obtained with and without motion correction. The repeatability and accuracy of T1 and T2 quantification were improved under in-plane and through-plane motions.
    Figure 3. Representative motion tracking and quantitative maps of a healthy volunteer with intentional in-plane (“side-to-side”) head motions. The head motion was rigidly tracked in three translational and three rotational directions. (a) Motion tracking time curve of translations and rotations in the x-y-z coordinate system. (b) T1 and T2 maps acquired with the proposed method and those without motion correction are shown.
    Figure 5. Effect of motion correction on regional quantitative values. (a) Bland-Altman plots representing the bias of scans with motion with and without motion correction compared with scans without motion as references. Data points with motion correction are closer to zero than those without, indicating smaller bias achieved by motion correction. (b) Coefficients of variation (CV) represent the repeatability of the scans. In both T1 and T2, within-subject CVs were smaller, indicating higher repeatability, in motion-corrected scans than those without motion correction.
  • 3D rigid motion correction for navigated interleaved simultaneous multi-slice DWI
    Malte Riedel (né Steinhoff)1, Kawin Setsompop2,3,4, Alfred Mertins1, and Peter Börnert5,6
    1Institute for Signal Processing, University of Lübeck, Lübeck, Germany, 2Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States, 3Department of Radiology, Harvard Medical School, Boston, MA, United States, 4Harvard‐MIT Health Sciences and Technology, MIT, Cambridge, MA, United States, 5Philips Research, Hamburg, Germany, 6Department of Radiology, C.J. Gorter Center for High-Field MRI, Leiden University Medical Center, Leiden, Netherlands
    The proposed method offers navigated retrospective 3D rigid motion correction per EPI-shot for interleaved SMS brain DWI. Simulations confirm small submillimeter target registration errors. In-vivo DTI results show improved image quality at a high temporal resolution of 3 Hz.
    Figure 4: In-vivo examples from 4-interleave 3-SMS full-volume DWI reconstructions. a: SMS-IRIS and MoSaIC examples. b: Shot-wise shift and rotation parameters for MoSaIC (see color code). In the static case (blue), the parameters are almost constant and the images appear similar. The SMS-IRIS images of the next two examples (orange and green) are visibly blurred by in-plane motion, which is improved by MoSaIC. The last two SMS-IRIS images (red and purple) are mainly affected by through-plane motion, which is detected as x-rotations and y-shifts and improved by MoSaIC.
    Figure 1: MoSaIC scheme for navigated DWI reconstruction per diffusion direction including 3D rigid motion correction. SMS navigators are unfolded using 2D-SENSE. The shots from the first diffusion-weighted TR are assumed motion-free and stacked to the reference volume. The navigator SMS groups are registered by SMS2Vol registration, the shot diffusion phases are extracted and rejection criteria are calculated. All shot parameters are included into a motion-corrected full-volume reconstruction combining the high-resolution image data.
  • Cortical Mapping and T1-Relaxometry using Motion Corrected MPnRAGE: Test-Retest Reliability with and without Motion Correction
    Steven Kecskemeti1, Abigail Freeman1, and Andrew L Alexander1
    1University of Wisconsin-Madison, Madison, WI, United States
    Retrospectively motion-corrected MPnRAGE demonstrated high test-retest reliability of R1 relaxometry of the cortex, as well as of FreeSurfer measures of cortical thickness, surface area, and volume. High test-retest reliability was found in pediatric subjects with 100% data acceptance.
    Figure 1: Example images and motion estimations from a 7.6 year old typically developing female demonstrating MPnRAGE motion correction for moderate to severe motions. Without motion correction, the T1-weighted and quantitative R1 images have blurred tissue boundaries.
    Figure 3. The regional surface maps for the coefficients of variation x 100 without (top) and with (bottom) motion correction.
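    The within-subject coefficient of variation mapped in Figure 3 is simply the test-retest standard deviation divided by the mean (x 100 for percent); for reference:

      import numpy as np

      def coefficient_of_variation(repeats, axis=0):
          """CoV in percent across repeated scans; repeats has shape (n_repeats, ...)."""
          m = np.asarray(repeats, dtype=float)
          return 100.0 * m.std(axis=axis, ddof=1) / m.mean(axis=axis)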
  • Prospective Motion Corrected Time-of-flight MR Angiography at 3T
    Xiaoke Wang1, Edward Herskovits1, and Thomas Ernst1
    1Diagnostic Radiology, University of Maryland-Baltimore, Baltimore, MD, United States
    In this study, MRA with optical prospective motion correction (PMC) was tested at 3T in a phantom and in a healthy volunteer and compared with MRA without PMC. The results demonstrate the potential of optical PMC to improve the quality of MRA in patients who have difficulty holding still.
    Figure 5. Axial maximum intensity projections (MIPs). The MIPs are of very high quality with detailed depiction of distal vessels and excellent background suppression in the cases of no intentional motion (A and B). The trained motion causes clear misregistration between vessels in the brain (C, red arrow). Further, many distal vessels are lost on the motion corrupted image (C, yellow arrow), and there are artifactual stenoses (C, green arrow). With prospective motion correction, the misregistration and the distal vessels are mostly recovered, and pseudostenosis is corrected.
    Figure 4. Sagittal (A, B, C, D) and coronal (E,F,G,H) maximum intensity projections (MIPs). Without intentional motion (A, B, E, F), activation of PMC does not substantially alter image quality. With trained motion, some vessels in the middle slab are obscured (yellow arrows) (C,G) without PMC. There are artifactual stenoses between slabs (green arrows). There is also misalignment between slabs (G, red arrow) and discontinuous major vessels, e.g. the middle cerebral arteries (G). In comparison, the obscured vessels are recovered when PMC was on and misalignment alleviated (D, H).
  • Dual-echo volumetric navigators for field mapping and shim correction in MR neuroimaging
    Alan Chu1, Yulin Chang2, André J. W. van der Kouwe3, and M. Dylan Tisdall1
    1Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States, 2Siemens Medical Solutions USA, Inc., Malvern, PA, United States, 3Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
    We demonstrate the feasibility and validity of field mapping using a dual-echo vNav based on a fly-back EPI readout.
    Sagittal field maps generated from the multi-echo FLASH and dual-echo vNav acquisitions shown in Hz. The FLASH field maps were downsampled by a factor of 4 to match the resolution of the vNavs for easier comparison. Note that the field map values have not been unwrapped, so may not be entirely comparable in regions of very high susceptibility due to differences in echo timing between the FLASH and vNav scans.
    Sagittal views of magnitude images from the FLASH and vNav scans, for each of the first and second runs. For the first run, the subject's head was held in a fixed neutral position, and for the second run, the subject's head was held in a fixed upward nodding position.
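    Both field maps follow the standard dual-echo phase-difference relation, delta_f = angle(S2 · conj(S1)) / (2π · ΔTE); a minimal (wrap-prone, as the caption notes) version:

      import numpy as np

      def dual_echo_fieldmap(echo1, echo2, delta_te_s):
          """B0 field map in Hz from two complex echoes separated by delta_te_s seconds.

          The result wraps wherever |delta_f| exceeds 1 / (2 * delta_te_s).
          """
          phase_diff = np.angle(echo2 * np.conj(echo1))
          return phase_diff / (2.0 * np.pi * delta_te_s)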
  • Comprehensive Analysis of FatNav Motion Parameters Estimation Accuracy in 3D Brain Images Acquired at 3T
    Elisa Marchetto1,2, Kevin Murphy1,3, and Daniel Gallichan1,2
    1Cardiff University Brain Research Imaging Centre (CUBRIC), Cardiff University, Cardiff, United Kingdom, 2School of Engineering, Cardiff University, Cardiff, United Kingdom, 3School of Physics, Cardiff University, Cardiff, United Kingdom
    The FatNav motion correction technique is shown to be able to correct for a large range of motion artifacts, for both smooth and rough motion. Even greater robustness is expected if the GRAPPA weights are updated throughout the scan.
    Figure 5. Each coloured region in the plot bounds the range of rotational and translational motion parameters for each evaluation category after FatNav (left) and without motion correction (right), for smooth (top row) and rough (bottom row) motion. FatNavs can correct very well up to an RMS value, averaged along the three axes, of ~3.7°/3mm for smooth and 2°/1.6mm for rough motion (category 4 boundary); without motion correction, image quality drops much more quickly (~1.2°/1mm).
    Figure 1. Comparison between FatNav volume before and after GRAPPA ‘re-reconstruction’, for four different amounts of motion (tables on the left), to simulate effect of mismatched ACS data: parallel imaging artifacts increase with the amount of motion.
  • Head Motion Tracking in MRI Using Novel Tiny Wireless Tracking Markers and Projection Signals
    Liyuan LIANG1, Chim-Lee Cheung2, Ge Fang2, Justin Di-Lang Ho2, Chun-Jung Juan3,4,5, Hsiao-Wen Chung6, Ka-Wai Kwok2, and Hing-Chiu Chang1
    1Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong, Hong Kong, 2Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, Hong Kong, 3Department of Medical Imaging, China Medical University Hsinchu Hospital, Hsinchu, Taiwan, 4Department of Radiology, School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan, 5Department of Medical Imaging, China Medical University Hospital, Taichung, Taiwan, 6Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
    In this study, we evaluated the tracking performance of a novel tiny wireless tracking marker by using a linear motion phantom, and tested the feasibility in omnidirectional 3D head motion tracking using three tiny wireless tracking markers. 
    Figure 1. (a) The setup for the phantom test. (b) Three markers were fixed to a wooden rod, which was then attached to the base plate of the MR motion phantom, which produces smooth linear motion. (c) Left: wireless tracking marker proposed in reference [2]; right: marker used in our method. (d-f) Demonstration of our homemade head strap. Markers were placed in plastic holders and then attached to the headbands.
    Figure 4. Tracking results of in-vivo experiments. (a-c) Measured tracking traces along three directions (LR: left-right, SI: superior-inferior, AP: anterior-posterior) when head shaking was performed. (d-g) Measured positions in 3D space for three markers at two selected time points (red and green markers in Fig.4a).
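    With three (or more) tracked marker positions, the rigid head pose between time points follows from the standard Kabsch/Procrustes solution; a minimal sketch (not the authors' implementation):

      import numpy as np

      def rigid_pose_from_markers(ref, cur):
          """Rotation R and translation t such that cur ≈ ref @ R.T + t.

          ref, cur : (n_markers, 3) marker positions at the reference and current time.
          """
          ref_c = ref - ref.mean(axis=0)
          cur_c = cur - cur.mean(axis=0)
          U, _, Vt = np.linalg.svd(ref_c.T @ cur_c)
          d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = cur.mean(axis=0) - R @ ref.mean(axis=0)
          return R, t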
  • Real-time prospective motion correction of arbitrary MR pulse sequences with XPACE-Pulseq
    Maxim Zaitsev1, Michael Woletz1, and Martin Tik1
    1High Field MR Center, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
    Prospective motion correction using external tracking is feasible for arbitrary pulse sequences stored in Pulseq format.
    Figure 1. An example of a gradient-echo sequence timing diagram. Red dotted lines mark possible block boundaries. RF pulses and ADC events are not allowed to cross block boundaries; however, gradients are not forced to 0 at the boundaries (see blue circles), allowing for efficient gradient waveforms as needed for fast sequences. Block arrows on the top mark possible position update points; filled ones correspond to the currently implemented option.
    Figure 2. In vivo images acquired with a 3D gradient echo sequence programmed in Pulseq in presence of head rotations: (left) without prospective motion correction and (right) with prospective motion correction. See Fig. 3 for corresponding motion traces.
  • Measuring extracranial magnetic field changes due to head motion during multi-slice EPI acquisition
    Laura Bortolotti1 and Richard Bowtell1
    1Sir Peter Mansfield Imaging Centre, University of Nottingham, Nottingham, United Kingdom
    Head motion parameters were successfully predicted from measurements of  extra-cranial field changes made during an EPI scan. 
    Experimental set-up. The NMR field probes were placed between the transmit and receive RF coils. The optical camera (MPT, Kineticor) is fixed to the inside of the magnet bore. The positions of the probes are shown in (b).
    The plots show examples of simultaneous measurements of head motion parameters (top row) and magnetic field (bottom row). The subject performed various head movements (rest, head shaking, head nodding, feet wiggling). Measurements were acquired with and without simultaneous EPI scanning.
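    A simple illustration of the prediction step (the mapping actually used by the authors may differ): fit a ridge-regularized linear model from the probe-field changes to the camera-derived motion parameters on training data, then predict motion from new field measurements.

      import numpy as np

      def fit_field_to_motion(fields, motion, ridge=1e-3):
          """Ridge least-squares mapping from field changes to motion parameters.

          fields : (n_samples, n_probes) field changes, e.g. n_probes = 16
          motion : (n_samples, 6) motion parameters from the camera
          Returns weights W of shape (n_probes + 1, 6), including an intercept row.
          """
          X = np.hstack([fields, np.ones((fields.shape[0], 1))])
          A = X.T @ X + ridge * np.eye(X.shape[1])
          return np.linalg.solve(A, X.T @ motion)

      def predict_motion(fields, W):
          """Apply the fitted mapping to new field measurements."""
          X = np.hstack([fields, np.ones((fields.shape[0], 1))])
          return X @ W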
  • Structure Light based Optical MOtion Tracking system (SLOMO) for Contact-free respiratory Motion Tracking from Neck in MR Imaging
    Chunyao Wang1, Zhensen Chen1, Yishi Wang2, and Huijun Chen1
    1Center for Biomedical Imaging Research, School of Medicine, Tsinghua University, Beijing, China, 2Philips Healthcare, Beijing, China
    This study proposed a parallel line Structure Light based Optical Motion Tracking system (SLOMO) and verified its feasibility in respiratory detection and motion correction in MR liver imaging.
    Fig. 1 The setup of the SLOMO system for abdominal imaging. The SLOMO system consists of a frame holder (a) and an optical module (b). The optical module is a camera-laser system for depth imaging.
    Fig. 4 Reconstruction results of motion-corrupted and corrected images. Liver images were reconstructed into 4 phases according to the respiratory curves detected by the respiratory bellows and by the SLOMO system.
  • Respiratory resolved and corrected 3D $$$\Delta\text{B0}$$$ mapping and fat-water imaging at 7 Tesla
    Sebastian Dietrich1, Johannes Mayer1, Christoph Stephan Aigner1, Christoph Kolbitsch1, Jeanette Schulz-Menger2,3,4, Tobias Schaeffter1,5, and Sebastian Schmitter1,6
    1Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany, 2Charité Medical Faculty University Medicine, Berlin, Germany, 3DZHK partner site Berlin, Working Group on Cardiovascular Magnetic Resonance, Experimental and Clinical Research Center (ECRC), Berlin, Germany, 4Department of Cardiology and Nephrology, HELIOS Klinikum Berlin Buch, Berlin, Germany, 5Department of Medical Engineering, Technische Universität Berlin, Berlin, Germany, 6University of Minnesota, Center for Magnetic Resonance Research, Minneapolis, MN, United States
    Respiratory-resolved and corrected 3D $$$\Delta\text{B0}$$$ mapping for 3D fat-water separated cardiac magnetic resonance imaging at ultra-high field is presented.
    Fig.4: Resulting in vivo fat-water and fat fraction ($$$\text{FF}$$$) images of a coronal slice are shown for the non-respiratory, non-cardiac resolved (NRR) and the respiratory motion-corrected, cardiac-binned (MOCO) reconstruction. The latter is shown in the end-diastolic phase. The red arrows indicate a blood-flow artifact that is visible in NRR and reduced in MOCO. Line plot positions are indicated by the dotted lines (I) and (II), with $$$\text{FF}$$$ increased by up to $$$24\%$$$ for MOCO and blood-flow artifacts reduced by a factor of 2 compared with NRR.
    Fig.5: Fat-water and fat fraction ($$$\text{FF}$$$) images for 3 orthogonal views are shown for all 10 volunteers.
  • Minimizing motion artifacts in myocardial quantitative mapping by combined use of motion-sensitive CINE imaging and FEIR
    Takumi Ogawa1, Michinobu Nagao2, Masami Yoneyama3, Yasutomo Katsumata3, Yasuhiro Goto1, Isao Shiina1, Yutaka Hamatani1, Kazuo Kodaira1, Mamoru Takeyama1, Isao Tanaka1, and Shuji Sakai2
    1Department of Radiological Services, Women's Medical University Hospital, Tokyo, Japan, 2Department of Diagnostic Imaging & Nuclear Medicine, Women's Medical University Hospital, Tokyo, Japan, 3Philips Japan, Tokyo, Japan
    The combined use of Motion-Sensitive (MoSe) CINE imaging for determining an accurate TD setting and fast elastic image registration (FEIR) can minimize the influence of cardiac motion-related artifacts.
    Figure 2. Comparison of MOLLI T1 mapping with and without the MoSe-CINE approach and FEIR. Both the conventional approach and the MoSe-CINE visual approach with FEIR clearly improved accuracy on the T1 confidence map, and the combination of the MoSe-CINE visual approach with FEIR showed the best image quality.
    Figure 1. MoSe-CINE images allow direct visualization of motion-independent cardiac phase timing: the depicted signal decrease due to cardiac motion indicates the best trigger timing for both systolic and diastolic imaging.
  • Model-based motion correction outperforms a model-free method in quantitative renal MRI
    Fotios Tagkalakis1, Kanishka Sharma2, Irvin Teh1, Bashair al-Hummiany1, David Shelley1, Margaret Saysell3, Julie Bailey3, Kelly Wroe3, Cherry Coupland3, Michael Mansfield3, and Steven Sourbron2
    1University of Leeds, Leeds, United Kingdom, 2University of Sheffield, Sheffield, United Kingdom, 3Leeds Teaching Hospitals NHS Trust, St James's Hospital, United Kingdom
    Model-driven registration is faster and more effective than model-free registration for motion correction in multiparametric, quantitative MRI of the kidney.
    Figure 3. Comparison of computational times in minutes per patient (1-10) for GFMR (red) and MDR (blue) methods on T1 (top), DTI (middle) and DCE (bottom).
    Figure 2. Distribution of pixel-based median metrics (one per row) for all 3 contrast mechanisms (one per column) and for each individual subject (horizontal axis). Plots show median +/- standard deviation for uncorrected data (green), GFMR (red) and MDR (blue).
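    As a schematic of the model-driven idea only (not the authors' MDR pipeline), the sketch below alternates a pixel-wise model fit with registration of every frame to its model prediction, so the registration target shares the frame's contrast but not its motion. It is deliberately restricted to in-plane translations via phase correlation, and the model-fit callable (in practice a pixel-wise T1, DTI or DCE signal-model fit) is left to the user.

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def model_driven_registration(frames, fit_model, n_iter=3):
        """Toy model-driven registration loop (translation-only).

        frames    : (T, H, W) dynamic series, e.g. a T1-mapping acquisition
        fit_model : callable returning a (T, H, W) series predicted by the
                    pixel-wise signal model fitted to the current frames
        """
        corrected = np.asarray(frames, dtype=float).copy()
        for _ in range(n_iter):
            target = fit_model(corrected)                     # model-consistent series
            for t in range(corrected.shape[0]):
                d, _, _ = phase_cross_correlation(target[t], corrected[t])
                corrected[t] = nd_shift(corrected[t], d)      # move frame onto its fit
        return corrected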
  • Motion-robust T2-weighted TSE imaging in the prostate by performing non-rigid registration between averages
    Katja Bogner1, Elisabeth Weiland2, Thomas Benkert2, and Karl Engelhard1
    1Institute of Radiology, Martha-Maria Hospital, Nuremberg, Germany, 2MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
    T2-weighted imaging has high relevance in prostate MRI but is prone to motion-induced blurring caused by slight displacements between averages. Non-rigid registration before averaging results in reduced motion artifacts and improves image quality and diagnostic validity. 

    Figure 1: Typical motion in T2-weighted imaging with averaging
    (a-c) single averages with degraded image quality and displacement shifts
    (d) combination of all averages without MOCO
    (e) combination of all averages with MOCO and improved image quality

    Figure 2: Improved image quality of MOCO
    (a) conventional reconstruction (Likert-score 3)
    (b) MOCO (Likert-score 1)
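    The non-rigid registration used in this work is a vendor prototype; purely as an illustrative stand-in, the sketch below aligns every average to the first one with TV-L1 optical flow before averaging. Function names come from scikit-image, and the array shapes are assumptions.

    import numpy as np
    from skimage.registration import optical_flow_tvl1
    from skimage.transform import warp

    def register_and_average(averages):
        """Non-rigidly align each average to the first one before averaging.
        averages : (N, H, W) magnitude images, one per acquired average."""
        ref = averages[0].astype(float)
        rows, cols = np.meshgrid(np.arange(ref.shape[0]), np.arange(ref.shape[1]),
                                 indexing="ij")
        aligned = [ref]
        for mov in averages[1:]:
            v, u = optical_flow_tvl1(ref, mov.astype(float))   # dense displacement field
            aligned.append(warp(mov.astype(float),
                                np.array([rows + v, cols + u]),
                                mode="edge", preserve_range=True))
        return np.mean(aligned, axis=0)                        # motion-compensated average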

  • Motion Correction of Abdominal Diffusion-Weighted MRI Using Internal Motion Vectors
    Michael Bush1, Thomas Vahle2, Uday Krishnamurthy1, Thomas Benkert2, Xiaodong Zhong1, Bradley Bolster1, Paul Kennedy3, Octavia Bane3, Bachir Taouli3, and Vibhas Deshpande1
    1Siemens Medical Solutions USA, Inc., Malvern, PA, United States, 2Siemens Healthcare GmbH, Erlangen, Germany, 3The Department of Radiology and Biomedical Engineering and Imaging Institute, Icahn School of Medicine at Mt. Sinai, New York, NY, United States
    Motion vectors derived from non-rigid registration of low b-value diffusion volumes can be used to correct for motion in higher b-values. Initial results suggest the proposed method can produce images similar in quality to respiratory gating, while maintaining reduced acquisition times.
    Figure 1. Flow diagram of the iMoCo process. Redundant sampling of the b50 volumes results in well-filled motion states, allowing for accurate non-rigid registration. Motion vectors produced by the non-rigid registration are then applied to the remaining b-value volumes, which do not require redundant sampling.
    Figure 3. Motion Phantom (MP) and Healthy Volunteer ADC Maps (Voxel size 1.5x1.5x5.0 mm3, PAT 2, Matrix Size 128x104x35, FOV 380 mm, TR 6.1 s, TE 56 ms, b50-16 avgs, b800-16 avgs, BW 2300 Hz/Px). Motion Phantom diffusion values are most comparable between the gold-standard Gated and iMoCo maps.
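    A minimal sketch of the central step described above: applying displacement fields estimated from the registered b50 motion states to a high-b-value volume of the same state. The field estimation itself and the motion-state sorting are omitted, and the array layout is an assumption.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def apply_motion_vectors(volume, disp_field):
        """Warp a high-b-value volume with the displacement field obtained from the
        non-rigid registration of the b50 volumes of the same motion state.
        volume     : (Z, Y, X) image of one motion state
        disp_field : (3, Z, Y, X) voxel displacements from the b50 registration
        """
        grid = np.indices(volume.shape).astype(float)   # identity sampling grid
        coords = grid + disp_field                      # sample at displaced positions
        return map_coordinates(volume, coords, order=1, mode="nearest")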
  • High Resolution PET/MR Imaging Using Anatomical Priors & Motion Correction
    Mehdi Khalighi1, Timothy Deller2, Floris Jansen2, Mackenzie Carlson3, Tyler Toueg4, Steven Tai Lai1, Dawn Holley1, Kim Halbert1, Elizabeth Mormino4, Jong Yoon1, Greg Zaharchuk1, and Michael Zeineh1
    1Radiology, Stanford University, Stanford, CA, United States, 2Engineering Dept., GE Healthcare, Waukesha, WI, United States, 3Bioengineering, Stanford University, Stanford, CA, United States, 4Neurology, Stanford University, Stanford, CA, United States
    PET image reconstruction with anatomical priors is used within the framework of rigid motion correction for PET/MR brain images to address the co-registration problem between anatomical priors and PET coincident events. The results show improved image resolution in addition to higher SNR.
    Figure 2: Comparison of 11C-UCBJ PET images reconstructed with conventional BSREM (top row), MR-guided BSREM using the anatomical priors without motion correction (middle row), and MR-guided BSREM with motion correction (bottom row). MR-guided BSREM shows better SNR and higher image resolution than the BSREM method; moreover, as shown by the red arrows, incorporating motion correction into MR-guided BSREM (bottom row) results in a sharper image with crisper edges.
    Figure 4: Comparison of 18F-PI2620 PET images reconstructed with conventional BSREM (top row), MR-guided BSREM using the anatomical priors without motion correction (middle row), and MR-guided BSREM with motion correction (bottom row). MR-guided BSREM shows better SNR and higher image resolution than the BSREM method, and as shown by the red arrows (e.g., pituitary gland and right occipital & parietal cortex), incorporating motion correction into MR-guided BSREM (bottom row) results in a sharper image; however, because of the lower counts in this exam, less improvement is observed.
  • Ultra-wide-band radar for respiratory motion correction of T1 mapping in the liver
    Tom Neumann1, Juliane Ludwig1, Kirsten M. Kerkering1, Frank Seifert1, and Christoph Kolbitsch1
    1Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany
    Respiratory motion correction of T1 mapping in the liver based on a calibrated ultra-wide-band radar signal is demonstrated. A linear model was used to predict respiratory motion during data acquisition and improve the image quality of the T1 maps.

    Figure 4: The motion-corrected T1 maps show an increase in image quality. Respiratory blurring, especially at the dome of the liver and around blood vessels, could be reduced, yielding image quality similar to the breath-hold scan. Although the motion model was built for the liver, the visualization of the kidneys is also improved. It should be noted that the breath-hold data cannot be directly compared to the motion-corrected images, because they were acquired in two separate scans.

    Figure 1: Calibration: The M-sequence $$$S_{Tx}$$$ is transmitted by the sending antenna Tx and correlated with the received response $$$S_{Rx}$$$ to create an impulse response $$$R_{xy}$$$. The principal components of $$$R_{xy}$$$ are linearly fitted to the registered changes in a selected region of interest in the dynamic scan. Correction: Based on the motion model, radar signals obtained simultaneously with the T1 mapping sequence are used to predict respiratory motion shifts, which are then utilized during image reconstruction to correct for motion artefacts.
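    A minimal numpy sketch of the calibration and prediction steps as described in the caption: principal components of the radar impulse responses are fitted linearly to a reference respiratory shift, and new impulse responses are then mapped to predicted shifts. Variable names, the number of components, and the shift units are assumptions.

    import numpy as np

    def calibrate_radar_model(impulse_responses, ref_shift_mm, n_comp=3):
        """Fit a linear model from principal components of the radar impulse
        responses to a reference respiratory shift from the calibration scan.
        impulse_responses : (T, L) one impulse response R_xy per time point
        ref_shift_mm      : (T,) registered shift in the region of interest
        """
        mean = impulse_responses.mean(axis=0)
        centered = impulse_responses - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)   # principal directions
        scores = centered @ vt[:n_comp].T                         # (T, n_comp)
        design = np.hstack([scores, np.ones((scores.shape[0], 1))])
        coeffs, *_ = np.linalg.lstsq(design, ref_shift_mm, rcond=None)
        return mean, vt[:n_comp], coeffs

    def predict_shift(impulse_response, mean, components, coeffs):
        """Map a new impulse response to a predicted respiratory shift."""
        score = (impulse_response - mean) @ components.T
        return np.append(score, 1.0) @ coeffs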

  • Detecting Respiratory Motion Using Accelerometer Sensors: Preliminary Insight
    Eddy Solomon1,2, Syed Saad Siddiq1,2, Daniel K Sodickson1,2, Hersh Chandarana1,2, and Leeor Alon1,2
    1Radiology, New York University School of Medicine, New York, NY, United States, 2New York University Grossman School of Medicine, New York, NY, United States
    MRI-compatible accelerometers showed reliable respiratory motion tracking when compared to conventional k-space self-navigation. Their small dimensions and flexible, high sampling rate offer great potential for tracking breathing signals.
    Figure 2. Experimental setup with the accelerometer placed on top of the abdomen.
    Figure 5. 3D view of data binned by k-space center (left column), accelerometer (middle column) and Pilot-Tone RF transmitter (right column). Data binned by the three methods were found to be in good agreement. Additionally, data binned using the accelerometer signal showed fine tissue boundaries (green arrow) and data binned using Pilot-Tone showed finer liver anatomical details (yellow arrows).
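    The abstract does not detail the accelerometer processing chain; as one plausible, hedged sketch, the snippet below band-pass filters the three axes around typical respiratory rates and projects onto the dominant direction to obtain a breathing trace for binning. The cut-off frequencies and filter order are assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def respiratory_trace_from_accel(acc_xyz, fs_hz, band=(0.1, 0.5)):
        """Extract a breathing waveform from a 3-axis accelerometer on the abdomen.
        acc_xyz : (T, 3) raw accelerometer samples
        fs_hz   : sampling rate in Hz
        """
        b, a = butter(2, [band[0] / (fs_hz / 2), band[1] / (fs_hz / 2)], btype="band")
        filt = filtfilt(b, a, acc_xyz, axis=0)          # keep respiratory frequencies
        # project onto the direction of largest variance (dominant breathing axis)
        _, _, vt = np.linalg.svd(filt - filt.mean(axis=0), full_matrices=False)
        return filt @ vt[0]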
  • Detection of Head Motion using Navigators and a Linear Perturbation Model
    Thomas Ulrich1 and Klaas Paul Pruessmann1
    1Institute for Biomedical Engineering, ETH Zurich and University of Zurich, Zurich, Switzerland
    Our algorithm achieved high accuracy and precision in the phantom experiment: the RMS error was about 25 micrometers for all translation directions and 0.04 degrees around all rotation axes. The algorithm was also able to estimate the motion of our volunteer during the in-vivo experiments.
    Orbital navigator k-space trajectory. Left: Parametric plot of the trajectory shape. Right: Plots of the trajectory, gradients, and slew rate over time. The trajectory is made up of three orthogonal circles, with smooth transitions in between them. At a radius of 200 rad/m, the navigator gradients can be executed in about 1.65 milliseconds.
    Sequence diagram of a 3D T2*-weighted FFE sequence with 3D orbital navigator gradients inserted after the excitation and before the phase-encoding gradients.
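    Following the trajectory description above, the sketch below generates the basic shape of such an orbital navigator: three orthogonal k-space circles of radius 200 rad/m. Smooth transitions between the circles and the gradient/slew-rate design are omitted, and the sample count is arbitrary.

    import numpy as np

    def orbital_navigator(radius_rad_m=200.0, n_per_circle=256):
        """Three orthogonal k-space circles of equal radius, the basic shape of the
        orbital navigator; returns samples in rad/m with shape (3*n_per_circle, 3)."""
        phi = np.linspace(0.0, 2.0 * np.pi, n_per_circle, endpoint=False)
        c = radius_rad_m * np.cos(phi)
        s = radius_rad_m * np.sin(phi)
        z = np.zeros_like(phi)
        kxy = np.stack([c, s, z], axis=1)   # circle in the kx-ky plane
        kyz = np.stack([z, c, s], axis=1)   # circle in the ky-kz plane
        kzx = np.stack([s, z, c], axis=1)   # circle in the kz-kx plane
        return np.concatenate([kxy, kyz, kzx], axis=0)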
  • Effects of geometric distortions on navigator accuracy for motion corrected brain imaging at 7T
    Mads Andersen1 and Vincent Oltman Boer2
    1Philips Healthcare, Copenhagen, Denmark, 2Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Hvidovre, Denmark
    Brain imaging at 7T can benefit from motion correction, and EPI readouts can shorten navigator durations. We investigated the accuracy of volume navigators with different resolutions and EPI readout durations. The realignment error grows with the size of the motion, the voxel size, and the EPI readout duration.
    Figure 1: Examples of the simulated navigators: water navigators on the left, fat navigators on the right. Fat navigators were not simulated for echo times of 10 ms and longer because of the short T2* of fat.
    Figure 5: The fit value (see Figure 4) at a gold-standard motion score of 10 mm, for all simulated navigators. For each resolution, the readout durations (RO dur.) and echo times correspond to (from left to right): no EPI, EPI SENSE 5, EPI SENSE 3, EPI SENSE 1.