
Oral Session - Machine Learning Applications in CV Imaging
Cardiovascular
Wednesday, 19 May 2021 12:00 - 14:00
  • Improving deep unrolled neural networks for radial cine cardiac image reconstruction using memory-efficient training, Conv-LSTM based network
    Kanghyun Ryu1, Christopher M. Sandino1, Zhitao Li1, Xucheng Zhu2, Andrew Coristine3, Martin Janich4, and Shreyas S. Vasanawala1
    1Stanford University, Stanford, CA, United States, 2GE Healthcare, Menlo Park, CA, United States, 3GE Healthcare, Montreal, QC, Canada, 4GE Healthcare, Munich, Germany
    We propose two methods to improve current unrolled neural networks (UNNs) for non-Cartesian cine cardiac image reconstruction: a memory-efficient training method that allows the number of unrolls to be increased, and a novel network architecture based on convolutional LSTM.
    Figure 1. Overview of the MoDL-based UNN framework for radial CINE image reconstruction used in this study. Sensitivity maps are derived with the ESPIRiT method, the proximal block is designed with the proposed network architecture, and the data-consistency (DC) block is implemented with conjugate-gradient iterations.
    Figure 3. Representative results of radial CINE reconstructions from synthesized images in the short-axis view. The data were 18.8-fold accelerated with the trajectory shown at bottom-left. The proposed method (number of unrolls = 10, ConvLSTM) is compared to inverse gridding, a short CNN (number of unrolls = 5, Conv2D+1D), and a long CNN (number of unrolls = 10, Conv2D+1D). The proposed method provides improved reconstruction quality and sharper cardiac motion in the y-t profile image.
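    As a rough illustration of the data-consistency (DC) block described in Figure 1, here is a minimal numpy sketch of a MoDL-style DC step solved with conjugate-gradient iterations. The abstract gives no code; the operator A, the regularization weight lam, and the iteration count are illustrative assumptions, and a real non-Cartesian system would use an NUFFT operator rather than a dense matrix.

```python
import numpy as np

def conjugate_gradient(apply_AHA, b, x0, n_iter=10):
    """Solve apply_AHA(x) = b with plain CG (system assumed Hermitian positive definite)."""
    x = x0.copy()
    r = b - apply_AHA(x)
    p = r.copy()
    rs_old = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = apply_AHA(p)
        alpha = rs_old / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def dc_block(A, y, z, lam, n_iter=10):
    """MoDL-style data consistency: argmin_x ||Ax - y||^2 + lam*||x - z||^2,
    i.e. solve (A^H A + lam*I) x = A^H y + lam*z, with z the proximal/denoiser output."""
    apply_AHA = lambda x: A.conj().T @ (A @ x) + lam * x
    b = A.conj().T @ y + lam * z
    return conjugate_gradient(apply_AHA, b, z, n_iter)
```

    In a full unrolled network, this DC step would alternate with the proposed ConvLSTM proximal block for the chosen number of unrolls.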
  • Development, Validation, and Application of an Automated Deep Learning Workflow for Strain Analysis based on cine-MRI
    Manuel A. Morales1,2, Maaike van den Boomen2,3,4, Christopher Nguyen2,4, Jayashree Kalpathy-Cramer2, Bruce R. Rosen1,2, Collin Stultz 1,5,6, David Izquierdo-Garcia1,2, and Ciprian Catana2
    1Health Sciences & Technology, Massachusetts Institute of Technology, Cambridge, MA, United States, 2Radiology, Athinoula A. Martinos Center for Biomedical Imaging, MGH, HMS, Charlestown, MA, United States, 3Radiology, University Medical Center Groningen, Groningen, Netherlands, 4Cardiovascular Research Center, MGH, HMS, Charlestown, MA, United States, 5Cardiology, Massachusetts General Hospital, Boston, MA, United States, 6Electrical Engineering and Computer Science, MIT, Cambridge, MA, United States
    We developed an automated workflow for global and regional myocardial strain analysis, validated against tagging-MRI, to quantitatively characterize cardiac mechanics from ubiquitously acquired cine-MRI data. Application in patients revealed global and asymmetric abnormalities.
    Overview of proposed workflow. VCN centers and crops the input pair of cine-MRI frames. Tissue labels generated by CarSON are used to build an anatomical model. Motion estimates derived from CarMEN are used to calculate strain measures, and these estimates are combined with the anatomical model to enable global and regional strain analyses.
    Subject-wise regional application. Circumferential end-systolic strain (ESS) in healthy and myocardial infarction subjects shows that infarcts (red arrows) can result in diffused (center) and focal (right) strain reduction.
  • MyoMapNet: A Deep Neural Network for Accelerating the Modified Look-Locker Inversion Recovery Myocardial T1 Mapping to 5 Heart Beats
    Hossam El-Rewaidy1,2, Rui Guo1, and Reza Nezafat1
    1Medicine, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, United States, 2Graduate School of Bioengineering, Department of Computer Science, Technical University of Munich, Munich, Germany
    A deep artificial neural network (MyoMapNet) enables fast and precise myocardial T1 mapping quantification from only 4-5 T1-weighted images collected after a single inversion pulse, leading to shorter scan time and breath-holds of 4-5 heartbeats.
    Figure 1. MyoMapNet architecture: MyoMapNet uses a fully-connected neural network to estimate voxel-wise T1 values from T1-weighted images collected after a single Look-Locker inversion pulse. For each voxel, the signal values from 5 T1-weighted images are concatenated with their corresponding Look-Locker times and used as the network input (i.e. 10×1) for native T1 mapping. The input values are fed to a fully-connected network with 5 hidden layers of 400, 400, 200, 200, and 100 nodes, respectively. The output is the estimated T1 value at each voxel.
    Figure 3. Native T1 maps from three patients, reconstructed using MOLLI-5 (using only 5 T1-weighted images with 3-parameter fitting), MyoMapNet, and MOLLI-5(3)3 with a 3-parameter fitting model. MyoMapNet yields maps with more homogeneous signal compared to MOLLI-5.
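    The voxel-wise architecture in Figure 1 is concrete enough to sketch as a numpy forward pass: input of size 10 (5 signals plus 5 Look-Locker times), hidden layers of 400, 400, 200, 200, and 100 nodes, and a scalar T1 output. The activation function is not stated in the abstract; ReLU is assumed here for illustration, and the weights are random placeholders rather than trained values.

```python
import numpy as np

# Layer sizes from the abstract: 10 inputs -> 5 hidden layers -> 1 output (T1)
LAYER_SIZES = [10, 400, 400, 200, 200, 100, 1]

def init_params(rng):
    """Random placeholder weights (He-style scaling); a trained model would load these."""
    params = []
    for n_in, n_out in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:]):
        W = rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)
        b = np.zeros(n_out)
        params.append((W, b))
    return params

def forward(params, x):
    """x: (n_voxels, 10) signals + Look-Locker times -> (n_voxels,) estimated T1."""
    h = x
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)  # ReLU on hidden layers (assumed)
    W, b = params[-1]
    return (h @ W + b).ravel()          # linear output: one T1 value per voxel
```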
  • Deep-learning based super-resolution reconstruction for 3D isotropic coronary MR angiography in a one-minute scan
    Thomas Küstner1,2, Alina Psenicny1, Camila Munoz1, Niccolo Fuin3, Aurelien Bustin4, Haikun Qi1, Radhouene Neji1,5, Karl P Kunze1,5, Reza Hajhosseiny1, Claudia Prieto1, and René M Botnar1
    1School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom, 2Department of Radiology, Medical Image and Data Analysis (MIDAS), University Hospital of Tübingen, Tübingen, Germany, 3Ixico, London, United Kingdom, 4IHU LIRYC, Electrophysiology and Heart Modeling Institute, Université de Bordeaux, INSERM, Centre de recherche Cardio-Thoracique de Bordeaux, Bordeaux, France, 5MR Research Collaborations, Siemens Healthcare Limited, Frimley, United Kingdom
    The proposed deep learning-based super-resolution reconstructs a high-resolution image (1.2mm3) from a low-resolution input (1.2x4.8x4.8mm3) enabling coronary MR angiography acquisitions in a one-minute scan.
    Fig. 1: Proposed generative adversarial super-resolution (SR) framework with cascaded Enhanced Deep Residual Network for SR (EDSR) generator, trainable discriminator and perceptual loss network. Non-rigid motion-compensated CMRA data is acquired to form the low-resolution (1.2x3.6x3.6mm3 or 1.2x4.8x4.8mm3; superior-inferior x left-right x anterior-posterior) input image/patch which is reconstructed to the high-resolution output (0.9mm3 or 1.2mm3).
    Fig. 2 [animated]: Prospective SR reconstruction: coronal and coronary reformats of the low-resolution acquisition (1.2x4.8x4.8mm3, acquired in ~50s), the high-resolution acquisition (1.2mm3, acquired in ~7min), bicubic interpolation (1.2mm3), and the proposed super-resolution reconstruction (1.2mm3) in a patient with suspected CAD (prospectively acquired low-resolution scan, prospective cohort).
  • Fully automated aortic 4D flow MRI large-cohort analysis using deep learning
    Michael B Scott1, Haben Berhane1, Justin Baraboo1, Cynthia K Rigsby2, Joshua D Robinson2, Patrick M McCarthy1, S Chris Malaisrie1, Ryan J Avery1, Bradley D Allen1, Alexander Barker3, and Michael Markl1
    1Northwestern University, Chicago, IL, United States, 2Lurie Children's Hospital of Chicago, Chicago, IL, United States, 3University of Colorado, Anschutz Medical Campus, Aurora, CO, United States
    An automated CNN-based pipeline demonstrated in more than 2000 datasets can automatically preprocess, segment, and analyze aortic 4D flow MRI.
    Figure 1: Top: Pipeline. 1. 4D flow data are pulled from the server. 2. DICOMs are loaded and the velocity standard deviation is input into the eddy-current and noise-masking CNNs. 3. Velocity data are fed into the antialiasing CNN. 4. A phase-contrast MR angiogram (PCMRA) is calculated and input into the segmentation CNN. 5. Outputs are generated. 6. Results are pushed back to the server. Bottom: example outputs. A: PCMRA, B: segmentation preview, C: velocity MIP, D: data preview, E: mean velocity flow curve.
    Figure 2: Example outputs. Top left: healthy subject, Top right: bicuspid aortic valve (BAV) and coarctation patient. Bottom left: BAV and stenosis patient, Bottom right: tricuspid valve patient. For each patient, peak systolic velocity maximum intensity projections for manual workflow (left) and CNN pipeline (right) are shown, as well as mean velocity curves for manual (red) and CNN (blue). In the 3D mask comparison, grey is shared, red volumes are larger in the manual, and blue volumes are larger in the CNN.
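    Step 4 of the pipeline computes a PCMRA from the magnitude and velocity data. The exact formulation the authors use is not stated in the abstract; a common variant (assumed here) is the time-average of the magnitude-weighted speed, sketched below in numpy.

```python
import numpy as np

def pcmra(mag, vel):
    """Phase-contrast MR angiogram from 4D flow data.
    mag: (t, x, y, z) magnitude images over cardiac phases
    vel: (t, 3, x, y, z) velocity components (vx, vy, vz)
    One common formulation (an assumption here): time-average of
    magnitude-weighted speed, which highlights vessels with flow."""
    speed = np.sqrt((vel ** 2).sum(axis=1))   # (t, x, y, z) voxel-wise speed
    return (mag * speed).mean(axis=0)         # (x, y, z) angiogram
```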
  • Validation of a Deep Learning based Automated Myocardial Inversion Time Selection for Late Gadolinium Enhancement Imaging in a Prospective Study
    Seung Su Yoon1,2, Michaela Schmidt2, Manuela Rick2, Teodora Chitiboi3, Puneet Sharma3, Tilman Emrich4,5, Christoph Tilmanns6, Ralph Waßmuth6, Jens Wetzl2, and Andreas Maier1
    1Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany, 2Magnetic Resonance, Siemens Healthcare GmbH, Erlangen, Germany, 3Siemens Medical Solutions USA, Inc., Princeton, NJ, United States, 4Department of Radiology, University Medical Center, Johannes Gutenberg-University Mainz, Mainz, Germany, 5Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, United States, 6Diagnostikum Berlin, Berlin, Germany
    To standardize and automate the selection of the correct inversion time to null healthy myocardium, we propose an automated deep-learning-based system and validate it in a prospective study. The system achieved high accuracy, within the range of the observers' annotations.
    Figure 1: Overview of the proposed system based on an example. A SAX TI scout series is used as input to the system. After applying the localization, style-transfer, and segmentation networks, the time point at which the mean myocardial pixel intensity is minimal is selected as TInull. Within an 80ms window starting from TInull, the time point at which the difference between the average LV/RV blood-pool signal and the myocardial signal is highest is selected as TIcontrast.
    Figure 4: Qualitative results of the system output and the observers' annotations. The illustrated images show the first 16 phases of the standardized TI scout series. In a) and b), results on 1.5T data are shown; in c) and d), results on 3.0T data. In d), observer 2 selected a frame one frame later than TIcontrast; the deviation is negligible. In e), the series was acquired without compressed sensing, and in f) with compressed sensing, on the same patient with 4min 30s between acquisitions.
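    The TInull/TIcontrast selection rule described in Figure 1 reduces to a simple argmin/argmax once the network has produced per-frame mean signals. A minimal sketch, assuming the per-frame myocardial and averaged LV/RV blood-pool means are already extracted by the segmentation network (the helper name and inputs are hypothetical):

```python
import numpy as np

def select_ti(myo_means, blood_means, ti_ms, window_ms=80.0):
    """Select TInull and TIcontrast from a TI scout series.
    myo_means:   per-frame mean myocardial signal
    blood_means: per-frame mean LV/RV blood-pool signal (assumed pre-averaged)
    ti_ms:       inversion time of each frame in ms."""
    # TInull: frame where mean myocardial signal is minimal
    i_null = int(np.argmin(myo_means))
    ti_null = ti_ms[i_null]
    # TIcontrast: within an 80 ms window after TInull, maximize blood-myocardium contrast
    in_window = (ti_ms >= ti_null) & (ti_ms <= ti_null + window_ms)
    diff = np.where(in_window, blood_means - myo_means, -np.inf)
    i_contrast = int(np.argmax(diff))
    return ti_null, ti_ms[i_contrast]
```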
  • Voxel-wise Tracking of Grid Tagged Cardiac Images using a Neural Network Trained with Synthetic Data
    Michael Loecher1,2, Luigi E Perotti3, and Daniel B Ennis1,2,4,5
    1Radiology, Stanford University, Stanford, CA, United States, 2Radiology, Veterans Affairs Health Care System, Palo Alto, CA, United States, 3Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL, United States, 4Maternal & Child Health Research Institute, Stanford University, Stanford, CA, United States, 5Cardiovascular Institute, Stanford University, Stanford, CA, United States
    This work introduces a neural network, trained on a large synthetic motion dataset, for tracking myocardial motion in cine grid-tagged MRI images on a voxel-by-voxel basis. Voxel-wise displacement tracking is demonstrated, along with strain maps of improved quality.
    Figure 2: Animation of the tracking network output. The left panel shows a cropped image of the input grid-tagged data. The middle panel shows the tracked points overlaid on the image throughout the cardiac cycle. The right panel shows the displacement vectors of the tracked points (only 25% of points included for visibility).
    Figure 4: A) Ecc maps for both tracking methods, where similar values can be seen, with less blurring on the voxel tracked map. B) Corresponding Ecc curve from (A), where very similar values are seen. C) Err maps from both methods, where voxel tracking corresponds to higher values and less blurring. D) Err curves for this case, where higher Err is evident.
  • Prediction of aneurysm stability using a machine learning model based on 4D-Flow MRI and Black Blood MRI
    Miaoqi Zhang1, Mingzhu Fu1, Hanyu Wei1, Shuo Chen1, and Rui Li1
    1Center for Biomedical Imaging Research, Department of Biomedical Engineering, Tsinghua University, Beijing, China
    Keywords: intracranial aneurysm, 4D-Flow MRI, black blood MRI, support vector machine (SVM) model
    Figure 1. Flow pattern visualization of the intracranial aneurysm (IA) and adjacent parent artery (APA) was performed with (A) streamlines; the maximum-velocity point and the largest cross-section are shown for the (B) APA and (D) IA. Hemodynamic measurements within contours were performed for the (C) APA and (E) IA.
    Figure 3. Receiver operating characteristic (ROC) curves of the models. (A-C) Logistic regression (GLM) models predicting stable IA from (A) a single characteristic, (B) hemodynamic characteristics, and (C) six significant characteristics. (D-F) The corresponding SVM models: (D) a single characteristic, (E) hemodynamic characteristics, and (F) six significant characteristics.
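    The GLM and SVM models above are compared via ROC curves. The area under such a curve can be computed directly from prediction scores and labels using the Mann-Whitney formulation; this is a generic sketch, not the authors' code:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the Mann-Whitney U statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative
    case, counting ties as one half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos = scores[labels]
    neg = scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```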
  • Intracranial Vessel Wall Segmentation with 2.5D UNet++ Deep Learning Network
    Hanyue Zhou1, Jiayu Xiao2, Debiao Li1,2, Dan Ruan1,3, and Zhaoyang Fan2,4,5
    1Bioengineering, University of California, Los Angeles, Los Angeles, CA, United States, 2Cedars-Sinai Medical Center, Los Angeles, CA, United States, 3Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, United States, 4Radiology, University of Southern California, Los Angeles, CA, United States, 5Radiation Oncology, University of Southern California, Los Angeles, CA, United States
    We developed a deep learning method based on a 2.5D UNet++, with a loss function that combines a soft Dice coefficient loss and a distance-transform-approximated Hausdorff distance loss. The developed network further improved segmentation performance across metrics relative to the baseline.
    Table I. Quantitative comparison of all models
    Fig. 2. Visualization of model performance where the proposed method achieved a better result: dashed blocks (a) and (b) are two 3-slice examples from a vessel segment. The first column shows the original consecutive MRI slices (s1, s2, and s3); the second through last columns show the ground truth and the estimated segmentation from each model for the corresponding slice. Black represents the background, grey the vessel wall, and white the lumen.
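    The two loss terms named above can be sketched in numpy as follows. The distance-transform Hausdorff penalty follows the style of Karimi & Salcudean's distance-map weighting; the distance maps dist_pred and dist_target are assumed precomputed (e.g. with scipy.ndimage.distance_transform_edt), and the exact weighting and term balance used by the authors are not given in the abstract.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - soft Dice overlap; pred is a probability map in [0,1],
    target is a binary mask of the same shape."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def dt_hausdorff_loss(pred, target, dist_pred, dist_target):
    """Distance-transform approximation of the Hausdorff distance:
    segmentation errors are penalized in proportion to the squared
    distance transforms of both masks, so mistakes far from either
    boundary cost more."""
    err = (pred - target) ** 2
    return (err * (dist_pred ** 2 + dist_target ** 2)).mean()
```

    In training, the total loss would be a weighted sum of the two terms.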
  • Machine Learning aided k-t SENSE for fast reconstruction of highly accelerated PCMR data
    Grzegorz Tomasz Kowalik1, Javier Montalt-Tordera1, Jennifer Steeden1, and Vivek Muthurangu1
    1Institute of Cardiovascular Science, University College London, London, United Kingdom
    In general, the ML-aided k-t SENSE generated flow curves that were visually sharper. There were no statistically significant differences in peak velocities or stroke volumes. The technique enabled ~3.6x faster processing than the CS reconstruction, making it suitable for clinical use.

    Fig. 2. The ML aided k-t SENSE processing.

    Stage I – estimation of $$$M_{x,f}^2$$$. Both the flow-encoded ($$$y_{k,t}^{'}$$$) and flow-compensated ($$$y_{k,t}^{''}$$$) data were processed as described [2], and the u-net results were combined for the final x-f signal estimate. Stage II – k-t SENSE: a linear conjugate-gradient solver was used to minimise [1] and produce the final PCMR results.

    Fig. 3. Imaging results.

    $$$U_w$$$ reconstructions presented with artefacts of varying severity: visible reconstruction-patch boundaries and signal removal. These are not visible in the $$$U_w^M$$$ results. In two cases $$$U_w$$$ removed heart structures (e.g. the bottom row). In these hard cases, temporal blurring can be observed in the $$$U_w^M$$$ results. This had a small effect on the k-t SENSE magnitude results; however, it resulted in blurring of the extracted phase data (Fig. 4).

Digital Poster Session - Machine Learning Applications in CV Imaging I
Cardiovascular
Wednesday, 19 May 2021 13:00 - 14:00
  • Automatic segmentation of middle cerebral artery plaque based on deep learning
    Shuai Shen1,2,3,4, Xiao Liu5, Zhuyuerong Li5, Tao Jiang5, Hairong Zheng1,3,4, Xin Liu1,3,4, and Na Zhang1,3,4
    1Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 2College of Software, Xinjiang University, Urumqi, China, 3Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 4CAS Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 5Department of Radiology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
    The study verifies the effectiveness of using neural networks to segment middle cerebral artery plaques. Both models can effectively segment atherosclerotic plaque. In addition, all metrics for V-net are higher than for U-net, and the experiments show that V-net is more stable.
    Figure 1. Representative images of the segmentation results of the two deep learning models (U-net and V-net).
    Table 1. Three quantitative indicators reflecting the accuracy of the models.
  • Automated Vessel Segmentation for 2D Phase Contrast MR Using Deep Learning
    Ning Jin1, Maria Monzon2, Teodora Chitiboi3, Aaron Pruitt4, Daniel Giese2, Matthew Tong5, and Orlando P Simonetti5,6,7
    1Cardiovascular MR R&D, Siemens Medical Solutions USA, Inc., Cleveland, OH, United States, 2Siemens Healthcare, Erlangen, Germany, 3Siemens Medical Solutions USA, Inc, Princeton, NJ, United States, 4Biomedical Engineering, The Ohio State University, Columbus, OH, United States, 5Internal Medicine, The Ohio State University, Columbus, OH, United States, 6Davis Heart & Lung Research Institute, The Ohio State University, Columbus, OH, United States, 7Radiology, The Ohio State University, Columbus, OH, United States
    We developed a fully automated segmentation algorithm for phase-contrast MR images using deep learning (DL). Automated segmentation of aorta and main pulmonary artery from PC MRI scans can be successfully achieved using the DL model.
    Figure 2. Representative example of vessel contouring performed by manual and DL segmentation in MPA and AO.
    Figure 1. Schematic representation of the proposed segmentation model. A 2D U-net model with 3 encoder-decoder blocks is trained to regress heatmaps directly from input complex difference images to localize vessel center. A second 2D U-net model with 5 encoder-decoder blocks is trained to segment out the vessel using cropped magnitude images as input.
  • The Comparison of denoising methods for cardiac diffusion tensor imaging
    Xi Xu1, Yuxin Yang1, Yuanyuan Liu1, Dong Liang1, Hairong Zheng1, and Yanjie Zhu1
    1Shenzhen Institute of Advanced Technology, ShenZhen, China
    We evaluate three different image denoising methods in cardiac diffusion tensor imaging (CDTI) regarding image quality and accuracy of parameter estimates with simulation and ex-vivo experiments. 
    Fig. 1. A randomly selected noise-free simulated DWI image, the noisy images at SNR = 5-25 after adding white Gaussian noise, and the denoised results.
    Fig. 2. Parameter estimates of the helix angle (HA) from the simulated, noisy, and denoised (ANLM, LPCA, MPPCA) images.
  • Isovolumic Relaxation Time and e’ Metrics Evaluated by Deep-learning Analysis of Long-axis Cine: Correlations to Atrial Pressure and Fibrosis
    Dana Peters1, Jérôme Lamy1, Felicia Seemann2, Einar Heiberg3, and Ricardo Gonzales1
    1Yale Unversity, New Haven, CT, United States, 2National Institutes of Health, Bethesda, MD, United States, 3Lund University, Lund, Sweden
    We used machine learning to automate the difficult analyses required to measure diastolic functional metrics.
    Figure 1: A) Deep-learning identification of valve insertion points on long axis cine (see blue markers, indicated by white arrows). B) Processing of valve locations to obtain IVRT and e’, along with a’ and s’. C) Blinded qualitative analysis of atrial LGE: one subject had only mild atrial enhancement (left), while the other had extensive enhancement (right, arrows).
    Figure 2: Extensive LA LGE was associated with longer IVRT time (15 ±4.5% vs. 9.9± 4%, p=0.032), and lower |a’/s’| values (0.78 ±.25 vs. 1.0 ±0.28, p<0.05), reflecting the impact of atrial fibrosis on diastolic function.
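    The abstract does not describe how IVRT, e', a', and s' are derived from the tracked valve-insertion points; one plausible sketch is to differentiate the annular position over time and read peak velocities off the resulting curve. The helper names, inputs, and the sign convention (systolic motion positive) below are all assumptions for illustration.

```python
import numpy as np

def annulus_velocity(positions, dt):
    """Longitudinal annular velocity from tracked valve-insertion points.
    positions: (n_frames,) longitudinal position of the mitral annulus (mm)
    dt: frame interval (s). Returns velocity in mm/s via central differences."""
    return np.gradient(np.asarray(positions, dtype=float), dt)

def peak_velocities(vel):
    """s' as the peak systolic (positive) velocity and e' as the magnitude
    of the most negative (early-diastolic) velocity, under the assumed
    sign convention; a full implementation would also gate by cardiac phase
    to separate e' from a'."""
    s_prime = vel.max()
    e_prime = -vel.min()
    return s_prime, e_prime
```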
  • Comparison of Traditional fSNAP and 3D FuseUnet Based fSNAP
    Chuyu Liu1, Shuo Chen1, and Rui Li1
    1Center for Biomedical Imaging Research, Department of Biomedical Engineering, Tsinghua University, Beijing, China
    By adapting 3D FuseUnet, CNN fSNAP showed better performance in lumen and IPH depiction compared with traditional fSNAP. The results suggest that deep learning can help fast SNAP scans produce high quality images, which could have great clinical utility.
    Figure 2. Comparison of fSNAP and CNN fSNAP
    Figure 1. The 3D FuseUnet used in this study. The real and imaginary parts of IR-TFE and fSNAP are used as input 1 and input 2 respectively and the loss for the network is MSE.
  • Deep phenotyping of individuals with arrhythmogenic cardiomyopathy-associated genetic variants using myocardial T1 and T2 mapping
    Eric D Carruth1, Samuel W Fielden1, Amro Alsaid1, Brandon K Fornwalt1, and Christopher M Haggerty1
    1Geisinger, Danville, PA, United States
    In 18 individuals identified with genetic risk for arrhythmogenic cardiomyopathy from population genomic screening, we observed increased native myocardial T1, but unchanged T2, post-contrast T1, and sECV compared with controls.
    Figure 1. Summary of comparisons between genotype-positive (G+) individuals and controls (G−). Values shown are mean±SEM, unless otherwise indicated. LVEF-left ventricular ejection fraction, RVEF-right ventricular ejection fraction, LGE-late gadolinium enhancement, sECV-synthetic extracellular volume. *p<0.05.
    Figure 2. Native T1 maps in A) a G− control and B) a G+ variant-positive individual. Representative patients were selected as those with the median native T1 value in each group. While subtle, elevated T1 values in the myocardium of the G+ patient's left ventricle (LV) are evident.
  • Unsupervised Tag Removal in Cardiac Tagged MRI using Robust Variational Autoencoder
    Botian Xu1 and John C. Wood1,2
    1Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States, 2Division of Cardiology, Children's Hospital Los Angeles, Los Angeles, CA, United States
    Post-processing of cardiac tagged MRI has always been challenging because of poor SNR and image artifacts. We treat tags as anomalies and employ a robust variational autoencoder (RVAE), which is more robust to outliers, to generate tag-free results from cardiac tagged images.
    Fig. 1. The four chamber view of ground truth and the recovered images from the simulated tagged images. Top row: the original size of the image; bottom row: magnified image of the highlighted area.
    Fig. 2. The four chamber view of real tagged image and the recovered results. Top row: the original size of the image; bottom row: magnified image of the highlighted area.
  • A comparison of spiral trajectories in a deep learning reconstruction for DENSE
    Samuel Fielden1,2, Eric Carruth1, Brandon Fornwalt1,3,4, and Christopher Haggerty1,3
    1Translational Data Science and Informatics, Geisinger, Danville, PA, United States, 2Medical and Health Physics, Geisinger, Danville, PA, United States, 3Heart Institute, Geisinger, Danville, PA, United States, 4Radiology, Geisinger, Danville, PA, United States
    The performance of deep learning-based reconstructions for accelerated DENSE imaging via k-space undersampling is dependent upon the acquisition trajectory.
    Figure 2. Peak systolic magnitude image from one subject, reconstructed with zero filling (ZF) and the DCCNN for all trajectories and acceleration rates 2X and 6X (3X and 4X omitted for space).
    Figure 4. Global radial (top), circumferential (middle), and longitudinal (bottom) strains derived from images reconstructed from each of the three trajectories across all acceleration rates. Statistically significant differences (*p<0.05) from the fully-sampled reference were found for high acceleration rates when employing the CD trajectory.
  • Deep Learning for MR Vessel Wall Imaging: Automated Detection of arterial vessel wall and plaque
    Wenjing Xu1,2,3,4, Xiong Yang5, Yikang Li1,3,4, Jin Fang6, Guihua Jiang6, Shuheng Zhang5, Yanqun Teng5, Xiaomin Ren5, Lele Zhao5, Jiayu Zhu5, Qiang He5, Hairong Zheng1,3,4, Xin Liu1,3,4, and Na Zhang1,3,4
    1Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 2Faculty of Information Technology, Beijing University of Technology, Beijing, China, 3Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 4CAS key laboratory of health informatics, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 5Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China, 6Department of Radiology, Guangdong Second Provincial General Hospital, Guangzhou, China
    We propose an automatic analysis method for the arterial vessel wall with good performance, which is important for plaque analysis.
    Figure 1: The architecture of the 2DVet network.
    Figure 4: The representative images and segmentation results from two clinical cases. Case A and Case B represent the images with anterior circulation and posterior circulation, respectively.
  • A Machine Learning Approach for Predicting cardiovascular event in HCM patient on Cardiac MRI
    Kankan Hao1,2, Yanjie Zhu1,2, Dong Liang1,2, Shihua Zhao3, Xin Liu1,2, and Hairong Zheng1,2
    1Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 2University of Chinese Academy of Sciences, Beijing, China, 3Department of Magnetic Resonance Imaging, Fuwai Hospital and National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
    We verify a nonlinear relationship between CMR risk factors and cardiovascular events and find that, compared with the Cox regression model, the ML model has better performance for predicting cardiovascular events.
    The figure shows the ROC curves for the ML model and the Cox regression model with a random 80:20 split. The table shows the results of the ML model and the Cox regression model under 5-fold cross-validation; the C-statistic is used to evaluate the models.
    The table shows the nonlinearity test for the strain indices. There is a clear nonlinear relationship for 3D systolic apical longitudinal strain.
  • Cross Validation of a Deep Learning-Based ESPIRiT Reconstruction for Accelerated 2D Phase Contrast MRI
    Jack R. Warren1, Matthew J. Middione2, Julio A. Oscanoa2,3, Christopher M. Sandino4, Shreyas S. Vasanawala2, and Daniel B. Ennis2,5
    1Department of Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA, United States, 2Department of Radiology, Stanford University, Stanford, CA, United States, 3Department of Bioengineering, Stanford University, Stanford, CA, United States, 4Department of Electrical Engineering, Stanford University, Stanford, CA, United States, 5Cardiovascular Institute, Stanford University, Stanford, CA, United States
    A previously described DL-ESPIRiT network for the reconstruction of highly accelerated 2D Phase Contrast MRI data was evaluated using k-fold cross validation to aid in the understanding of the accuracy and precision of clinically relevant measures of flow.
    Figure 1: Vessel ROI pixel-by-pixel velocity difference compared to FS (% of VENC) measured in percent error for acceleration rates 5-10x. The maximum, minimum, and medians (variance) for both the upper and lower bounds on the 95% confidence intervals are displayed for each acceleration rate (red), as well as the median flow difference (bias) for the 8 folds (blue).
    Table 1: Vessel ROI pixel-by-pixel velocity (% of VENC), peak velocity (%), and total flow (%) differences compared to FS for acceleration rates 5-10x. For each flow metric, the bias (median flow differences) for all 8 folds are reported.
  • Deep Learning Based ESPIRiT Reconstruction for Highly Accelerated 2D Phase Contrast MRI
    Julio A. Oscanoa1,2, Matthew J. Middione2, Christopher M. Sandino3, Shreyas S. Vasanawala2, and Daniel B. Ennis2,4
    1Department of Bioengineering, Stanford University, Stanford, CA, United States, 2Department of Radiology, Stanford University, Stanford, CA, United States, 3Department of Electrical Engineering, Stanford University, Stanford, CA, United States, 4Cardiovascular Institute, Stanford University, Stanford, CA, United States
    2D PC-MRI datasets can be reconstructed with ±5% accuracy for total flow and peak velocity using the proposed Deep Learning based reconstruction methods for acceleration rates up to 8x.
    Figure 1. (A) The original DL-ESPIRiT9 reconstruction pipeline with an unrolled network architecture that alternates between a Data Consistency update and a CNN-based denoising step. (B) The CNN was comprised of (2+1)D convolutional layers as described in 9. Both PC-MRI velocity encodings, $$$I_1$$$ and $$$I_2$$$, were used to generate a quantitative phase difference image using either the proposed (C) Phase Contrast DL-ESPIRiT (PC-DLE) or (D) Complex Difference DL-ESPIRiT (CD-DLE) reconstruction pipelines.
    Figure 2. Representative velocity movies for (A) FS and 8x undersampled data reconstructed using (B) L1E, (C) PC-DLE, and (D) CD-DLE. Pixel-by-pixel velocity difference movies are shown for (E) FS vs. L1E, (F) FS vs. PC-DLE, and (G) FS vs. CD-DLE. Both DL frameworks present significantly lower error (F, G), especially at the cardiac phases with high flow.
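    The phase-difference and complex-difference signals underlying the PC-DLE and CD-DLE pipelines can be sketched as follows. The conjugate-product phase difference and the linear VENC scaling are standard PC-MRI relations, though the pipelines' exact preprocessing is not specified in the abstract.

```python
import numpy as np

def velocity_from_encodings(i1, i2, venc):
    """Velocity map from the two PC-MRI velocity encodings I1 and I2.
    The phase difference is taken via the complex conjugate product,
    which keeps the result in [-pi, pi); velocity scales linearly so
    that a phase of pi corresponds to the VENC."""
    dphi = np.angle(i2 * np.conj(i1))
    return venc * dphi / np.pi

def complex_difference(i1, i2):
    """Complex-difference image, the alternative signal used by the
    CD-DLE pipeline."""
    return np.abs(i2 - i1)
```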
  • Exercise Effect in Human Brain Evaluated by 3D SWI Depiction of Lenticulostriate Artery with Denoising Deep Learning Reconstruction and 3D pCASL.
    Vadim Malis1, Won Bae1, Asako Yamamoto1, Yoshimori Kassai2, Marin A McDonald1, and Mitsue Miyazaki1
    1Radiology, UC San Diego, San Diego, CA, United States, 2Canon Medical, Tochigi, Japan
    Mild exercise enhanced delineation and increased the measured length of the lenticulostriate arteries on high-resolution 3D SWI with denoising deep learning reconstruction, and increased perfusion measured using 3D pCASL.
    Figure 2: 3D SWI images of the lenticulostriate artery (LSA) region with (a) and without (b) dDLR. With dDLR, noise is removed, better depicting the LSA vessels (arrows).
    Figure 3: Width colormap of VESA identified blood vessels before and after exercise.
  • Deep Learning based Automatic Multi-Regional Segmentation of the Aorta from 4D Flow MRI
    Haben Berhane1, Michael Scott1, Justin Baraboo1, Cynthia Rigsby2, Joshua Robinson2, Bradley Allen3, Chris Malaisrie3, Patrick McCarthy3, Ryan Avery3, and Michael Markl1
    1Biomedical Engineering, Northwestern University, Chicago, IL, United States, 2Lurie Childrens Hospital of Chicago, Chicago, IL, United States, 3Northwestern Radiology, Evanston, IL, United States
    A convolutional neural network was trained and validated for the automatic 3D regional segmentation of ascending, arch, and descending aorta, showing excellent Dice scores and agreement to manual flow analysis and interobserver comparisons.
    Figure 1: Workflow. All 4D flow data (Figure 1A) underwent standard 4D flow preprocessing and were used to generate 3D phase contrast (PC) MRAs (Figure 1B). The 3D PCMRA was used to generate the ground truth via manual or automated segmentation of the aorta (utilizing a completely independent CNN) and manual labeling of the ascending aorta (AAo), arch, and descending aorta (DAo) (Figure 1C). The 3D PCMRA was also used as the input for the CNN, generating automated segmentations (Figure 1D). Training and testing were performed through 10-fold cross-validation.
    Figure 2: Examples of the manual and automated segmentations, as well as a difference map between them. Each example showcases a unique geometry of the aorta and a distinct placement of the aortic arch. In Figure 2A, the arch is located at the peak of the aorta; Figure 2B has a wider aortic arch; and in Figure 2C, the arch is slightly left of the top of the aorta. The Dice scores were AAo: 0.95, arch: 0.95, DAo: 0.98 for Figure 2A; AAo: 0.95, arch: 0.88, DAo: 0.96 for Figure 2B; and AAo: 0.95, arch: 0.86, DAo: 0.96 for Figure 2C.
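    The Dice scores quoted above can be computed directly from a pair of binary masks; a minimal NumPy sketch (the toy masks below are illustrative, not study data):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sorensen-Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 3D masks standing in for an aortic segment (illustrative only)
truth = np.zeros((8, 8, 8), dtype=bool)
truth[2:6, 2:6, 2:6] = True          # 4x4x4 = 64 voxels
pred = np.zeros_like(truth)
pred[2:6, 2:6, 3:7] = True           # same shape, shifted one voxel in z

print(dice_score(pred, truth))       # 0.75
```

A one-voxel shift of a small structure already costs a quarter of the score, which is why the arch (the smallest region here) shows the lowest Dice values.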
  • Deep-Learning epicardial fat quantification using 4-chambers Cardiac MRI segmentation, comparison with total epicardial fat volume
    Pierre Daudé1, Patricia Ancel2, Sylviane Confort-gouny1, Anne Dutour2, Bénédicte Gaborit2, and Stanislas Rapacchi1
    1Aix-Marseille Univ, CNRS, CRMBM, Marseille, France, 2APHM, Hôpital Universitaire Timone, Service d’Endocrinologie, Marseille, France
    Deep-learning segmentation of the epicardial adipose tissue (EAT) surface on 4-chamber cine images proved evaluation of this long-overlooked biomarker feasible in a database of 126 subjects. Networks reached relative surface errors <20% on the upper half of the test set, where the two observers agreed within 15%.
    Figure 1: Comparison of total epicardial fat volume against the 4-chamber surface measured on the systolic or diastolic frame, across the three cohorts merged for the database.
    Figure 3: Representative automated segmentation results for each EAT surface population quartile. White arrows show network segmentation errors.
  • Identification of Hemodynamic Biomarkers for Bicuspid Aortic Valve Patients using Machine Learning
    Pamela Franco1,2,3, Julio Sotelo1,3,4, Lydia Dux-Santoy5, Andrea Guala5, Aroa Ruiz-Muñoz5, Arturo Evangelista5, José Rodríguez-Palomares5, and Sergio Uribe1,3,6
    1Biomedical Imaging Center, School of Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile, 2Electrical Engineering Department, School of Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile, 3Millennium Nucleus for Cardiovascular Magnetic Resonance, Santiago, Chile, 4School of Biomedical Engineering, Universidad de Valparaíso, Valparaíso, Chile, 5Department of Cardiology, Hospital Universitari Vall d’Hebron, Vall d’Hebron Institut de Recerca (VHIR), Universitat Autònoma de Barcelona, Barcelona, Spain, 6Radiology Department, School of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
    The clinical significance of BAV disease justifies the need for improved clinical guidelines. Medical imaging has demonstrated the existence of altered hemodynamics in these patients. We present a machine learning method with a feature selection mechanism to accurately classify volunteers and BAV patients and to identify discriminative hemodynamic biomarkers.
    Figure 1. Features selection using SFS and exhaustive search: (a) Sequential forward selection. There are five selected features, and they correspond to Velocity Angle in AAo, Velocity Angle in AArch, Forward Velocity in AAo, Regurgitation Fraction in pDAo, and Helicity Density in AAo. (b) Feature Space in 3D. The selected features are certain hemodynamic features. We can see that the separability is ‘good’ between both classes (volunteers and BAV patients).
    Table 2. Average accuracy and standard deviation for different combinations of classifiers and features. Each experiment used 10-group cross-validation, repeated 10 times, with a 95% confidence interval.
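    A cross-validation protocol of this kind (10 folds, repeated 10 times with fresh partitions) amounts to regenerating fold indices per repeat; a sketch of the index generation in pure NumPy (sample count, fold count, and seed are illustrative, and per-subject grouping is not modeled):

```python
import numpy as np

def repeated_kfold_indices(n_samples, k=10, repeats=10, seed=0):
    """Yield (train_idx, test_idx) for each fold of each repeat."""
    rng = np.random.default_rng(seed)
    for _ in range(repeats):
        perm = rng.permutation(n_samples)          # re-shuffle every repeat
        folds = np.array_split(perm, k)
        for i in range(k):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
            yield train_idx, test_idx

# Sanity check: each repeat uses every sample exactly once as test data
n = 47
splits = list(repeated_kfold_indices(n, k=10, repeats=10))
print(len(splits))                                 # 100 train/test pairs
for r in range(10):
    seen = np.concatenate([splits[r * 10 + i][1] for i in range(10)])
    assert sorted(seen.tolist()) == list(range(n))
```

Accuracies from the 100 resulting fits are then averaged, and the spread across repeats yields the reported standard deviation and confidence interval.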
  • Fast personalization of cardiac mechanical models using parametric physics informed neural networks
    Stefano Buoso1, Thomas Joyce1, and Sebastian Kozerke1
    1ETH Zurich, Zurich, Switzerland
    Physics informed neural networks can be personalized to cardiac MRI data and trained with unsupervised approaches. They can be quickly trained and used to simulate cardiac cycles for various physiological conditions.
    Figure 1. Schematic representation of the PINN. From the MR data, anatomy, microstructure, tissue and circulation properties are defined. A dense neural network is generated with a preselected number of FM bases as the last layer. The bases are set as fixed network weights which are not updated during training. The PINN is trained to provide the deformation consistent with cardiac mechanics, and it is then coupled to a lumped-parameter model of the systemic circulation to predict cardiac deformation and the corresponding functional metrics.
    Figure 5. Comparison of pressure-volume loops for 4 of the 60 new anatomies, computed with the PINN (5 hidden layers, 5 neurons per layer, 10 FM bases) and with the FE model.
  • PS-VN: integrating deep learning into model-based algorithm for accelerated reconstruction of real-time cardiac MR imaging
    Zhongsen Li1, Hanyu Wei1, Chuyu Liu1, Yichen Zheng1, Shuo Chen1, and Rui Li1
    1Center for Biomedical Imaging Research, Department of Biomedical Engineering, Tsinghua University, Beijing, China
    We integrated the classical "partial separable" (PS) model with a "variational network". The proposed PS-VN architecture reconstructs over 4,000 image frames in approximately 10 seconds, with accuracy comparable to the baseline method.
    Figure 1. A schematic illustration of the proposed PS-VN reconstruction pipeline. (a) The step of the classical PS model that solves for the spatial basis images U is substituted by a variational network (VN). (b) An unrolled layer of PS-VN consists of a data-fidelity block and a regularization block; the parameters are tuned by backpropagation during network training. (c) PS-VN recovers the corrupted spatial basis images U. A^Hb is used as the initial value U(0), the input of the VN. The reconstructed images are obtained by multiplying the spatial basis U with the temporal basis V.
    Table 1. Summary statistics of the different reconstruction methods. The metrics are averaged over 4200 time frames of the test set. Generally, PS reconstruction shows the best nRMSE, PSNR and SSIM; however, it takes around 10 min to reconstruct a single slice. Adding TV constraints to the PS model reduces the reconstruction time to less than 4 min, at the cost of a decrease in PSNR and SSIM. PS-VN produces higher PSNR and SSIM than the PS+TV method and consumes only around 10 seconds.
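    The speed of PS-based methods comes from the low-rank factorization itself: once the spatial basis U is recovered, every frame follows from one matrix product with the temporal basis V. A toy NumPy sketch of the factor-and-reconstruct step (sizes and rank are illustrative; undersampling and the VN are not modeled):

```python
import numpy as np

n_voxels, n_frames, rank = 256, 4200, 8

rng = np.random.default_rng(1)
U_true = rng.standard_normal((n_voxels, rank))   # spatial basis
V_true = rng.standard_normal((rank, n_frames))   # temporal basis
X = U_true @ V_true                              # Casorati matrix of the series

# Recover a rank-8 factorization from the truncated SVD
Us, s, Vt = np.linalg.svd(X, full_matrices=False)
U = Us[:, :rank] * s[:rank]                      # spatial basis, singular values absorbed
V = Vt[:rank, :]                                 # temporal basis

X_rec = U @ V                                    # all 4200 frames in one product
print(np.allclose(X, X_rec))                     # True: the series is exactly rank-8
```

Because only the small factors U and V are ever solved for, reconstructing thousands of frames costs little more than reconstructing one.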
  • Sensitivity of a Deep Learning Model for Multi-Sequence Cardiac Pathology Segmentation to Input Data Transformations
    Markus J Ankenbrand1, Liliya Shainberg1, Michael Hock1, David Lohr1, and Laura Maria Schreiber1
    1Chair of Cellular and Molecular Imaging, Comprehensive Heart Failure Center (CHFC), University Hospital Würzburg, Würzburg, Germany
    Sensitivity analysis reveals differential sensitivity of pathological classes to basic image transformations for a published deep learning segmentation model.
    Overlay of predicted segmentation masks over transformed versions of an input image. Each row shows images with the same transformation but different parameters (e.g. the first row shows rotations by different angles). The ground truth segmentation mask for this image is shown in Figure 1.
    Quantitative effect on the Dice score for each class over the parameter space of three transformations. A: Rotation; positive values denote counter-clockwise rotation, negative values clockwise rotation. B: Zoom; a value of 480px is the default scale, larger values mean zooming out and smaller values zooming in. C: Brightness; a value of 0.5 denotes no change in brightness, 0 means completely dark (all black) and 1 means full brightness (all white).
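    The same kind of sensitivity sweep can be mocked up with any fixed segmenter. In this hedged toy example the "model" is a plain intensity threshold rather than a neural network, and the transform is brightness scaling; it illustrates how Dice can stay flat over part of the parameter range and then collapse abruptly:

```python
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    s = a.sum() + b.sum()
    return 2.0 * inter / s if s else 1.0

def segment(image, threshold=0.5):
    """Stand-in 'model': fixed-threshold segmentation."""
    return image > threshold

# Synthetic image: a bright disk (the structure of interest) on a dark background
yy, xx = np.mgrid[:64, :64]
truth = (yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2
image = np.where(truth, 0.8, 0.2)

# Sweep the brightness parameter and record Dice against the ground truth
for brightness in (1.0, 0.8, 0.5):
    score = dice(segment(image * brightness), truth)
    print(f"brightness {brightness:.1f}: dice {score:.2f}")
```

Here Dice is 1.00 at brightness factors 1.0 and 0.8 but 0.00 at 0.5, the kind of class-dependent breakdown the sensitivity analysis is designed to expose.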
Digital Poster Session - Machine Learning Applications in CV Imaging II
Cardiovascular
Wednesday, 19 May 2021 13:00 - 14:00
  • End-to-end Motion Corrected Reconstruction using Deep Learning for Accelerated Free-breathing Cardiac MRI
    Haikun Qi1, Gastao Cruz1, Thomas Kuestner1, Karl Kunze2, Radhouene Neji2, René Botnar1, and Claudia Prieto1
    1School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom, 2MR Research Collaborations, Siemens Healthcare Limited, Frimley, United Kingdom
    In this study, we propose an end-to-end deep learning non-rigid motion-corrected reconstruction technique for fast reconstruction of highly undersampled free-breathing CMRA.
    Fig. 5 Reformatted coronary arteries from 9x accelerated CMRA reconstructed using non-rigid PROST and the proposed MoCo-MoDL. First row: the test patient shown in Fig. 4; second row: one of the test healthy subjects.
    Fig. 4 Whole-heart 9x accelerated CMRA from one representative test patient, reconstructed using non-rigid PROST (left) and the proposed MoCo-MoDL (right).
  • Reduction of contrast agent dose in cardiovascular MR angiography using deep learning
    Javier Montalt-Tordera1, Michael Quail1, Jennifer Anne Steeden1, and Vivek Muthurangu1
    1Centre for Cardiovascular Imaging, UCL Institute of Cardiovascular Science, University College London, London, United Kingdom
    Deep learning enables an 80% reduction in contrast agent dose in cardiovascular contrast-enhanced MR angiography while maintaining image quality and clinical validity.
    Figure 5. Representative images from the prospective study. Multiplanar reformats of the ascending aorta (AAO), descending aorta (DAO), main pulmonary artery (MPA), left pulmonary artery (LPA) and right pulmonary artery (RPA).
    Figure 1. (A) Estimation of the intensity ratio between low-dose data (LD-MRA) and high-dose data (HD-MRA). T: thresholding followed by morphological opening. M: compute mean over ROI for both images. (B) Generation of synthetic low-dose (SLD-MRA) images, using the estimated intensity ratio, to be paired with the corresponding high-dose (HD-MRA) images for training.
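    Step (B) of the caption, scaling high-dose images by the estimated intensity ratio to synthesize paired training data, can be sketched as follows (the threshold, the 20% ratio, and the array names are illustrative assumptions, not the study's values):

```python
import numpy as np

def estimate_intensity_ratio(low_dose, high_dose, threshold=0.3):
    """Mean intensity ratio over a crude thresholded ROI (step A)."""
    roi = high_dose > threshold * high_dose.max()
    return low_dose[roi].mean() / high_dose[roi].mean()

def make_synthetic_low_dose(high_dose, ratio):
    """Scale high-dose data by the ratio to form a training pair (step B)."""
    return high_dose * ratio

# Toy volumes: the low-dose signal is 20% of the high-dose signal
rng = np.random.default_rng(2)
hd = rng.uniform(0.5, 1.0, size=(32, 32))   # stand-in HD-MRA
ld = 0.2 * hd                               # stand-in LD-MRA

ratio = estimate_intensity_ratio(ld, hd)
sld = make_synthetic_low_dose(hd, ratio)    # stand-in SLD-MRA
print(round(ratio, 3))                      # 0.2
```

The network can then be trained on (SLD-MRA, HD-MRA) pairs without ever acquiring matched two-dose scans for every subject.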
  • Improving automated aneurysm detection on multi-site MRA data: lessons learnt from a public machine learning challenge
    Tommaso Di Noto1, Guillaume Marie1, Sebastien Tourbier1, Yasser Alemán-Gómez1,2, Oscar Esteban1, Guillaume Saliou1, Meritxell Bach Cuadra1,3, Patric Hagmann1, and Jonas Richiardi1
    1Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland, 2Center for Psychiatric Neuroscience, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland, 3Medical Image Analysis Laboratory (MIAL), Centre d’Imagerie BioMédicale (CIBM), Lausanne, Switzerland
    Participating in machine learning challenges provides valuable insights for medical imaging problems. In our case, the lessons learnt from a public challenge helped us improve the sensitivity of our model both on an in-house test dataset and on the challenge test data.
    MRA orthogonal views of a 31-year-old female subject: blue patches are those retained by the anatomically-informed sliding-window approach. (top right): 3D schematic representation of the sliding-window approach; of all the patches in the volume (white), we retain only those located in the proximity of the polygon of Willis (blue).
    (left): 24 landmark points (in pink) located in specific positions of the Willis polygon (white segmentation) in MNI space. (right): same landmark points co-registered to the MRA space of one subject
  • Cardiac MRI feature tracking by deep learning from DENSE data
    Yu Wang1, Sona Ghadimi1, Changyu Sun1, and Frederick H. Epstein1,2
    1Biomedical Engineering, University of Virginia, Charlottesville, VA, United States, 2Radiology, University of Virginia, Charlottesville, VA, United States
    A DENSE-trained deep network with through-time correction is a promising new method to predict intramyocardial motion from contour motion.
    Figure 1: Overall concept of using DL of DENSE datasets to predict intramyocardial displacement from contour motion. (A) Training of FlowNet2 using DENSE data, and (B) addition of a through-time correction network.
    Figure 4: Example myocardial displacement movies for FlowNet2, DT-FlowNet2, TC-DT-FlowNet2 and DENSE ground truth.
  • Fully Automated Myocardium Strain Analysis using Deep Learning
    Xiao Chen1, Masoud Edalati2, Qi Liu2, Xingxian Shou2, Abhishek Sharma1, Mary P. Watkins3, Daniel J. Lenihan3, Linzhi Hu2, Gregory M. Lanza3, Terrence Chen1, and Shanhui Sun1
    1United Imaging Intelligence, Cambridge, MA, United States, 2UIH America, Inc., Houston, TX, United States, 3Cardiology, Washington University School of Medicine, St. Louis, MO, United States
    A deep-learning-based fully-automated myocardium strain assessment system is proposed and validated for accurate strain analyses on patient data.
    Workflow of the proposed fully automated cardiac strain and function analyses.
    Summary of global and segmental Ell and Ecc using fastSENC and autoFT for oncology and non-oncology patients. Mean (std) numbers are reported.
  • Deep Learning-Based ESPIRiT Reconstruction for Accelerated 2D Phase Contrast MRI: Analysis of the Impact of Reconstruction Induced Phase Errors
    Matthew J. Middione1, Julio A. Oscanoa1,2, Michael Loecher1, Christopher M. Sandino3, Shreyas S. Vasanawala1, and Daniel B. Ennis1,4
    1Department of Radiology, Stanford University, Palo Alto, CA, United States, 2Department of Bioengineering, Stanford University, Palo Alto, CA, United States, 3Department of Electrical Engineering, Stanford University, Palo Alto, CA, United States, 4Cardiovascular Institute, Stanford University, Stanford, CA, United States
    In this work we analyzed the impact of reconstruction induced phase bias to determine the maximum acceleration factor that could be used with CS and DL reconstruction frameworks for 2D PC-MRI while minimizing errors in peak velocity and total flow within ±5%.
    Figure 1. Overview of the background phase offset correction method. (A) Magnitude and velocity images were used as input to generate masked images of magnitude, velocity, and static tissue using a 60% signal intensity threshold. (B) The resulting static tissue mask was then used to generate a polynomial fit velocity image, which provides an estimated background phase offset image that can be used to correct the acquired velocity image.
    Figure 2. Pixel-by-pixel histogram differences demonstrate the magnitude of the reconstruction induced background phase offset bias, $$$\phi_{R}$$$ (cm/s), for L1E, PC-DLE and CD-DLE (A) within static tissue and (B) inside the vessel ROIs. The median (blue line) and 95%-CIs (lines) of $$$\phi_{R}$$$ are plotted as a function of the acceleration rate.
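    The correction in Figure 1 amounts to a least-squares polynomial fit of the velocity image restricted to static-tissue pixels, with the fitted surface then subtracted everywhere. A first-order (planar) NumPy sketch with a synthetic offset (the polynomial order and all values here are illustrative):

```python
import numpy as np

def fit_background_phase(velocity, static_mask):
    """Least-squares planar fit of velocity over static-tissue pixels only."""
    yy, xx = np.mgrid[:velocity.shape[0], :velocity.shape[1]]
    # Design matrix for a first-order polynomial: v = a + b*x + c*y
    A = np.stack([np.ones_like(xx), xx, yy], axis=-1).astype(float)
    coeffs, *_ = np.linalg.lstsq(A[static_mask], velocity[static_mask], rcond=None)
    return A @ coeffs            # estimated offset image over the full FOV

# Synthetic velocity image: a true planar offset plus flow in one vessel ROI
yy, xx = np.mgrid[:64, :64]
offset_true = 0.5 + 0.01 * xx - 0.02 * yy              # cm/s background
vessel = (yy - 32) ** 2 + (xx - 32) ** 2 < 6 ** 2
velocity = offset_true + np.where(vessel, 80.0, 0.0)   # cm/s with flow

offset_est = fit_background_phase(velocity, static_mask=~vessel)
corrected = velocity - offset_est
print(np.allclose(offset_est, offset_true))            # exact planar recovery
```

Fitting only over static tissue is what keeps genuine flow (the 80 cm/s vessel) out of the estimated background surface.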
  • Exploring feature space of MR vessel images with limited data annotations through metric learning and episodic training
    Kaiyue Tao1, Li Chen2, Niranjan Balu3, Gador Canton3, Wenjin Liu3, Thomas S. Hatsukami4, and Chun Yuan3
    1University of Science and Technology of China, Hefei, China, 2Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States, 3Department of Radiology, University of Washington, Seattle, WA, United States, 4Department of Surgery, University of Washington, Seattle, WA, United States
    We explored vessel wall imaging information hidden in MRI images from the OAI dataset. We designed a metric learning network combined with an episodic training method to overcome the problem of limited annotations, and demonstrated its ability to learn a meaningful feature space.
    Figure 2 - The structure of our network. A Conv Block consists of a CNN layer, a batch normalization layer, and a ReLU activation function. The kernel sizes of the convolution layers are 7×7, 3×3, 3×3 and 3×3. A feature map of size (128, 7, 7) is output from the last Conv Block and is then reshaped to (128, 1) after max pooling. The network has 129,600 parameters in total.
    Figure 4 - Feature map of validation samples with normal/abnormal clusters in 2D space using t-SNE. Each dot represents the average feature of 5 slices of an MRI scan, as described in the "inference" part. The colors (red for normal, blue for abnormal (aneurysm), green for ectasia, and yellow for normal validation samples) indicate the type of each dot. (a) and (b) show 2 normal validation samples; (c) and (d) show 2 ectasia samples. The clusters' positions can vary between runs of the t-SNE algorithm, as shown in (c) and (d), but the distances between features are faithfully represented.
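    Inference in such a metric-learning setup reduces to averaging a scan's per-slice features and assigning it to the nearest class cluster in the learned feature space. A sketch of that nearest-prototype step, with random features standing in for the network output (dimensions, cluster separation, and names are illustrative):

```python
import numpy as np

def classify_by_prototype(query_feats, support_feats, support_labels):
    """Nearest-prototype (metric-learning) inference.

    query_feats: (5, d) features of 5 slices from one scan, averaged first.
    support_feats / support_labels: labelled reference features per class.
    """
    query = query_feats.mean(axis=0)                    # average over slices
    labels = np.unique(support_labels)
    prototypes = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in labels]
    )
    dists = np.linalg.norm(prototypes - query, axis=1)  # Euclidean metric
    return labels[np.argmin(dists)]

# Toy 128-d feature space with separated 'normal' (0) and 'abnormal' (1) clusters
rng = np.random.default_rng(3)
d = 128
normal = rng.standard_normal((20, d))                   # centered at 0
abnormal = rng.standard_normal((20, d)) + 2.0           # shifted cluster
support = np.concatenate([normal, abnormal])
labels = np.array([0] * 20 + [1] * 20)

query = rng.standard_normal((5, d)) + 2.0               # abnormal-like scan
print(classify_by_prototype(query, support, labels))    # 1
```

Episodic training optimizes the feature extractor so that this simple distance rule works; the classifier itself has no trainable parameters.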
  • Deep learning-based reconstruction for 3D coronary MR angiography with a 3D variational neural network (3D-VNN)
    Ioannis Valasakis1, Haikun Qi1, Kerstin Hammernik2, Gastao Lima da Cruz1, Daniel Rueckert2,3, Claudia Prieto1, and Rene Botnar1
    1King's College London, London, United Kingdom, 2Technical University of Munich, Munich, Germany, 3Imperial College London, London, United Kingdom
    We propose a 3D variational neural network (3D-VNN) for the reconstruction of 3D whole-heart coronary MR angiography (CMRA), fully capturing the spatial redundancies in CMRA images.
    (A) The CMRA data acquisition and motion correction pipeline, using a VD-CASPR trajectory and performing translational motion correction estimated from 2D iNAVs. (B) CSMs and the undersampled k-space data are used as network inputs. The variational network structure for one gradient step: the filters k are learned for the real and imaginary planes, and a linear activation function combines the responses of the filters on those planes. The loss function is the MSE between the 3D-VNN reconstruction and the fully sampled reconstruction.
    CMRA reconstructions at 5-fold undersampling for two representative subjects. The 3D-VNN reconstruction is compared against CS, iterative SENSE and 3D CG MoDL-U-Net. Fully sampled and zero-filled reconstructions are also included for comparison.
  • Generalizability and Robustness of an Automated Deep Learning System for Cardiac MRI Plane Prescription
    Kevin Blansit1, Tara Retson1, Naeim Bahrami2, Phillip Young3,4, Christopher Francois3, Lewis Hahn1, Michael Horowitz1, Seth Kligerman1, and Albert Hsiao1
    1UC San Diego, La Jolla, CA, United States, 2GE Healthcare, Menlo Park, CA, United States, 3Mayo Clinic, Rochester, MN, United States, 4Mayo, Rochester, MN, United States
    An automated deep learning system is capable of prescribing cardiac imaging planes comparable to those acquired by dedicated cardiac technologists, and is robust across MRIs from multiple sites and field strengths. 
    Schematic of automated, multi-stage system for prescribing cardiac imaging planes comprised of DCNN modules. 1) AXLocNet to localize the mitral valve (MV) and apex from the axial stack to prescribe a vertical long-axis, 2) LAXLocNet to localize the MV and apex from long-axis views to prescribe a SAX stack, 3) SAXLocNet to localize the mitral valve, tricuspid valve, and aortic valve to prescribe the 4, 3, and 2-chamber views.

    Left: Comparison of plane angulation differences from A) 4-chamber, B) 3-chamber, or C) 2-chamber planes acquired by an MRI technologist (teal) or SAXLocNet (coral).

    Right: Exemplar vertical long-axis images displaying radiologist ground truth (yellow), technologist acquired (teal), and SAXLocNet predicted (red) A) 4-chamber, B) 3-chamber, or C) 2-chamber planes. Ground truth and SAXLocNet predicted localizations are shown as yellow and red dots, respectively.

  • Automatic multilabel segmentation of large cerebral vessels from MR angiography images using deep learning
    Félix Dumais1, Marco Perez Caceres1, Noémie Arès-Bruneau2, Christian Bocti2,3,4, and Kevin Whittingstall5
    1Médecine nucléaire et radiobiologie, Université de Sherbrooke, Sherbrooke, QC, Canada, 2Faculté de Médecine et des Sciences de la Santé, Université de Sherbrooke, Sherbrooke, QC, Canada, 3Clinique de la Mémoire et Centre de Recherche sur le Vieillissement, CIUSSS de l’Estrie-CHUS, Sherbrooke, QC, Canada, 4Service de Neurologie, Département de Médecine, CHUS, Sherbrooke, QC, Canada, 5Radiologie diagnostique, Université de Sherbrooke, Sherbrooke, QC, Canada
    Neural network performance is similar to that of trained annotators on large arteries. Multilabel brain artery segmentation is achieved by propagating CW annotations through the arterial system. The variability of the algorithm's diameter measurements is smaller than 1 voxel.
    Figure 2: Top row: a) Raw TOF, b) CW segmentation, c) Propagation of CW labels in the brain. Bottom row: 3D rendering of a full arterial segmentation. Right: Legend indicating artery labels with their corresponding colors.
    Figure 3: CW from a TOF-MRA raw (a) alongside its manual annotation (b) and the neural network prediction (c)
  • Probing the Feasibility and Performance of Super-Resolution Head and Neck MRA Using Deep Machine Learning
    Ioannis Koktzoglou1,2, Rong Huang1, William J Ankenbrandt1,2, Matthew T Walker1,2, and Robert R Edelman1,3
    1Radiology, NorthShore University HealthSystem, Evanston, IL, United States, 2University of Chicago Pritzker School of Medicine, Chicago, IL, United States, 3Northwestern University Feinberg School of Medicine, Chicago, IL, United States
    DNN-based SR reconstruction of 3D tsSOS-QISS MRA of the head and neck is feasible, and potentially enables scan time reductions of 2-fold and 4-fold for portraying the intracranial and extracranial arteries, respectively.
    Figure 2. Coronal MIP 3D tsSOS-QISS MRA images showing the impact of 3D SCRC SR DNN reconstruction on image quality for 2- to 4-fold reduction of axial spatial resolution with respect to ground truth data (left-most column) and input lower-resolution (LR) data (right-most upper panels). Insets show magnified views of the right middle cerebral artery. Note the markedly improved image quality and spatial resolution of the 3D SCRC SR DNN reconstruction with respect to the input LR volumes.
    Figure 1. Architectures of the deep neural networks used for super-resolution reconstruction. ReLU = rectified linear unit. Training batch sizes for networks (a) through (d) were 400, 80, 400 and 20, whereas the number of trainable parameters for networks were 540,073, 436,521, 185,857 and 556,801, respectively.
  • Fully automatic extraction of mitral valve annulus motion parameters on long axis CINE CMR using deep learning
    Maria Monzon1,2, Seung Su Yoon1,2, Carola Fischer2, Andreas Maier1, Jens Wetzl2, and Daniel Giese2
    1Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany, 2Magnetic Resonance, Siemens Healthcare GmbH, Erlangen, Germany
    Figure 1: Proposed CNN system. The long-axis CMR images are forwarded to the first CNN which localizes the region of interest. After cropping and rotation, the second CNN regresses the time-resolved mitral valve annulus landmarks from Gaussian heatmaps. Finally, the motion parameters are extracted.

    Figure 2: a) Feature extraction 2D residual and 3D convolution blocks. Each residual block consists of spatial convolution (CONV, 3x3), Batch Normalization (BN) and Leaky Rectified Linear Unit (LReLU) activation layers. The 3D block consists of double spatial and temporal CONV(3x3x3)-BN-LReLU operations. b) Localization CNN architecture based on a 2-D UNet with 3 encoder-decoder blocks. c) Landmark tracking fully convolutional architecture based on a 3-D UNet. For down-sampling, asymmetrical max-pooling layers were applied in the temporal and spatial dimensions.

  • Intracranial aneurysm segmentation using a deep convolutional neural network
    Miaoqi Zhang1, Qingchu Jin2, Mingzhu Fu1, Hanyu Wei1, and Rui Li1
    1Center for Biomedical Imaging Research, Department of Biomedical Engineering, Tsinghua University, Beijing, China, 2Johns Hopkins University, Baltimore, MD, United States
    We successfully segmented IAs from dual inputs (TOF-MRA and T1-VISTA) using the hyperdense net, achieving higher accuracy than with a single input.
    Figure 2. Four IA segmentation examples. Each row represents a patient in the test set. Six columns from left to right represent TOF-MRA, T1-VISTA, ground truth (GT), segmentation from the model with dual inputs, segmentation from the model with TOF-MRA alone and segmentation from the model with T1-VISTA alone.
    Figure 3. Aneurysm segmentation evaluation across different combinations of image inputs: dual inputs, TOF-MRA alone and T1-VISTA alone. (A) Sørensen–Dice coefficient (DSC); (B) sensitivity; (C) positive predictive value (PPV); and (D) specificity. Paired Student’s t-tests were performed with the notation *: P < 0.05; **: P < 0.005; ***: P < 0.0005.
  • AI-based Computer-Aided System for Cardiovascular Disease Evaluation (AI-CASCADE) for carotid tissue quantification
    Yin Guo1, Li Chen2, Dongxiang Xu3, Rui Li4, Xihai Zhao4, Thomas S. Hatsukami5, and Chun Yuan1,3
    1Bioengineering, University of Washington, Seattle, WA, United States, 2Electrical Engineering, University of Washington, Seattle, WA, United States, 3Radiology, University of Washington, Seattle, WA, United States, 4Biomedical Engineering, Tsinghua University, Beijing, China, 5Surgery, University of Washington, Seattle, WA, United States
    In this work, we developed AI-CASCADE, a fully automated solution for quantitative tissue characterization of carotid MRI, including artery localization, vessel wall segmentation, artery registration and plaque component segmentation.
    Fig 1. Workflow of AI-CASCADE
    Fig 3. Visualization of composition segmentation. Blue-Ca, yellow-LRNC, orange-IPH.
  • Myocardial T2-weighted black-blood imaging with a deep learning constrained Compressed SENSE reconstruction
    Kohei Yuda1, Takashige Yoshida1, Yuki Furukawa1, Masami Yoneyama2, Jihun Kwon2, Nobuo Kawauchi1, Johannes M. Peeters3, and Marc Van Cauteren3
    1Radiology, Tokyo Metropolitan Police Hospital, Nakano-ku, Tokyo, Japan, 2Philips Japan, Shinagawa-ku, Tokyo, Japan, 3Philips Healthcare, Best, Netherlands
    CS-AI reduced noise better than C-SENSE, and the depiction of the myocardium improved. Our results suggest that applying CS-AI may improve the image quality of myocardial T2W-BB imaging.

    (a) C-SENSE T2W-BB (b) CS-AI-T2W-BB

    Figure 1. Representative high resolution T2W-BB images using the C-SENSE and CS-AI reconstructions.

    (a) C-SENSE strong (b) CS-AI weak (c) CS-AI medium (d) CS-AI strong

    Figure 2. High resolution T2W-BB images reconstructed by C-SENSE with denoising level = strong (a) and CS-AI with denoising level = weak, medium, and strong for (b), (c), and (d), respectively.

  • Evaluation of a Deep Learning reconstruction framework for three-dimensional cardiac imaging
    Gaspar Delso1, Marc Lebel2, Suryanarayanan Kaushik2, Graeme McKinnon2, Paz Garre3, Pere Pujol3, Daniel Lorenzatti3, José T Ortiz3, Susanna Prat3, Adelina Doltra3, Rosario J Perea3, Teresa M Caralt3, Lluis Mont3, and Marta Sitges3
    1GE Healthcare, Barcelona, Spain, 2GE Healthcare, Waukesha, WI, United States, 3Hospital Clínic de Barcelona, Barcelona, Spain
    The Deep Learning framework was found to provide diagnostic information content equivalent to that of state-of-the-art 3D Cartesian reconstruction, with consistently superior image quality and a processing time compatible with clinical routine.
    Figure 1.- Long axis views of 3D MDE series, reconstructed with a standard 3D Cartesian method (left) and the proposed Deep Learning framework (right).
    Figure 4.- Top: Logarithmic joint histograms of the voxel-wise relative standard deviation in the reference Cartesian and DL reconstructions shown in Figure 1. Notice how most voxels are located below the identity line, indicating SNR improvement. Bottom: Line profile illustrating the preservation of structure edges with the regularized reconstruction.
  • Automated Segmentation of the Left Atrium from 3D Late Gadolinium Enhancement Imaging using Deep Learning
    Suvai Gunasekaran1, Julia Hwang1, Daming Shen1,2, Aggelos Katsaggelos1,3, Mohammed S.M. Elbaz1, Rod Passman4, and Daniel Kim1,2
    1Radiology, Northwestern University, Feinberg School of Medicine, Chicago, IL, United States, 2Biomedical Engineering, Northwestern University, Evanston, IL, United States, 3Electrical and Computer Engineering, Northwestern University, Evanston, IL, United States, 4Cardiology, Northwestern University, Feinberg School of Medicine, Chicago, IL, United States
    The proposed deep learning method segments the left atrial wall from 3D late gadolinium enhancement images in 2 s, with performance comparable to 16-30 min of manual segmentation.
    Figure 3. Examples of results for the different DL networks from four testing cases. The segmentation results generated by the 3D inputs are qualitatively better than those generated by the 2D inputs.
    Figure 2. The overall process of DL segmentation for (A) 2D and (B) 3D inputs. The LA LGE and reference masks extracted from manual contours were used as input and reference to train the DL network. For testing, the LA LGE images were fed into the trained network to get the DL segmented masks.
  • Respiratory motion in DENSE MRI: Introduction of a new motion model and use of deep learning for motion correction
    Mohamad Abdi1, Daniel S Weller1,2, and Frederick H Epstein1,3
    1Biomedical Engineering, University of Virginia, Charlottesville, VA, United States, 2Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, United States, 3Radiology, University of Virginia, Charlottesville, VA, United States
    We introduce a new motion model for displacement encoding with stimulated echoes imaging and a strategy for motion compensation in segmented acquisitions. A Deep learning method is developed and shown to be an effective solution to estimate the required parameters for motion compensation.
    Diagram of an encoder-type convolutional neural network to estimate linear and constant phase corrections for motion-corrupted DENSE, and its training using data generated with the DENSE simulator.
    Bloch-equation-based simulations show the various effects of free breathing during the acquisition of DENSE images (top row of images). Motion-compensation based on Equation 4 demonstrates the validity of the motion model and its ability to achieve motion correction if the phase correction values are known (bottom row of images).
  • Cardiac metabolism assessed by MR Spectroscopy to classify the diabetic and obese heart: a Random Forest and Bayesian network study
    Ina Hanninger1, Eylem Levelt2,3, Jennifer J Rayner2, Christopher T Rodgers2,4, Stefan Neubauer2, Vicente Grau1, Oliver J Rider2, and Ladislav Valkovic2,5
    1Oxford Institute of Biomedical Engineering, Oxford, United Kingdom, 2Radcliffe Department of Medicine, University of Oxford Centre for Clinical Magnetic Resonance Research, Oxford, United Kingdom, 3University of Leeds, Leeds, United Kingdom, 4Wolfson Brain Imaging Centre, Cambridge, United Kingdom, 5Slovak Academy of Sciences, Institute of Measurement Science, Bratislava, Slovakia
    Random Forest classifiers and Bayesian networks applied to MR spectroscopy measures suggest a high predictive impact of cardiac metabolism in classifying diabetic and obese patients, further implying a causal mechanism involving visceral fat, concentric LV remodeling and energy impairment.
    Figure 2(a,b,c): Bayesian networks learned through NOTEARS structure learning algorithm for each subgroup pair of the data. Each node represents a feature variable, and each directed edge encodes causal influence in the form of conditional probability dependence.
    Figure 1(a,b,c): SHAP value plots computed for each Random Forest classification, representing a rank of feature importances. The x-axis gives the SHAP value, i.e. the impact on the model output (for a positive classification). Red indicates higher feature values, while blue indicates lower values.
  • Differentiation between cardiac amyloidosis and hypertrophic cardiomyopathy by texture analysis of T2-weighted CMR imaging
    Shan Huang1, Yuan Li1, Ke Shi1, Yi Zhang1, Ying-kun Guo2, and Zhi-gang Yang1
    1Radiology, West China Hospital, Chengdu, China, 2Radiology, West China Second University Hospital, Chengdu, China
    Texture analysis was feasible and reproducible for detecting myocardial tissue alterations on T2-weighted images. Our radiomics model performed well in differentiating CA from HCM patients, comparably to LGE.

    Figure1 Feature selection and dimension reduction process.

    ICC: intraclass correlation coefficient, LASSO: the least absolute shrinkage and selection operator.
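    The LASSO step named above can be sketched as cyclic coordinate descent with soft-thresholding; the design matrix below is a toy stand-in for standardized texture features, not study data, and the penalty value is illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """LASSO via cyclic coordinate descent (columns assumed standardized)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j, then a 1D soft-threshold update
            residual = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ residual
            beta[j] = soft_threshold(rho, n * alpha) / col_sq[j]
    return beta

# Toy radiomics-style design: only 3 of 30 features carry signal
rng = np.random.default_rng(4)
X = rng.standard_normal((200, 30))
beta_true = np.zeros(30)
beta_true[[0, 5, 12]] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(200)

beta = lasso_cd(X, y, alpha=0.1)
print(np.flatnonzero(np.abs(beta) > 0.1).tolist())   # [0, 5, 12]
```

The L1 penalty drives uninformative coefficients exactly to zero, which is what makes LASSO usable as the dimension-reduction stage after the ICC filter.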

    Figure 2 Correlogram of the relationship among the selected texture features.

    Smaller and/or lighter circles represent lower correlation; conversely, larger and/or darker circles indicate higher correlation. GLRLM: gray level run length matrix; GLDM: gray level dependence matrix; GLCM: gray level co-occurrence matrix; LRE: Long Run Emphasis; SRE: Short Run Emphasis; SDHGLE: Small Dependence High Gray Level Emphasis; LDE: Large Dependence Emphasis; H: high wavelet filter; L: low wavelet filter.