4248
Deep Learning-based MR-only Radiation Therapy Planning for Head & Neck and Pelvis
Florian Wiesinger1, Sandeep Kaushik1, Mathias Engström2, Mika Vogel1, Graeme McKinnon3, Maelene Lohezic1, Vanda Czipczer4, Bernadett Kolozsvári4, Borbála Deák-Karancsi4, Renáta Czabány4, Bence Gyalai4, Dorottya Hajnal4, Zsófia Karancsi4, Steven F. Petit5, Juan A. Hernandez Tamames5, Marta E. Capala5, Gerda M. Verduijn5, Jean-Paul Kleijnen5, Hazel Mccallum6, Ross Maxwell6, Jonathan J. Wyatt6, Rachel Pearson6, Katalin Hideghéty7, Emőke Borzasi7, Zsófia Együd7, Renáta Kószó7, Viktor Paczona7, Zoltán Végváry7, Suryanarayanan Kaushik3, Xinzeng Wang3, Cristina Cozzini1, and László Ruskó4
1GE Healthcare, Munich, Germany, 2GE Healthcare, Stockholm, Sweden, 3GE Healthcare, Waukesha, WI, United States, 4GE Healthcare, Budapest, Hungary, 5Erasmus MC, Rotterdam, Netherlands, 6Newcastle University, Newcastle, United Kingdom, 7University of Szeged, Szeged, Hungary
Deep Learning provides powerful tools to address unsolved problems and unmet needs of MR-only Radiation Therapy Planning (RTP), namely synthetic CT conversion (required for accurate dose calculation) and time-consuming organ-at-risk (OAR) delineation.
Figure 1: In-phase ZTE without (top row) and with 2× FOV extension plus DL image reconstruction (2nd row) for head & neck, together with the corresponding DL-derived synthetic CT (3rd row) and true CT (bottom row).
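The abstract does not specify the network used for the ZTE-to-synthetic-CT conversion shown in Figure 1. The following is only a minimal illustrative sketch, assuming a small 2D U-Net-style regressor trained with an L1 loss to map a single-channel ZTE slice to Hounsfield units; the network name, layer sizes, slice shape, and loss choice are assumptions, not the authors' method.

```python
# Minimal sketch (assumption): a small 2D encoder-decoder that regresses
# Hounsfield units from a single-channel ZTE slice. The actual network,
# loss, and training data used in the abstract are not specified here.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class SyntheticCTNet(nn.Module):
    """U-Net-style regressor: ZTE slice in, synthetic-CT slice (HU) out."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # single output channel: HU values

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example forward pass on a dummy 256x256 slice; L1 loss against a co-registered CT target.
model = SyntheticCTNet()
zte = torch.randn(1, 1, 256, 256)   # hypothetical ZTE slice
ct = torch.randn(1, 1, 256, 256)    # hypothetical aligned CT target
loss = nn.L1Loss()(model(zte), ct)
```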
Figure 3: 2D T2 PROPELLER-based automated OAR segmentation in the head & neck (i.e., brain, brainstem, eyes, lens, optic nerves, chiasm, pituitary gland, cochlea, parotid glands, mandible, oral cavity, submandibular glands, larynx, spinal cord, and body contour).
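Likewise, the segmentation model behind Figure 3 is not described in this section. The sketch below only illustrates, under stated assumptions, how a multi-class OAR label map covering the listed structures could be trained with a soft Dice loss; the one-layer "backbone" is a hypothetical placeholder for whatever 2D segmentation network is actually used.

```python
# Minimal sketch (assumption): multi-class OAR segmentation from a T2 PROPELLER
# slice, with a soft Dice loss averaged over the labels listed in the caption.
# Architecture and training details are not taken from the abstract; any 2D
# segmentation backbone could be substituted for the placeholder conv layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

OAR_LABELS = [
    "background", "brain", "brainstem", "eyes", "lens", "optic_nerves",
    "chiasm", "pituitary_gland", "cochlea", "parotid_glands", "mandible",
    "oral_cavity", "submandibular_glands", "larynx", "spinal_cord", "body",
]

def soft_dice_loss(logits, target, eps=1e-5):
    """Soft Dice averaged over classes; target holds integer label indices."""
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * onehot).sum(dims)
    cardinality = probs.sum(dims) + onehot.sum(dims)
    return 1.0 - ((2.0 * intersection + eps) / (cardinality + eps)).mean()

# Hypothetical backbone: a single conv layer stands in for a real network.
backbone = nn.Conv2d(1, len(OAR_LABELS), kernel_size=3, padding=1)
t2_slice = torch.randn(1, 1, 256, 256)                     # hypothetical T2 PROPELLER slice
labels = torch.randint(0, len(OAR_LABELS), (1, 256, 256))  # hypothetical label map
loss = soft_dice_loss(backbone(t2_slice), labels)
```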