SMRT Signals • November 2017 • Vol.6 Issue 3

INFORMATION FOR YOU:

Michael E. Moseley, Ph.D.
Professor of Radiology, Stanford University

Ch- Ch- Ch- Changes!

In the reality of MR, it's never safe to say that everything is hunky dory. It's all in flux; waves of ideas rush past us daily. Ever under pressure, our old-school printed journals are now on-line and interactive. So too, SMRT Signals is morphing into a real-time, on-line, hip-hop, be-bop resource, orbiting the SMRT Facebook pages. Yeah, I still have stacks of the old copies, and yeah, it's tough to forget old friends. However, it's satisfying to know that all that knowledge and collective experience remains at our fingertips. It's not gone. Consider it a generation of wisdom and fame to pass along to absolute beginners, new members, and our next-gen SMRT heroes as they dodge the scary monsters in the labyrinth of MR.

Flux is never smooth. We will look back at this year 2017 and recall that (sitting in our tin cans we call magnets) we were witnessing a new Singularity. Not since the dawn of functional neuroimaging in the early 1990s (with the swarm of BOLD and diffusion imaging concepts) had the timeline of MR been perturbed so quickly and with such a positive response. Thinking historically, MR has always been a process of perturbation and response. We "pulse" protons in certain ways and listen for echoes that reflect the protons' microenvironment. Without the applied RF, there is no signal. Relaxation can only be observed following excitation. That was the first Singularity, and the birth of our field.

Other singular Events followed. The first Oxford superconducting magnets launched clinical NMR (and later, MR, since NucMed couldn't tolerate the new competition/confusion and the public didn't need more "nuclear"), the "birdcage" RF coil, gradient-recalled echoes (GRE) and the astonishing first CINE loops seen at the RSNA, self-shielded gradients (thank you, Dr. Roemer), arrays of multi-channel RF receive coils, BOLD (one of the best acronyms ever), proton diffusion anisotropy (thank you, Safeway celery), simultaneous multislice (or Multiband), and now "Deep Learning."

How could you have felt the first ripples of the present Deep Learning disturbance? By following the SMRT's educational materials, the on-line (listserv and Facebook) resources, and the annual, regional, and chapter meetings. And by reading this.

Looking back, all the signs of the changes were apparent; the first clue of the Event Horizon that is now upon us was the rapid acceptance of MR Fingerprinting (T. Christen and M. Griswold, for example, are early pioneers). MRF was about collecting an array of images for a book of magic called The Dictionary, which contained all the possible solutions to any collection of MR images. Think it, image it, model it, classify it, and then use the Dictionary to quantitatively predict any number of tissue or perfusion contrasts. Fingerprinting today is progressing rapidly and is the topic of recent workshops within the ISMRM (October 2017 in Cleveland, for example). The key to MRF was to vary as many sequence parameters as possible to build the biggest, coolest Dictionary ever. We should have seen this coming; synthetic images are real, and they are here now. In fact, it is already impossible to judge whether an MR image was actually acquired or simply created from a Dictionary or a trained algorithm.
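
For readers who want something more concrete than a book of magic, here is a minimal Python/NumPy sketch of the dictionary-matching idea. The acquisition schedule, parameter ranges, and toy signal model below are illustrative assumptions; a real MRF dictionary comes from a Bloch or EPG simulation of the actual pulse sequence. The matching step, though, is the essence: normalize every Dictionary entry and pick the one with the largest inner product against the measured fingerprint.

```python
import numpy as np

# Pseudo-random acquisition schedule (a toy stand-in for a real MRF sequence).
rng = np.random.default_rng(0)
n_frames = 500
flip = np.deg2rad(rng.uniform(5, 60, n_frames))   # flip angles (rad)
tr = rng.uniform(10e-3, 14e-3, n_frames)          # repetition times (s)
t = np.cumsum(tr)                                  # time of each readout (s)

def fingerprint(t1, t2):
    """Toy signal evolution: saturation-recovery weighting times T2 decay,
    modulated by the varying flip angle. A real Dictionary entry would come
    from a full Bloch/EPG simulation of the actual sequence."""
    return np.sin(flip) * (1 - np.exp(-t / t1)) * np.exp(-tr / t2)

# Build the Dictionary over a grid of candidate (T1, T2) pairs (seconds).
t1_grid = np.linspace(0.3, 3.0, 60)
t2_grid = np.linspace(0.02, 0.3, 40)
entries = [(t1, t2) for t1 in t1_grid for t2 in t2_grid if t2 < t1]
D = np.array([fingerprint(t1, t2) for t1, t2 in entries])
D /= np.linalg.norm(D, axis=1, keepdims=True)      # normalize each entry

# "Acquire" a noisy fingerprint from an unknown tissue and match it.
true_t1, true_t2 = 1.2, 0.09
measured = fingerprint(true_t1, true_t2) + 0.02 * rng.standard_normal(n_frames)
measured /= np.linalg.norm(measured)

best = np.argmax(D @ measured)                     # maximum inner product
print("matched (T1, T2):", entries[best], "true:", (true_t1, true_t2))
```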

While wacky wicked-fast sequences were busy adding "fingerprint" images to the all-knowing Dictionary, novel MR scan packages (MAGiC is one example) are now appearing that can synthesize images from a known array of sequence parameters. Why do this? Because image synthesis can reduce total exam times or alter tissue contrast after the patient's exam. More focused perhaps than MR Fingerprinting, this present Event has made us more comfortable with creating synthetic images. Think of it as how "portrait mode" works on our new phones; any desired image contrast can be dialed in at any time later. Quick reality quiz: which company (later sold to Toshiba in 1989) made early (N)MR scanners that featured an image synthesis mode (which, sadly, was not well received)? Ask your local SMRT Policy Board member for the answer…
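
The synthesis idea itself is old-school signal modeling. As a hedged illustration (the maps, values, and simple spin-echo model below are assumptions for demonstration, not any vendor's actual implementation), once quantitative PD, T1, and T2 maps are in hand, any TR/TE contrast can be computed after the patient has left the scanner:

```python
import numpy as np

def synthesize_spin_echo(pd, t1, t2, tr, te):
    """Classic spin-echo signal model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    pd, t1, t2 are per-voxel maps (T1/T2 in ms); TR/TE are chosen afterwards."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Toy quantitative maps for a two-"tissue" phantom (values are illustrative).
shape = (64, 64)
pd = np.ones(shape)
t1 = np.full(shape, 900.0);  t1[:, 32:] = 4000.0   # ms (tissue-like vs CSF-like)
t2 = np.full(shape, 80.0);   t2[:, 32:] = 2000.0   # ms

# Any contrast can be dialed in after the exam simply by changing TR/TE.
t1_weighted = synthesize_spin_echo(pd, t1, t2, tr=500.0,  te=15.0)
t2_weighted = synthesize_spin_echo(pd, t1, t2, tr=4000.0, te=100.0)
```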

Like the gravitational wave recently detected on August 14, the latest MR Singularity is here. Whispers about a disruptive "Deep Learning" began at the 2017 SMRT Honolulu meeting. With millions of on-line images widely available to anyone for "training", bot-like algorithms armed with a flood of (relatively) inexpensive graphics processing units (GPUs) were consuming every data array or image available. Once trained, the bots were tested against key "ground truth" real-image data, and from that, the training resumed. Deep Learning is now creating clinical images free of noise and artifacts with huge warp-factor accelerations. Early believers were caught in the wonderment of it all; the power was intoxicating. Deep Learning seemingly has no bounds.
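
For the curious, the "train against ground truth" loop is conceptually simple. Below is a minimal, hypothetical PyTorch sketch of supervised denoising; the tiny network, random stand-in images, and additive noise model are placeholders chosen for illustration, not anything a vendor actually ships:

```python
import torch
import torch.nn as nn

# Tiny denoising CNN: a stand-in for the far deeper networks used in practice.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic "ground truth" images stand in for a curated clinical training set.
clean = torch.rand(64, 1, 64, 64)

for epoch in range(5):
    noisy = clean + 0.1 * torch.randn_like(clean)   # simulate a noisy acquisition
    pred = model(noisy)                             # bot's attempt at a clean image
    loss = loss_fn(pred, clean)                     # compare against ground truth
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: MSE = {loss.item():.4f}")
```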

Then the scary realization hit home by late 2017: DL algorithms and workflows were performing better than anyone had planned; the algorithms should not have been that good. No one knew why. The storm of DL papers and reports began: 3T images were being used to produce 7T images, 1.5T to 3T, MR scans could predict PET scans, and so on. Crazy stuff. Highly detailed DTI tractography was possible from only a few diffusion directions. The Horizon expanded; low-dose PET, CT, and MR improvements became everyday routine. Start-ups were spun off like new galaxies, all from formative hot ideas. MR was rapidly drifting off course into unknown deep learning dimensions. And it is still accelerating.

By the ISMRM and SMRT Paris meetings in mid-2018, entire sessions were needed to cover the expanding Horizon of DL, which quickly accounted for over 25% of all posters and presentations. By the time the Paris meeting was over, Deep Learning had infected nearly all forms of MR, not just image reconstruction. DL was combining with Fingerprinting. The Dictionary absorbed it all: every pulse sequence, every echo, all of k-space was learnable.

MR physicists were quickly becoming irrelevant, as heralded by the jaw-dropping AUTOMAP function (Zhu, Liu, Rosen, et al., "Image reconstruction by domain-transform manifold learning", arXiv.org). AUTOMAP fed on everything in the vast ImageNet database (image-net.org). Images of puppies, cats, faces, buildings, and even tomographic images of all kinds became training fodder. Soon, AUTOMAP didn't need a particular MR pulse sequence to run, didn't need Fourier transforms, understood complex image phase data from multi-coil arrays, and could even operate on other imaging modalities such as CT or PET. AUTOMAP even spawned its own language of "hyperparameters", "low-dimensional manifolds", and the "corpus of training data".
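
For the brave, here is a loose sketch of the AUTOMAP concept: fully connected layers learn the sensor-to-image domain transform directly from training data (no Fourier transform is hard-coded), and convolutional layers then refine the result. The layer sizes and activations below are illustrative guesses written in PyTorch, not the published architecture:

```python
import torch
import torch.nn as nn

class AutomapSketch(nn.Module):
    """Loose sketch of the AUTOMAP idea: the domain transform is learned by
    fully connected layers, then the reshaped image is cleaned up by
    convolutions. Dimensions here are illustrative, not the published ones."""
    def __init__(self, n=64):
        super().__init__()
        self.n = n
        d = n * n
        self.fc = nn.Sequential(
            nn.Linear(2 * d, d), nn.Tanh(),    # 2*d inputs: real + imaginary k-space
            nn.Linear(d, d), nn.Tanh(),
        )
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 7, padding=3),
        )

    def forward(self, kspace):                 # kspace: (batch, 2, n, n)
        x = self.fc(kspace.flatten(1))         # learned domain transform
        x = x.view(-1, 1, self.n, self.n)      # reshape onto the image grid
        return self.conv(x)                    # convolutional refinement

# One forward pass on random "k-space" data, just to show the shapes.
model = AutomapSketch(n=64)
recon = model(torch.randn(2, 2, 64, 64))
print(recon.shape)                             # torch.Size([2, 1, 64, 64])
```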

Nothing could escape Deep Learning. MR vendors, in attempts to harness the force of "Big Data" AI, charted product roadmaps through Paris that offered autonomous accelerated scanning with noise and motion immunity and automated anatomy and pathology detection, which, when coupled with a smart PACS patient-planning resource, made human interaction almost an afterthought. Amusing at first but then Pennywise-scary, patients and hospitals were being offered cheap access to Deep Learning ("For a dollar, an AI will examine your medical scan", Engadget, 10/27/17). Companies like Zebra Medical Vision (www.zebra-med.com) joined data-mining giants such as Google, Facebook, and Microsoft in applying DL to medical imaging. It's beyond MR; it's everywhere and it's watching. And learning.

I'm still puzzled, though. Like the "black monolith" found on the moon in 2001: A Space Odyssey, is Deep Learning teaching us how to benevolently leap MR-reconstruction barriers, or warning us of unknown limits? Embrace or fear? Unmistakably, MR is changing. The old protocols will soon be gone. The old "workstation" and all the old-school "functional tools" will soon be dead. Swept away.

During all of this, however, what happened to the MR Technologists and Radiographers? Luckily, the SMRT had been made aware of the onset of the Event in the dying last breaths of Signals. Technologists and Radiographers knew that, as all-knowing as the MR machinery and image-bots had become, AUTOMAP and all the divergent vendor roadmaps could not interact with patients. Safety is not automatic. The SMRT is trained to provide the critical service of the patient experience; the bots cannot. The 2017-2018 theme of the SMRT's human-to-human 'World of Knowledge' became known as the Empowerment.

Like Major Tom, we will soon be floating in an empty MR space littered with the changes in sequence automation, big intelligence, and a staggering explosion of GPU cycles. The foundation of our Society is not lost, however; the many years of MR wisdom and patient experiences remain safe within the 'World of Knowledge.' Remember the Signals and the Empowerment within.

 

 
 
Signals is a publication produced by the International Society for Magnetic Resonance in Medicine for the benefit of the SMRT membership and those individuals and organizations that support the educational programs and professional advancement of the SMRT and its members. The newsletter is the compilation of editor, Julie Strandt-Peay, BSM, RT (R)(MR) FSMRT, the leadership of the SMRT and the staff in the ISMRM Central Office with contributions from members and invited participants.
Society for MR Radiographers & Technologists
A Section of the ISMRM
2300 Clayton Road, Suite 620
Concord, CA, 94520 USA
Tel: +1 925-825-SMRT (7678)
Fax: +1 510-841-2340
smrt@ismrm.org