Baris Turkbey, MD, FSAR
Section Chief of MRI
Section Chief of Artificial Intelligence
Molecular Imaging Branch
National Cancer Institute, NIH
Title: Advanced Prostate Cancer Imaging
- To discuss current status and limitations of localized prostate cancer diagnosis.
- To discuss the use of artificial intelligence in the diagnosis of localized prostate cancer.
- To discuss the use of molecular imaging in clinical prostate cancer management.
Dr. Turkbey obtained his medical degree from Hacettepe University in Ankara, Turkey in 2003. He completed his residency in Diagnostic and Interventional Radiology at Hacettepe University. He joined the Molecular Imaging Branch (MIB), National Cancer Institute, NIH in 2007. His main research areas are imaging of prostate cancer (multiparametric MRI, PET/CT), image-guided biopsy and treatment techniques (focal therapy, surgery, and radiation therapy) for prostate cancer, and artificial intelligence. Dr. Turkbey is a member of the Prostate Imaging Reporting & Data System (PI-RADS) Steering Committee. He is the Director of the Magnetic Resonance Imaging section and of the Artificial Intelligence Resource in the MIB.
In Person at the Clark Center S360 – Lunch will be provided!
Anthony Gatti, PhD
Postdoctoral Research Fellow
Department of Radiology
Wu Tsai Human Performance Alliance
Title: Towards Understanding Knee Health Using Automated MRI-Based Statistical Shape Models
Abstract: Knee injuries and pain are prevalent across all ages, with causes ranging from “anterior knee pain” in runners to osteoarthritis-related pain. Osteoarthritis pain is a particular problem because structural outcomes assessed on medical images often disagree with symptoms. Most studies trying to understand knee health and pain use simple biomarkers such as mean cartilage thickness. My talk will present an automated pipeline for quantifying the whole knee using statistical shape modeling. I will present a conventional statistical shape model as well as a novel approach that uses generative neural implicit representations. Both modeling approaches allow unsupervised identification of salient anatomic features. I will demonstrate how these features can be used to predict existing radiographic outcomes, patient demographics, and knee pain.
Liangqiong Qu, PhD
Postdoctoral Research Fellow
Department of Biomedical Data Sciences
Title: Distributed Deep Learning in Medical Imaging
Abstract: Distributed deep learning is an emerging research paradigm that enables collaborative training of deep learning models without sharing patient data. In this talk, we will first investigate the use of distributed deep learning to build medical imaging classification models in a real-world collaborative setting. We will then present several strategies to tackle two key challenges in distributed deep learning: data heterogeneity and the scarcity of high-quality labeled data.
Archana Venkataraman, PhD
Associate Professor of Electrical and Computer Engineering
Title: Biologically Inspired Deep Learning as a New Window into Brain Dysfunction
Abstract: Deep learning has disrupted nearly every major field of study from computer vision to genomics. The unparalleled success of these models has, in many cases, been fueled by an explosion of data. Millions of labeled images, thousands of annotated ICU admissions, and hundreds of hours of transcribed speech are common standards in the literature. Clinical neuroscience is a notable holdout to this trend. It is a field of unavoidably small datasets, massive patient variability, and complex (largely unknown) phenomena. My lab tackles these challenges across a spectrum of projects, from answering foundational neuroscientific questions to translational applications of neuroimaging data to exploratory directions for probing neural circuitry. One of our key strategies is to integrate a priori information about the brain and biology into the model design.
This talk will highlight two ongoing projects that epitomize this strategy. First, I will showcase an end-to-end deep learning framework that fuses neuroimaging, genetic, and phenotypic data, while maintaining interpretability of the extracted biomarkers. We use a learnable dropout layer to extract a sparse subset of predictive imaging features and a biologically informed deep network architecture for whole-genome analysis. Specifically, the network uses hierarchical graph convolutions that mimic the organization of a well-established gene ontology to track the convergence of genetic risk across biological pathways. Second, I will present a deep-generative hybrid model for epileptic seizure detection from scalp EEG. The latent variables in this model capture the spatiotemporal spread of a seizure; they are complemented by a nonparametric likelihood based on convolutional neural networks. I will also highlight our current end-to-end extensions of this work focused on seizure onset localization. Finally, I will conclude with exciting future directions for our work across the foundational, translational, and exploratory axes.