Harini Veeraraghavan, PhD
Associate Attending Computer Scientist
Department of Medical Physics
Memorial Sloan Kettering Cancer Center
Using AI for Longitudinal Tumor Response Monitoring and AI-Guided Cancer Treatments: From Lab to Clinic
Cancer patients are imaged with multiple imaging modalities as part of routine cancer care. However, the rich information available in these images is not fully exploited to better manage patient care through earlier intervention and more precisely targeted treatments. In this talk, I will present some of the new AI methodologies we have been developing to track tumor response from routinely acquired imaging, applied to image-guided radiation treatments using CT/cone-beam CT as well as MRI-guided precision treatments. I will also present demonstration studies of how AI-based automated segmentation and assessment of changes in tumors and healthy tissue can be used to detect treatment toxicities early, enabling clinicians to better manage cancer care. Finally, I will show how these methods have been put into routine clinical use to automate radiotherapy treatment planning at MSK.
Spyridon (Spyros) Bakas, PhD
Assistant Professor in the Department of Pathology,
Laboratory Medicine, and of Radiology
Center for Biomedical Image Computing and Analytics (CBICA)
Perelman School of Medicine
University of Pennsylvania
Title: Imaging Analytics for Neuro-Oncology: Towards Computational Diagnostics
Central nervous system (CNS) tumors present vastly heterogeneous histologic, molecular, and radiographic landscapes, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically acquired multi-parametric magnetic resonance imaging (mpMRI), assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (a) detection/segmentation of tumor subregions and (b) computer-aided diagnostic/prognostic/predictive modeling. This talk will touch upon example studies in this space and give an overview of the largest real-world federated learning study to date to detect brain tumor boundaries.
Daniel Marcus, PhD
Professor of Radiology
Director of the Neuroinformatics Research Group
Director of the Neuroimaging Informatics and Analysis Center
Developing and deploying computational tools for neuro-oncology applications involves a sequence of complex steps to identify appropriate images, assess image quality, and annotate, process, and otherwise prepare and manipulate data for analysis. We have implemented services and tools on the open-source XNAT informatics platform to automate much of this workflow, improving both its efficiency and effectiveness. Dr. Marcus will discuss this automated workflow and its implementation across a number of datasets and applications at Washington University.
Lena Maier-Hein, PhD
Head of Department, Computer Assisted Medical Interventions
Managing Director, Data Science and Digital Oncology
Managing Director, National Center for Tumor Diseases
German Cancer Research Center
Title: Missing the (Bench)mark?
Machine learning has begun to revolutionize almost all areas of health research. Success stories cover a wide variety of application fields, ranging from radiology and gastroenterology all the way to mental health. Strikingly, however, solutions that perform favorably in research generally do not translate well to clinical practice, and little attention is given to learning from failures. Focusing on biomedical image analysis as a key area of health-related machine learning, this talk will present pitfalls, caveats, and recommendations related to machine learning-based biomedical image analysis. As a particular highlight, it will cover as-yet-unpublished work on two key research questions related to biomedical image analysis competitions: (1) How can we best select performance metrics according to the characteristics of the driving biomedical question? (2) Why is the winner the best? The results have been compiled from the input of hundreds of image analysis researchers worldwide.
Lauren Oakden-Rayner, PhD
Director of Research in Medical Imaging
Royal Adelaide Hospital
Senior Research Fellow
Australian Institute for Machine Learning
Title: Medical AI Safety – A Clinical Perspective
Medical artificial intelligence is rapidly moving into clinics, particularly in imaging-based specialties such as radiology. This transition is producing many new challenges, as the regulatory environment has struggled to keep up and AI training for healthcare workers is virtually non-existent. Dr. Oakden-Rayner will provide a clinical safety perspective on medical AI, discuss a range of identified risks and potential harms, and explore possible solutions to mitigate these risks as this exciting field continues to develop.
Dr. Lauren Oakden-Rayner (FRANZCR, PhD) is the Director of Research in Medical Imaging at the Royal Adelaide Hospital and a senior research fellow at the Australian Institute for Machine Learning. Her research explores the safe translation of artificial intelligence technologies into clinical practice, from both a technical and a clinical perspective.
David Magnus, PhD
Thomas A. Raffin Professor of Medicine and Biomedical Ethics; Professor of Pediatrics, of Medicine, and, by courtesy, of Bioengineering
Director, Stanford Center for Biomedical Ethics
Associate Dean for Research
Title: Ethical Challenges in the Application of AI to Healthcare
This presentation will focus on three issues. First, applying AI to healthcare requires access to large data sets. Data acquisition and data sharing raise a number of challenging ethical issues, including challenges to traditional understandings of informed consent and the importance of diversity and inclusion in data sources. Second, I will briefly discuss the widely debated issues around justice and equity raised by AI in healthcare. Finally, I will discuss challenges with ethical oversight and governance, particularly in relation to the research development of AI. IRBs are prohibited from considering downstream social consequences and harms to individuals other than research participants when evaluating the harms and risks of research. This gap needs to be filled, particularly as dual uses of AI models are now recognized as a problem.
David Magnus, PhD is the Thomas A. Raffin Professor of Medicine and Biomedical Ethics and Professor of Pediatrics and Medicine and, by courtesy, of Bioengineering at Stanford University, where he is Director of the Stanford Center for Biomedical Ethics and an Associate Dean of Research. Magnus is a member of the Ethics Committee for Stanford Hospital. He is currently the Vice-Chair of the IRB for the NIH Precision Medicine Initiative ("All of Us"). He is the former President of the Association of Bioethics Program Directors and the Editor-in-Chief of the American Journal of Bioethics. He has published articles on a wide range of topics in bioethics, including research ethics, genetics, stem cell research, organ transplantation, end-of-life care, and patient communication. He was a member of the Secretary of Agriculture's Advisory Committee on Biotechnology in the 21st Century and currently serves on the California Human Stem Cell Research Advisory Committee. He is the principal editor of a collection of essays entitled "Who Owns Life?" (2002), and his publications have appeared in the New England Journal of Medicine, Science, Nature Biotechnology, and the British Medical Journal. He has appeared on many radio and television shows, including 60 Minutes, Good Morning America, The Today Show, CBS This Morning, Fox News Sunday, ABC World News, and NPR. In addition to his scholarly work, he has published opinion pieces in the Philadelphia Inquirer, the Chicago Tribune, the San Jose Mercury News, and the New Jersey Star-Ledger.
Polina Golland, PhD
Professor of Electrical Engineering and Computer Science
PI in the Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Title: Learning to Read X-Ray: Applications to Heart Failure Monitoring
Abstract: We propose and demonstrate a novel approach to training image classification models from large collections of images with limited labels. We take advantage of the availability of radiology reports to construct a joint multimodal embedding that serves as the basis for classification. We demonstrate the advantages of this approach in assessing pulmonary edema severity in congestive heart failure, the application that motivated the development of the method.
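The core idea of a joint image-report embedding can be illustrated with a toy contrastive objective. This is a generic InfoNCE-style sketch under our own assumptions, not the speaker's actual model; the function names and the temperature value are illustrative:

```python
import numpy as np

def normalize(x):
    """Project each row onto the unit sphere."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def joint_embedding_loss(img_emb, txt_emb, temperature=0.1):
    """Symmetric contrastive loss: the i-th image should sit closer to
    its own report than to any other report in the shared space."""
    img = normalize(img_emb)
    txt = normalize(txt_emb)
    logits = img @ txt.T / temperature      # pairwise cosine similarities
    labels = np.arange(len(img))            # i-th image pairs with i-th report

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()        # diagonal = true pairs

    # Average the image-to-report and report-to-image directions.
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

Once such a loss has been minimized, either modality can be embedded alone, which is what allows report supervision to train an image-only classifier.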
Baris Turkbey, MD, FSAR
Section Chief of MRI
Section Chief of Artificial Intelligence
Molecular Imaging Branch
National Cancer Institute, NIH
Title: Advanced Prostate Cancer Imaging
- To discuss current status and limitations of localized prostate cancer diagnosis.
- To discuss use of artificial intelligence in diagnosis of localized prostate cancer.
- To discuss use of molecular imaging in clinical prostate cancer management.
Dr. Turkbey obtained his medical degree from Hacettepe University in Ankara, Turkey in 2003 and completed his residency in Diagnostic and Interventional Radiology at Hacettepe University. He joined the Molecular Imaging Branch (MIB) of the National Cancer Institute, NIH in 2007. His main research areas are prostate cancer imaging (multiparametric MRI, PET/CT), image-guided biopsy and treatment techniques (focal therapy, surgery, and radiation therapy) for prostate cancer, and artificial intelligence. Dr. Turkbey is a member of the Prostate Imaging Reporting & Data System (PI-RADS) Steering Committee. He directs the Magnetic Resonance Imaging section and the Artificial Intelligence Resource in the MIB.
In Person at the Clark Center S360 – Lunch will be provided!
Anthony Gatti, PhD
Postdoctoral Research Fellow
Department of Radiology
Wu Tsai Human Performance Alliance
Title: Towards Understanding Knee Health Using Automated MRI-Based Statistical Shape Models
Abstract: Knee injuries and pain are prevalent across all ages, with causes ranging from "anterior knee pain" in runners to osteoarthritis-related pain. Osteoarthritis pain is a particular problem because structural outcomes assessed on medical images often disagree with symptoms. Most studies attempting to understand knee health and pain use simple biomarkers such as mean cartilage thickness. My talk will present an automated pipeline for quantifying the whole knee using statistical shape modeling. I will present a conventional statistical shape model as well as a novel approach that uses generative neural implicit representations. Both modeling approaches allow unsupervised identification of salient anatomic features. I will demonstrate how these features can be used to predict existing radiographic outcomes, patient demographics, and knee pain.
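A conventional statistical shape model of the kind mentioned above (a point-distribution model: mean shape plus principal modes of variation) can be sketched in a few lines. The data and function names here are illustrative assumptions, not the speaker's pipeline, which operates on aligned MRI-derived surfaces:

```python
import numpy as np

def build_ssm(shapes, n_modes=2):
    """Fit a point-distribution shape model to aligned training shapes.

    shapes: array of shape (n_shapes, n_points, n_dims).
    Returns the mean shape (flattened), the top eigen-shapes, and the
    variance each mode explains.
    """
    X = shapes.reshape(len(shapes), -1)     # flatten (points, dims) per shape
    mean = X.mean(axis=0)
    # PCA via SVD of the centered data matrix.
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                    # principal modes of variation
    var = (S[:n_modes] ** 2) / (len(shapes) - 1)
    return mean, modes, var

def reconstruct(mean, modes, coeffs):
    """Synthesize shapes from mode coefficients (rows of coeffs)."""
    return mean + coeffs @ modes
```

Projecting a new knee onto the modes yields a compact coefficient vector, which is the kind of unsupervised feature the abstract describes feeding into downstream prediction of outcomes, demographics, and pain.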
Liangqiong Qu, PhD
Postdoctoral Research Fellow
Department of Biomedical Data Sciences
Title: Distributed Deep Learning in Medical Imaging
Abstract: Distributed deep learning is an emerging research paradigm that enables collaborative training of deep learning models without sharing patient data. In this talk, we will first investigate the use of distributed deep learning to build medical imaging classification models in a real-world collaborative setting. We will then present several strategies for tackling two challenges in distributed deep learning: data heterogeneity and the scarcity of quality labeled data.
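A minimal sketch of one widely used distributed scheme, federated averaging, on a toy least-squares model: each client takes gradient steps on its own data, and the server averages only the returned weights, never the data. The names and the linear model are assumptions for illustration; real medical-imaging deployments train deep networks and typically add secure aggregation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=5):
    """One client's local gradient descent on a least-squares objective.
    Raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*mean((Xw - y)^2)
        w -= lr * grad
    return w

def fed_avg(global_w, clients, rounds=20):
    """Federated averaging: per round, every client trains locally from
    the current global model, and the server averages the results."""
    w = global_w
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in clients]
        w = np.mean(updates, axis=0)        # only model weights are shared
    return w
```

With identically distributed client data this converges to the centralized solution; the heterogeneity strategies mentioned in the abstract address the realistic case where client distributions differ.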
Archana Venkataraman, PhD
Associate Professor of Electrical and Computer Engineering
Title: Biologically Inspired Deep Learning as a New Window into Brain Dysfunction
Abstract: Deep learning has disrupted nearly every major field of study from computer vision to genomics. The unparalleled success of these models has, in many cases, been fueled by an explosion of data. Millions of labeled images, thousands of annotated ICU admissions, and hundreds of hours of transcribed speech are common standards in the literature. Clinical neuroscience is a notable exception to this trend. It is a field of unavoidably small datasets, massive patient variability, and complex (largely unknown) phenomena. My lab tackles these challenges across a spectrum of projects, from answering foundational neuroscientific questions to translational applications of neuroimaging data to exploratory directions for probing neural circuitry. One of our key strategies is to integrate a priori information about the brain and biology into the model design.
This talk will highlight two ongoing projects that epitomize this strategy. First, I will showcase an end-to-end deep learning framework that fuses neuroimaging, genetic, and phenotypic data while maintaining interpretability of the extracted biomarkers. We use a learnable dropout layer to extract a sparse subset of predictive imaging features and a biologically informed deep network architecture for whole-genome analysis. Specifically, the network uses hierarchical graph convolutions that mimic the organization of a well-established gene ontology to track the convergence of genetic risk across biological pathways. Second, I will present a deep-generative hybrid model for epileptic seizure detection from scalp EEG. The latent variables in this model capture the spatiotemporal spread of a seizure; they are complemented by a nonparametric likelihood based on convolutional neural networks. I will also highlight our current end-to-end extensions of this work focused on seizure onset localization. Finally, I will conclude with exciting future directions for our work across the foundational, translational, and exploratory axes.