Lauren Oakden-Rayner, PhD
Director of Research in Medical Imaging
Royal Adelaide Hospital
Senior Research Fellow
Australian Institute for Machine Learning
Title: Medical AI Safety – A Clinical Perspective
Abstract:
Medical artificial intelligence is rapidly moving into clinics, particularly in imaging-based specialties such as radiology. This transition is producing many new challenges, as the regulatory environment has struggled to keep up and AI training for healthcare workers is virtually non-existent. Dr. Oakden-Rayner will provide a clinical safety perspective on medical AI, discuss a range of identified risks and potential harms, and outline possible solutions to mitigate these risks as this exciting field continues to develop.
Bio:
Dr. Lauren Oakden-Rayner (FRANZCR, PhD) is the Director of Research in Medical Imaging at the Royal Adelaide Hospital and a Senior Research Fellow at the Australian Institute for Machine Learning. Her research explores the safe translation of artificial intelligence technologies into clinical practice, from both a technical and a clinical perspective.
David Magnus, PhD
Thomas A. Raffin Professor of Medicine and Biomedical Ethics and Professor of Pediatrics, Medicine, and, by courtesy, Bioengineering
Director, Stanford Center for Biomedical Ethics
Associate Dean for Research
Stanford University
Title: Ethical Challenges in the Application of AI to Healthcare
Abstract:
This presentation will focus on three issues. First, applying AI to healthcare requires access to large data sets. Data acquisition and data sharing raise a number of challenging ethical issues, including challenges to traditional understandings of informed consent and the importance of diversity and inclusion in data sources. Second, I will briefly review the widely discussed issues of justice and equity raised by AI in healthcare. Finally, I will discuss challenges with ethical oversight and governance, particularly in relation to the research and development of AI. IRBs are prohibited from considering downstream social consequences and harms to individuals other than research participants when evaluating the harms and risks of research. This gap needs to be filled, particularly as dual uses of AI models are now recognized as a problem.
Bio:
David Magnus, PhD is the Thomas A. Raffin Professor of Medicine and Biomedical Ethics and Professor of Pediatrics, Medicine, and, by courtesy, Bioengineering at Stanford University, where he is Director of the Stanford Center for Biomedical Ethics and an Associate Dean for Research. Magnus is a member of the Ethics Committee for the Stanford Hospital. He is currently the Vice-Chair of the IRB for the NIH Precision Medicine Initiative ("All of Us"). He is the former President of the Association of Bioethics Program Directors and is the Editor-in-Chief of the American Journal of Bioethics. He has published articles on a wide range of topics in bioethics, including research ethics, genetics, stem cell research, organ transplantation, end of life, and patient communication. He was a member of the Secretary of Agriculture's Advisory Committee on Biotechnology in the 21st Century and currently serves on the California Human Stem Cell Research Advisory Committee. He is the principal editor of a collection of essays entitled "Who Owns Life?" (2002), and his publications have appeared in the New England Journal of Medicine, Science, Nature Biotechnology, and the British Medical Journal. He has appeared on many radio and television shows, including 60 Minutes, Good Morning America, The Today Show, CBS This Morning, Fox News Sunday, ABC World News, and NPR. In addition to his scholarly work, he has published opinion pieces in the Philadelphia Inquirer, the Chicago Tribune, the San Jose Mercury News, and the New Jersey Star-Ledger.
Polina Golland, PhD
Professor of Electrical Engineering and Computer Science
PI in the Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Title: Learning to Read X-Ray: Applications to Heart Failure Monitoring
Abstract: We propose and demonstrate a novel approach to training image classification models based on large collections of images with limited labels. We take advantage of the availability of radiology reports to construct a joint multimodal embedding that serves as the basis for classification. We demonstrate the advantages of this approach in application to the assessment of pulmonary edema severity in congestive heart failure, which motivated the development of the method.
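For context, a joint image-report embedding of this kind is often trained contrastively, so that matched X-ray/report pairs land close together in a shared space. The sketch below is a minimal, generic illustration of that idea rather than the speaker's implementation: the small encoders, vocabulary size, temperature, and random stand-in data are all assumptions made for brevity.

```python
# Minimal sketch of a joint image-report embedding trained contrastively.
# The encoders, vocabulary size, and hyperparameters are illustrative
# assumptions, not the method presented in the talk.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Small CNN mapping a grayscale X-ray to a d-dimensional embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class ReportEncoder(nn.Module):
    """Bag-of-words report encoder; a transformer could be swapped in."""
    def __init__(self, vocab=5000, dim=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)
        self.proj = nn.Linear(dim, dim)
    def forward(self, tokens):
        return F.normalize(self.proj(self.emb(tokens)), dim=-1)

def contrastive_loss(img_z, txt_z, temperature=0.07):
    """Matched image/report pairs are pulled together, mismatched apart."""
    logits = img_z @ txt_z.t() / temperature
    targets = torch.arange(len(img_z))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy forward/backward pass with random data standing in for chest X-rays
# and tokenized radiology reports.
images = torch.randn(8, 1, 224, 224)
reports = torch.randint(0, 5000, (8, 40))
img_enc, txt_enc = ImageEncoder(), ReportEncoder()
loss = contrastive_loss(img_enc(images), txt_enc(reports))
loss.backward()
```

Once such an embedding is trained, a lightweight classifier fit on the image embeddings of the small labeled subset (for example, edema severity grades) can provide the final prediction, which is the sense in which report text compensates for limited labels.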
Baris Turkbey, MD, FSAR
Senior Clinician
Section Chief of MRI
Section Chief of Artificial Intelligence
Molecular Imaging Branch
National Cancer Institute, NIH
Title: Advanced Prostate Cancer Imaging
Talk Objectives:
- To discuss current status and limitations of localized prostate cancer diagnosis.
- To discuss use of artificial intelligence in diagnosis of localized prostate cancer.
- To discuss use of molecular imaging in clinical prostate cancer management.
Bio:
Dr. Turkbey obtained his medical degree from Hacettepe University in Ankara, Turkey, in 2003. He completed his residency in Diagnostic and Interventional Radiology at Hacettepe University. He joined the Molecular Imaging Branch (MIB) of the National Cancer Institute, NIH, in 2007. His main research areas are imaging of prostate cancer (multiparametric MRI, PET/CT), image-guided biopsy and treatment techniques (focal therapy, surgery, and radiation therapy) for prostate cancer, and artificial intelligence. Dr. Turkbey is a member of the Prostate Imaging Reporting & Data System (PI-RADS) Steering Committee. He directs the Magnetic Resonance Imaging section and the Artificial Intelligence Resource in the MIB.
In Person at the Clark Center S360 – Lunch will be provided!
Zoom: https://stanford.zoom.us/j/99496515255?pwd=MHlXbXM2WXJULzZwemk1WjJHNFZOdz09
Anthony Gatti, PhD
Postdoctoral Research Fellow
Department of Radiology
Wu Tsai Human Performance Alliance
Stanford University
Title: Towards Understanding Knee Health Using Automated MRI-Based Statistical Shape Models
Abstract: Knee injuries and pain are prevalent across all ages, with causes ranging from "anterior knee pain" in runners to osteoarthritis-related pain. Osteoarthritis pain is a particular problem because structural outcomes assessed on medical images often disagree with symptoms. Most studies trying to understand knee health and pain use simple biomarkers such as mean cartilage thickness. My talk will present an automated pipeline for quantifying the whole knee using statistical shape modeling. I will present a conventional statistical shape model as well as a novel approach that uses generative neural implicit representations. Both modeling approaches allow unsupervised identification of salient anatomic features. I will demonstrate how these features can be used to predict existing radiographic outcomes, patient demographics, and knee pain.
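As background, a conventional statistical shape model typically applies PCA to corresponding surface points after alignment, and the resulting mode scores serve as compact, unsupervised shape features. The following sketch illustrates that baseline only; the landmark counts and synthetic data are placeholders, and the generative neural implicit variant mentioned above is not shown.

```python
# Minimal sketch of a PCA-based statistical shape model over corresponding
# bone/cartilage surface points. The point counts and synthetic data are
# placeholders, not the pipeline presented in the talk.
import numpy as np

def fit_ssm(shapes):
    """shapes: (n_subjects, n_points, 3) landmarks, already aligned
    (e.g., by Procrustes). Returns mean shape, PCA modes, and variances."""
    n = shapes.shape[0]
    X = shapes.reshape(n, -1)                   # flatten to (n, 3 * n_points)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt, S**2 / (n - 1)

def shape_scores(shapes, mean, modes, k=5):
    """Project each knee onto the first k modes; these low-dimensional
    scores act as the salient anatomic features used for prediction."""
    X = shapes.reshape(shapes.shape[0], -1) - mean
    return X @ modes[:k].T

# Toy data: 20 knees, 500 surface points each.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 500, 3))
mean, modes, var = fit_ssm(shapes)
scores = shape_scores(shapes, mean, modes, k=5)
print(scores.shape)  # (20, 5) per-knee shape features
```

The per-knee scores could then be passed to an ordinary regression or classification model to relate shape to radiographic grade, demographics, or reported pain.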
Liangqiong Qu, PhD
Postdoctoral Research Fellow
Department of Biomedical Data Sciences
Stanford University
Title: Distributed Deep Learning in Medical Imaging
Abstract: Distributed deep learning is an emerging research paradigm for enabling collaboratively training deep learning models without sharing patient data.
In this talk, we will first examine the use of distributed deep learning to build medical imaging classification models in a real-world collaborative setting.
We will then present several strategies to tackle two challenges in distributed deep learning: data heterogeneity and the scarcity of high-quality labeled data.
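As a primer on training collaboratively without sharing patient data, the sketch below shows federated averaging (FedAvg), the standard baseline in which each site trains locally and only model weights are exchanged. The toy model, synthetic site datasets, and hyperparameters are illustrative assumptions; the heterogeneity and label-scarcity strategies from the talk are not shown.

```python
# Minimal federated averaging (FedAvg) sketch: each site trains locally on
# its own data and only model weights are shared, never patient images.
# The model, synthetic site data, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))

def local_update(global_state, data, labels, epochs=1, lr=0.1):
    """One site's training pass, starting from the current global weights."""
    model = make_model()
    model.load_state_dict(global_state)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(data), labels)
        loss.backward()
        opt.step()
    return {k: v.detach().clone() for k, v in model.state_dict().items()}, len(data)

def fed_avg(states_and_sizes):
    """Average site models, weighted by each site's local data size."""
    total = sum(n for _, n in states_and_sizes)
    keys = states_and_sizes[0][0].keys()
    return {k: sum(state[k] * (n / total) for state, n in states_and_sizes)
            for k in keys}

# Three hypothetical hospitals with different amounts of (synthetic) data.
sites = [(torch.randn(n, 1, 28, 28), torch.randint(0, 2, (n,)))
         for n in (64, 32, 16)]
global_state = {k: v.detach().clone() for k, v in make_model().state_dict().items()}
for _ in range(5):                                  # communication rounds
    updates = [local_update(global_state, x, y) for x, y in sites]
    global_state = fed_avg(updates)
```

Data heterogeneity arises exactly at the averaging step: when sites' local distributions differ, the naive weighted average can drift away from a good shared model, which motivates the corrective strategies discussed in the talk.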
Archana Venkataraman, PhD
Associate Professor of Electrical and Computer Engineering
Boston University
Title: Biologically Inspired Deep Learning as a New Window into Brain Dysfunction
Abstract: Deep learning has disrupted nearly every major field of study from computer vision to genomics. The unparalleled success of these models has, in many cases, been fueled by an explosion of data. Millions of labeled images, thousands of annotated ICU admissions, and hundreds of hours of transcribed speech are common standards in the literature. Clinical neuroscience is a notable holdout to this trend. It is a field of unavoidably small datasets, massive patient variability, and complex (largely unknown) phenomena. My lab tackles these challenges across a spectrum of projects, from answering foundational neuroscientific questions to translational applications of neuroimaging data to exploratory directions for probing neural circuitry. One of our key strategies is to integrate a priori information about the brain and biology into the model design.
This talk will highlight two ongoing projects that epitomize this strategy. First, I will showcase an end-to-end deep learning framework that fuses neuroimaging, genetic, and phenotypic data, while maintaining interpretability of the extracted biomarkers. We use a learnable dropout layer to extract a sparse subset of predictive imaging features and a biologically informed deep network architecture for whole-genome analysis. Specifically, the network uses hierarchical graph convolutions that mimic the organization of a well-established gene ontology to track the convergence of genetic risk across biological pathways. Second, I will present a deep-generative hybrid model for epileptic seizure detection from scalp EEG. The latent variables in this model capture the spatiotemporal spread of a seizure; they are complemented by a nonparametric likelihood based on convolutional neural networks. I will also highlight our current end-to-end extensions of this work focused on seizure onset localization. Finally, I will conclude with exciting future directions for our work across the foundational, translational, and exploratory axes.
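One common way to realize a "learnable dropout" feature-selection layer is a binary-concrete (Gumbel-sigmoid) relaxation, with a learnable keep probability per feature and a sparsity penalty on the expected number of kept features. The sketch below is a generic illustration of that mechanism under those assumptions, not the lab's exact architecture; the feature count, temperature, and penalty weight are placeholders.

```python
# Sketch of a learnable dropout gate for sparse feature selection: each
# imaging feature gets a learnable keep-probability, relaxed with logistic
# noise so the gates stay differentiable, plus a sparsity penalty.
# Generic illustration only, not the architecture from the talk.
import torch
import torch.nn as nn

class LearnableDropoutGate(nn.Module):
    def __init__(self, n_features, temperature=0.5):
        super().__init__()
        self.logit_p = nn.Parameter(torch.zeros(n_features))  # keep-prob logits
        self.temperature = temperature

    def forward(self, x):
        if self.training:
            # Binary-concrete relaxation of a Bernoulli keep/drop decision.
            u = torch.rand_like(self.logit_p).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)
            gate = torch.sigmoid((self.logit_p + noise) / self.temperature)
        else:
            gate = (torch.sigmoid(self.logit_p) > 0.5).float()  # hard selection
        return x * gate

    def sparsity_penalty(self):
        # Expected number of kept features; added to the task loss.
        return torch.sigmoid(self.logit_p).sum()

# Toy usage: 200 imaging features, predicting a binary phenotype.
gate = LearnableDropoutGate(200)
head = nn.Linear(200, 2)
x, y = torch.randn(32, 200), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(head(gate(x)), y) + 1e-3 * gate.sparsity_penalty()
loss.backward()
```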
Andrew Janowczyk, PhD
Assistant Professor
Department of Biomedical Engineering
Emory University
Title: Computational Pathology: Towards Precision Medicine
Abstract:
Roughly 40% of the population will be diagnosed with some form of cancer in their lifetime. In a large majority of these cases, a definitive cancer diagnosis is only possible via histopathologic confirmation on a tissue slide. With the increasing digitization of pathology slides, a wealth of new, untapped data is now regularly being created.
Computational analysis of these routinely captured H&E slides is facilitating the creation of diagnostic tools for tasks such as disease identification and grading. Further, by identifying patterns of disease presentation across large cohorts of retrospectively analyzed patients, new insights for predicting prognosis and therapy response are possible [1,2]. Such biomarkers, derived from inexpensive histology slides, stand to improve the standard of care for all patient populations, especially where expensive genomic testing may not be readily available. Moreover, since numerous other diseases and disorders, such as oncoming clinical heart failure [3], are similarly diagnosed via pathology slides, those patients also stand to benefit from these same technological advances in the digital pathology space.
This talk will discuss our research aimed at reaching the goal of precision medicine, wherein patients receive optimized treatment based on historical evidence. The talk discusses how applications of deep learning in this domain are significantly improving the efficiency and robustness of these models [4]. Numerous challenges remain, though, especially in the context of quality control and annotation gathering. This talk further introduces the audience to open-source tools being developed and deployed to meet these pressing needs, including quality control (histoqc.com [5]), annotation (quickannotator.com), labeling (patchsorter.com), and validation (cohortfinder.com).
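For readers new to the area, most whole-slide analysis pipelines begin by tiling the gigapixel image into patches and discarding background before any deep learning is applied. The sketch below shows only that preprocessing step, with a synthetic slide and placeholder thresholds; tools such as HistoQC perform far more thorough, artifact-aware quality control.

```python
# Minimal sketch of the patch-extraction step that precedes deep learning on
# whole-slide images: tile the slide and keep only patches with enough tissue.
# The slide array, patch size, and intensity threshold are placeholder
# assumptions for illustration.
import numpy as np

def extract_tissue_patches(slide, patch=256, min_tissue_frac=0.5, bg_threshold=220):
    """slide: (H, W, 3) uint8 RGB array (one resolution level of a WSI)."""
    patches, coords = [], []
    h, w, _ = slide.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = slide[y:y + patch, x:x + patch]
            # Pixels darker than the background threshold are treated as tissue.
            tissue_frac = (tile.mean(axis=-1) < bg_threshold).mean()
            if tissue_frac >= min_tissue_frac:
                patches.append(tile)
                coords.append((x, y))
    stacked = np.stack(patches) if patches else np.empty((0, patch, patch, 3))
    return stacked, coords

# Toy slide: white background with a dark "tissue" region.
slide = np.full((1024, 1024, 3), 245, dtype=np.uint8)
slide[200:800, 300:900] = 120
patches, coords = extract_tissue_patches(slide)
print(patches.shape, len(coords))  # patches ready for a downstream CNN
```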
Melissa McCradden, PhD
John and Melinda Thompson Director of Artificial Intelligence in Medicine
Integration Lead, AI in Medicine Initiative
Bioethicist, The Hospital for Sick Children (SickKids)
Associate Scientist, Genetics & Genome Biology
Assistant Professor, Dalla Lana School of Public Health
Title: What Makes a ‘Good’ Decision? An Empirical Bioethics Study of Using AI at the Bedside
Abstract: This presentation will identify the gap between AI accuracy and making good clinical decisions. I will present a study in which we develop an ethical framework for clinical decision-making that can help clinicians meet medicolegal and ethical standards when using AI, without relying on explainability or on perfect model accuracy.
Marzyeh Ghassemi, PhD
Assistant Professor, Department of Electrical Engineering and Computer Science
Institute for Medical Engineering & Science
Massachusetts Institute of Technology (MIT)
Canadian CIFAR AI Chair at Vector Institute
Title: Designing Machine Learning Processes For Equitable Health Systems
Abstract:
Dr. Marzyeh Ghassemi focuses on creating and applying machine learning to understand and improve health in ways that are robust, private, and fair. Dr. Ghassemi will talk about her work on training models that do not learn biased rules or recommendations that harm minorities or minoritized populations. The Healthy ML group tackles the many novel technical opportunities for machine learning in health and works to make important progress through careful application to this domain.
Hoifung Poon, PhD
General Manager of Health Futures at Microsoft Research
Affiliated Professor at the University of Washington Medical School
Title: Advancing Health at the Speed of AI
Abstract: The dream of precision health is to develop a data-driven, continuous learning system where new health information is instantly incorporated to optimize care delivery and accelerate biomedical discovery. In reality, however, the health ecosystem is plagued by overwhelming unstructured data and unscalable manual processing. Self-supervised AI such as large language models (LLMs) can supercharge structuring of biomedical data and accelerate transformation towards precision health. In this talk, I’ll present our research progress on biomedical AI for precision health, spanning biomedical LLMs, multi-modal learning, and causal discovery. This enables us to extract knowledge from tens of millions of publications, structure real-world data for millions of cancer patients, and apply the extracted knowledge and real-world evidence to advancing precision oncology in deep partnerships with real-world stakeholders.