Lauren Oakden-Rayner, PhD
Director of Research in Medical Imaging
Royal Adelaide Hospital
Senior Research Fellow
Australian Institute for Machine Learning
Title: Medical AI Safety – A Clinical Perspective
Medical artificial intelligence is rapidly moving into clinics, particularly in imaging-based specialties such as radiology. This transition is producing many new challenges, as the regulatory environment has struggled to keep up and AI training for healthcare workers is virtually non-existent. Dr. Oakden-Rayner will provide a clinical safety perspective on medical AI, discuss a range of identified risks and potential harms, and explore possible solutions to mitigate these risks as this exciting field continues to develop.
Dr. Lauren Oakden-Rayner (FRANZCR, PhD) is the Director of Research in Medical Imaging at the Royal Adelaide Hospital and a Senior Research Fellow at the Australian Institute for Machine Learning. Her research explores the safe translation of artificial intelligence technologies into clinical practice, from both a technical and a clinical perspective.
David Magnus, PhD
Thomas A Raffin Professor of Medicine and Biomedical Ethics and Professor of Pediatrics, Medicine, and by courtesy of Bioengineering
Director, Stanford Center for Biomedical Ethics
Associate Dean for Research
Title: Ethical Challenges in the Application of AI to Healthcare
This presentation will focus on three issues. First, applying AI to healthcare requires access to large data sets. Data acquisition and data sharing raise a number of challenging ethical issues, including challenges to traditional understandings of informed consent and the importance of diversity and inclusion in data sources. Second, I will briefly address the widely discussed issues of justice and equity raised by AI in healthcare. Finally, I will discuss challenges with ethical oversight and governance, particularly in relation to the research development of AI. IRBs are prohibited from considering downstream social consequences and harms to individuals other than research participants when evaluating the harms and risks of research. This gap needs to be filled, particularly as dual uses of AI models are now recognized as a problem.
David Magnus, PhD is the Thomas A. Raffin Professor of Medicine and Biomedical Ethics and Professor of Pediatrics, Medicine, and, by courtesy, of Bioengineering at Stanford University, where he is Director of the Stanford Center for Biomedical Ethics and an Associate Dean for Research. Magnus is a member of the Ethics Committee for Stanford Hospital. He is currently the Vice-Chair of the IRB for the NIH Precision Medicine Initiative (“All of Us”). He is the former President of the Association of Bioethics Program Directors and is the Editor-in-Chief of the American Journal of Bioethics. He has published articles on a wide range of topics in bioethics, including research ethics, genetics, stem cell research, organ transplantation, end of life, and patient communication. He was a member of the Secretary of Agriculture’s Advisory Committee on Biotechnology in the 21st Century and currently serves on the California Human Stem Cell Research Advisory Committee. He is the principal editor of a collection of essays entitled “Who Owns Life?” (2002), and his publications have appeared in the New England Journal of Medicine, Science, Nature Biotechnology, and the British Medical Journal. He has appeared on many radio and television shows, including 60 Minutes, Good Morning America, The Today Show, CBS This Morning, FOX News Sunday, ABC World News, and NPR. In addition to his scholarly work, he has published opinion pieces in the Philadelphia Inquirer, the Chicago Tribune, the San Jose Mercury News, and the New Jersey Star-Ledger.
Polina Golland, PhD
Professor of Electrical Engineering and Computer Science
PI in the Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Title: Learning to Read X-Ray: Applications to Heart Failure Monitoring
We propose and demonstrate a novel approach to training image classification models on large collections of images with limited labels. We take advantage of the availability of radiology reports to construct a joint multimodal embedding that serves as the basis for classification. We demonstrate the advantages of this approach in the assessment of pulmonary edema severity in congestive heart failure, the application that motivated the development of the method.
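The joint image-report embedding described in this abstract can be illustrated with a contrastive (CLIP-style) training objective, in which each image embedding is pulled toward the embedding of its own report and pushed away from the others. The sketch below is purely illustrative and is not the speaker's implementation: the random encoder outputs, embedding dimension, and `temperature` value are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for encoder outputs. In practice, image_emb would come from
# an image encoder over the chest X-ray and report_emb from a text encoder
# over the paired radiology report. Shapes: (batch, embed_dim).
image_emb = rng.normal(size=(4, 8))
report_emb = image_emb + 0.1 * rng.normal(size=(4, 8))  # paired reports

def l2_normalize(x):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_loss(img, txt, temperature=0.1):
    """Symmetric InfoNCE loss: each image should match its own report."""
    img, txt = l2_normalize(img), l2_normalize(txt)
    logits = img @ txt.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(img))         # matching pairs lie on the diagonal

    def xent(lg):
        # numerically stable cross-entropy against the diagonal targets
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-report and report-to-image directions
    return (xent(logits) + xent(logits.T)) / 2

loss = contrastive_loss(image_emb, report_emb)
```

Once such an embedding is trained, a lightweight classifier (here, for pulmonary edema severity grades) can operate on the image embeddings alone, which is how limited labels can be leveraged against a large unlabeled collection.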
Baris Turkbey, MD, FSAR
Section Chief of MRI
Section Chief of Artificial Intelligence
Molecular Imaging Branch
National Cancer Institute, NIH
Title: Advanced Prostate Cancer Imaging
- To discuss the current status and limitations of localized prostate cancer diagnosis.
- To discuss the use of artificial intelligence in the diagnosis of localized prostate cancer.
- To discuss the use of molecular imaging in clinical prostate cancer management.
Dr. Turkbey obtained his medical degree from Hacettepe University in Ankara, Turkey, in 2003, and completed his residency in Diagnostic and Interventional Radiology at Hacettepe University. He joined the Molecular Imaging Branch (MIB) of the National Cancer Institute, NIH, in 2007. His main research areas are prostate cancer imaging (multiparametric MRI, PET/CT), image-guided biopsy and treatment techniques (focal therapy, surgery, and radiation therapy) for prostate cancer, and artificial intelligence. Dr. Turkbey is a member of the Prostate Imaging Reporting & Data System (PI-RADS) Steering Committee. He directs the Magnetic Resonance Imaging Section and the Artificial Intelligence Resource in the MIB.