Join us for a panel on Behavioral XR on Thursday, June 3rd from 9:00 – 10:30 am PDT. The event will start with a one-hour panel discussion featuring Dr. Elizabeth McMahon, a psychologist with a private practice in California; Sarah Hill of Healium, a company developing XR apps for mental fitness based in Missouri; Christian Angern of Sympatient, a company developing VR for anxiety therapy based in Germany; and Marguerite Manteau-Rao of Penumbra, a medical device company based in California. This panel will be moderated by Dr. Walter Greenleaf of Stanford’s Virtual Human Interaction Lab (VHIL) and Dr. Christoph Leuze of the Stanford Medical Mixed Reality (SMMR) program. Immediately following the panel discussion, you are also invited to a 30-minute interactive session with the panelists where questions and ideas can be explored in real time.
Register here to save your place now! After registering, you will receive a confirmation email containing information about joining the meeting.
Please visit this page to subscribe to our events mailing list.
Sponsored by Stanford Medical Mixed Reality (SMMR)
Radiology Department-Wide Research Meeting
• Research Announcements
• Mirabela Rusu, PhD – Learning MRI Signatures of Aggressive Prostate Cancer: Bridging the Gap between Digital Pathologists and Digital Radiologists
• Akshay Chaudhari, PhD – Data-Efficient Machine Learning for Medical Imaging
Location: Zoom – Details can be found here: https://radresearch.stanford.edu
Meetings will be the 3rd Friday of each month.
Hosted by: Kawin Setsompop, PhD
Sponsored by: the Department of Radiology
Stanford AIMI Director Curt Langlotz and Co-Directors Matt Lungren and Nigam Shah invite you to join us on August 3 for the 2021 Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI) Symposium. The virtual symposium will focus on the latest research on the role of AI in diagnostic excellence across medicine, current areas of impact, fairness and societal impact, and translation and clinical implementation. The program includes talks, interactive panel discussions, and breakout sessions. Registration is free and open to all.
Also, the 2nd Annual BiOethics, the Law, and Data-sharing: AI in Radiology (BOLD-AIR) Summit will be held on August 4, in conjunction with the AIMI Symposium. The summit will convene a broad range of speakers from bioethics, law, regulation, industry, patient safety, and data privacy to address the latest ethical, regulatory, and legal challenges regarding AI in radiology.
Regina Barzilay, PhD
School of Engineering Distinguished Professor for AI and Health
Electrical Engineering and Computer Science Department
AI Faculty Lead at Jameel Clinic for Machine Learning in Health
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology
In this talk, I will present methods for predicting future cancer risk from medical images. The discussion will explore alternative ways to formulate the risk assessment task and focus on algorithmic issues in developing such models. I will also discuss our experience in translating these algorithms into clinical practice in hospitals around the world.
Self-Supervision for Learning from the Bottom Up
Why do self-supervised learning? A common answer is: “because data labeling is expensive.” In this talk, I will argue that there are other, perhaps more fundamental reasons for working on self-supervision. First, it should allow us to get away from the tyranny of top-down semantic categorization and force meaningful associations to emerge naturally from the raw sensor data in a bottom-up fashion. Second, it should allow us to ditch fixed datasets and enable continuous, online learning, which is a much more natural setting for real-world agents. Third, and most intriguingly, there is hope that it might be possible to force a self-supervised task curriculum to emerge from first principles, even in the absence of a pre-defined downstream task or goal, similar to evolution. In this talk, I will touch upon these themes to argue that, far from running its course, research in self-supervised learning is only just beginning.
Saeed Hassanpour, PhD
Associate Professor of Biomedical Data Science
Associate Professor of Epidemiology
Associate Professor of Computer Science
Dartmouth Geisel School of Medicine
Deep Learning for Histology Image Analysis
With the recent expansions of whole-slide digital scanning, archiving, and high-throughput tissue banks, the field of digital pathology is primed to benefit significantly from deep learning technology. This talk will cover several applications of deep learning for characterizing histopathological patterns on high-resolution microscopy images for cancerous and precancerous lesions. Furthermore, the current challenges for building deep learning models for pathology image analysis will be discussed and new methodological advances to address these bottlenecks will be presented.
Dr. Saeed Hassanpour is an Associate Professor in the Departments of Biomedical Data Science, Computer Science, and Epidemiology at Dartmouth College. His research is focused on machine learning and multimodal data analysis for precision health. Dr. Hassanpour has led multiple NIH-funded research projects, which resulted in novel machine learning and deep learning models for medical image analysis and clinical text mining to improve diagnosis, prognosis, and personalized therapies. Before joining Dartmouth, he worked as a Research Engineer at Microsoft. Dr. Hassanpour received his Ph.D. in Electrical Engineering with a minor in Biomedical Informatics from Stanford University and completed his postdoctoral training at Stanford Center for Artificial Intelligence in Medicine & Imaging.
Indrani Bhattacharya, PhD
Postdoctoral Research Fellow
Department of Radiology
Title: Multimodal Data Fusion for Selective Identification of Aggressive and Indolent Prostate Cancer on Magnetic Resonance Imaging
Abstract: Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern≥4) and indolent (Gleason Pattern=3) cancers when they co-exist in mixed lesions. This talk will cover multimodal and multi-scale fusion approaches to integrate radiology images, pathology images, and clinical domain knowledge about prostate cancer distribution to selectively identify and localize aggressive and indolent cancers on prostate MRI.
Rogier van der Sluijs, PhD
Postdoctoral Research Fellow
Department of Radiology
Title: Pretraining Neural Networks for Medical AI
Abstract: Transfer learning has quickly become standard practice for deep learning on medical images. Typically, practitioners repurpose existing neural networks and their corresponding weights to bootstrap model development. This talk will cover several methods to pretrain neural networks for medical tasks. The current challenges for pretraining neural networks in Radiology will be discussed and recent advancements that address these bottlenecks will be highlighted.
Nina Kottler, MD, MS
Associate Chief Medical Officer, Clinical AI
VP Clinical Operations
We have a call to action in healthcare – we need to drive value. Artificial intelligence (AI), if deployed correctly, can help accomplish this lofty mission. In this discussion we will review the following lessons learned in deploying radiology AI at scale:
• 4 unexpected benefits of implementing AI emergent finding triage
• the importance of investing in AI radiologist education
• how “most” AI needs to be incorporated into the radiologist workflow
• why a platform is required to deploy AI at scale, and what a modern platform looks like
• how to use AI to add value to your data
• and, as Dr. Curt Langlotz famously said, why rads (practices) who use AI will replace those who don’t (a depiction of what the role of the radiologist might look like in a tech-enabled future)
Dr. Kottler has been a practicing radiologist specializing in emergency imaging for over 16 years. Combining her clinical experience with a graduate degree in applied mathematics, she has been using technological innovation to drive value in radiology. As the first radiologist to join Radiology Partners, Dr. Kottler has held multiple leadership positions within her practice and is currently the Associate Chief Medical Officer for Clinical AI. Externally, Dr. Kottler serves on multiple committees for the ACR, RSNA, and SIIM. She is also passionate about promoting diversity and creating a culture of belonging: she is a member of the AAWR, serves on the diversity and inclusion committee at SIIM, sits on the steering committee for RAD=, and leads the education and development division of the Belonging Committee within Radiology Partners.
Spyridon (Spyros) Bakas, PhD
Assistant Professor in the Department of Pathology,
Laboratory Medicine, and of Radiology
Center for Biomedical Image Computing and Analytics (CBICA)
Perelman School of Medicine
University of Pennsylvania
Title: Imaging Analytics for Neuro-Oncology: Towards Computational Diagnostics
Abstract: Central nervous system (CNS) tumors come with vastly heterogeneous histologic, molecular, and radiographic landscapes, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically acquired multi-parametric magnetic resonance imaging (mpMRI), assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (a) detection/segmentation of tumor subregions and (b) computer-aided diagnostic/prognostic/predictive modeling. This talk will touch upon example studies in this space, as well as provide an overview of the largest real-world federated learning study to date for detecting brain tumor boundaries.
Harini Veeraraghavan, PhD
Associate Attending Computer Scientist
Department of Medical Physics
Memorial Sloan Kettering Cancer Center
Using AI for Longitudinal Tumor Response Monitoring and AI Guided Cancer Treatments: From Lab to Clinic
Cancer patients are imaged with multiple imaging modalities as part of routine cancer care. However, the rich information available from these images is not fully exploited to better manage patient care through earlier intervention and more precisely targeted treatments. In this talk, I will present some of the new AI methodologies we have been developing to track tumor response from routinely acquired imaging, applied to image-guided radiation treatments using CT/cone-beam CT as well as MRI-guided precision treatments. I will also present demonstration studies of how AI-based automated segmentation and assessment of changes in tumors and healthy tissue can be used to detect treatment toxicities early, enabling clinicians to better manage cancer care. Finally, I will show how these methods have been put into routine clinical use for automating radiotherapy treatment planning at MSK.