IBIIS & AIMI Seminar - Judy Gichoya, MD @ Zoom - See Description for Zoom Link
IBIIS & AIMI Seminar – Judy Gichoya, MD
Sep 16 @ 12:00 pm – 1:00 pm Zoom - See Description for Zoom Link

Judy Gichoya, MD
Assistant Professor
Emory University School of Medicine

Measuring Learning Gains in Man-Machine Assemblage When Augmenting Radiology Work with Artificial Intelligence

The work setting of the future presents an opportunity for human-technology partnerships, where a harmonious connection between human and technology produces unprecedented productivity gains. A conundrum at this human-technology frontier remains: will humans be augmented by technology, or will technology be augmented by humans? We present our work on overcoming this conundrum by no longer treating human and machine as separate entities and instead treating them as an assemblage. As groundwork for the harmonious human-technology connection, this assemblage needs to learn to fit together synergistically. This learning is called assemblage learning, and it will be important for Artificial Intelligence (AI) applications in health care, where diagnostic and treatment decisions augmented by AI will have a direct and significant impact on patient care and outcomes. We describe how learning can be shared between assemblages, such that collective swarms of connected assemblages can be created. Our goal is to demonstrate a symbiotic learning assemblage, such that the envisioned productivity gains from AI can be achieved without loss of human jobs.

Specifically, we are evaluating the following research questions: Q1: How can we develop assemblages such that human-technology partnerships produce a “good fit” for visually based, cognition-oriented tasks in radiology? Q2: What level of training should pre-exist in the individual human (radiologist) and in the independent machine learning model for human-technology partnerships to thrive? Q3: In which aspects, and to what extent, does an assemblage learning approach lead to reduced errors, improved accuracy, faster turnaround times, reduced fatigue, improved self-efficacy, and resilience?


SCIT Quarterly Seminar @ See description for ZOOM link
SCIT Quarterly Seminar
Oct 21 @ 10:00 am – 11:00 am See description for ZOOM link


“High Resolution Breast Diffusion Weighted Imaging”
Jessica McKay, PhD

ABSTRACT: Diffusion-weighted imaging (DWI) is a quantitative MRI method that measures the apparent diffusion coefficient (ADC) of water molecules, which reflects cell density and serves as an indication of malignancy. Unfortunately, the clinical value of DWI is severely limited by undesirable features of the images that common clinical methods produce, including large geometric distortions, ghosting and chemical shift artifacts, and insufficient spatial resolution. Thus, in order to exploit the information encoded in diffusion characteristics and fully assess the clinical value of ADC measurements, it is first imperative to advance the technical state of DWI.
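As background on how the ADC is quantified (the standard mono-exponential model, not a description of the speaker's specific protocol): the diffusion-weighted signal decays as S_b = S_0 · exp(−b · ADC), so an ADC map can be computed voxel-wise from a b = 0 image and one diffusion-weighted image. A minimal sketch, where the b-value of 800 s/mm² is an illustrative assumption:

```python
import numpy as np

def adc_map(s0, sb, b=800.0, eps=1e-8):
    """Voxel-wise ADC (mm^2/s) from a b=0 image s0 and a diffusion-weighted image sb.

    Assumes the mono-exponential model sb = s0 * exp(-b * ADC).
    b=800 s/mm^2 is an illustrative value, not the talk's protocol.
    """
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    # Clip the signal ratio away from zero so the log stays finite
    ratio = np.clip(sb / np.maximum(s0, eps), eps, None)
    return -np.log(ratio) / b
```

Inverting the same model on synthetic data recovers the diffusivity exactly; on real data, the noise floor and non-Gaussian diffusion make the estimate b-value dependent.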

In this talk, I will largely focus on the background of breast DWI, providing the clinical motivation for this work and explaining the current standard in breast DWI along with alternatives proposed throughout the literature. I will also present my PhD dissertation work, in which a novel strategy for high resolution breast DWI was developed. The purpose of this work is to improve DWI methods for breast imaging at 3 Tesla to robustly provide diffusion-weighted images and ADC maps with anatomical quality and resolution. The project has two major parts: Nyquist ghost correction and the use of simultaneous multislice imaging (SMS) to achieve high resolution. Exploratory work was completed to characterize the Nyquist ghost in breast DWI, showing that, although the ghost is mostly linear, the three-line navigator is unreliable, especially in the presence of fat. A novel referenceless ghost correction, Ghost/Object minimization, was developed that reduced the ghost in both standard SE-EPI and advanced SMS acquisitions. An advanced SMS method with axial reformatting (AR) is presented for high resolution breast DWI. In a reader study, three breast radiologists preferred AR-SMS over standard SE-EPI and readout-segmented EPI.

“Machine-learning Approach to Differentiation of Benign and Malignant Peripheral Nerve Sheath Tumors: A Multicenter Study”

Michael Zhang, MD

ABSTRACT: Clinicoradiologic differentiation between benign and malignant peripheral nerve sheath tumors (PNSTs) is a diagnostic challenge with important management implications. We sought to develop a radiomics classifier based on 900 features extracted from gadolinium-enhanced, T1-weighted MRI, using the Quantitative Imaging Feature Pipeline and the PyRadiomics package. Additional patient-specific clinical variables were recorded. A radiomic signature was derived using the least absolute shrinkage and selection operator (LASSO), followed by gradient-boosted machine learning. Training and test sets were selected randomly in a 70:30 ratio. We further evaluated the performance of the radiomics-based classifier models against human readers of varying medical-training backgrounds. Following image pre-processing, 95 malignant and 171 benign PNSTs were available. The final classifier included 21 features and achieved a sensitivity of 0.676, a specificity of 0.882, and an area under the curve (AUC) of 0.845. Collectively, human readers achieved a sensitivity of 0.684, a specificity of 0.742, and an AUC of 0.704. We concluded that radiomics using routine gadolinium-enhanced, T1-weighted MRI sequences and clinical features can aid in the evaluation of PNSTs, particularly by increasing specificity for diagnosing malignancy. Further improvement may be achieved by incorporating additional imaging sequences.
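The pipeline described above — LASSO feature selection over a large radiomic feature matrix, followed by a gradient-boosted classifier evaluated on a 70:30 random split — can be sketched as follows. The data here are synthetic stand-ins; the study's actual preprocessing, hyperparameters, and feature definitions are not reproduced:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(266, 900))  # synthetic stand-in: 266 lesions x 900 radiomic features
y = (X[:, :5].sum(axis=1) + rng.normal(size=266) > 0).astype(int)  # synthetic labels

# 70:30 random split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# LASSO-derived signature: keep the features with nonzero coefficients
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
signature = np.flatnonzero(lasso.coef_)

# Gradient-boosted classifier trained on the selected signature only
clf = GradientBoostingClassifier(random_state=0).fit(X_tr[:, signature], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, signature])[:, 1])
```

The two-stage design matters here: with 900 candidate features and only a few hundred lesions, the LASSO stage shrinks the model to a small signature before the boosted trees fit interactions, reducing overfitting.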

IBIIS & AIMI Seminar: Deep Tomographic Imaging @ Zoom:
IBIIS & AIMI Seminar: Deep Tomographic Imaging
Nov 18 @ 12:00 pm – 1:00 pm Zoom:

Ge Wang, PhD
Clark & Crossan Endowed Chair Professor
Director of the Biomedical Imaging Center
Rensselaer Polytechnic Institute
Troy, New York

AI-based tomography is an important application and a new frontier of machine learning. AI, especially deep learning, has been widely used in computer vision and image analysis, which deal with existing images, improve them, and extract features. Since 2016, deep learning techniques have been actively researched for tomography in the medical context. Tomographic reconstruction produces images of multi-dimensional structures from externally measured “encoded” data in the form of various transforms (integrals, harmonics, and so on). In this presentation, we provide a general background, highlight representative results, and discuss key issues that need to be addressed in this emerging field.
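To make the reconstruction framing concrete (a toy illustration of the inverse-problem setting, not the deep tomographic methods of the talk): the scanner applies a forward operator that maps an image to its measured transform data, and reconstruction inverts that operator. With a random linear forward operator standing in for ray integrals, classical reconstruction reduces to solving a linear inverse problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                            # tiny "image" of n pixels, flattened to a vector
x_true = rng.random(n)

A = rng.normal(size=(3 * n, n))   # forward operator: a stand-in for ray integrals
y = A @ x_true                    # externally measured "encoded" data (sinogram analogue)

# Classical reconstruction: least-squares inversion of the forward model
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

In real CT the operator is the Radon transform and the data are noisy and often incomplete, which is where learned priors and deep reconstruction networks come in.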

The AI-based X-ray Imaging System (AXIS) lab, led by Dr. Ge Wang, is affiliated with the Department of Biomedical Engineering at Rensselaer Polytechnic Institute and with the Biomedical Imaging Center at the Center for Biotechnology and Interdisciplinary Studies. The AXIS lab focuses on innovation and translation in x-ray computed tomography, optical molecular tomography, multi-scale and multi-modality imaging, and AI/machine learning for image reconstruction and analysis, and has been continuously well funded by federal agencies and leading companies. The AXIS group collaborates with Stanford, Harvard, Cornell, MSK, UTSW, Yale, GE, Hologic, and others to develop theories, methods, software, systems, applications, and workflows.

Radiology-Wide Research Conference @ Zoom – Details can be found here:
Radiology-Wide Research Conference
Jul 16 @ 12:00 pm – 1:00 pm Zoom – Details can be found here:

Radiology Department-Wide Research Meeting

• Research Announcements
• Mirabela Rusu, PhD – Learning MRI Signatures of Aggressive Prostate Cancer: Bridging the Gap between Digital Pathologists and Digital Radiologists
• Akshay Chaudhari, PhD – Data-Efficient Machine Learning for Medical Imaging

Location: Zoom – Details can be found here:
Meetings will be the 3rd Friday of each month.


Hosted by: Kawin Setsompop, PhD
Sponsored by: the Department of Radiology

2021 AIMI Symposium + BOLD-AIR Summit @ Virtual Livestream
2021 AIMI Symposium + BOLD-AIR Summit
Aug 3 @ 8:00 am – Aug 4 @ 3:00 pm Virtual Livestream

Stanford AIMI Director Curt Langlotz and Co-Directors Matt Lungren and Nigam Shah invite you to join us on August 3 for the 2021 Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI) Symposium. The virtual symposium will focus on the latest, best research on the role of AI in diagnostic excellence across medicine, current areas of impact, fairness and societal impact, and translation and clinical implementation. The program includes talks, interactive panel discussions, and breakout sessions. Registration is free and open to all.


Also, the 2nd Annual BiOethics, the Law, and Data-sharing: AI in Radiology (BOLD-AIR) Summit will be held on August 4, in conjunction with the AIMI Symposium. The summit will convene a broad range of speakers in bioethics, law, regulation, industry groups, and patient safety and data privacy, to address the latest ethical, regulatory, and legal challenges regarding AI in radiology.



IBIIS & AIMI Seminar: Seeing the Future from Images: ML-Based Models for Cancer Risk Assessment @ Zoom:
IBIIS & AIMI Seminar: Seeing the Future from Images: ML-Based Models for Cancer Risk Assessment
Sep 22 @ 11:00 am – 12:00 pm Zoom:


Regina Barzilay, PhD
School of Engineering Distinguished Professor for AI and Health
Electrical Engineering and Computer Science Department
AI Faculty Lead at Jameel Clinic for Machine Learning in Health
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology

In this talk, I will present methods for assessing future cancer risk from medical images. The discussion will explore alternative ways to formulate the risk assessment task and focus on algorithmic issues in developing such models. I will also discuss our experience translating these algorithms into clinical practice in hospitals around the world.

2021 IBIIS & AIMI Virtual Retreat
Sep 27 @ 1:00 pm – 4:30 pm


Self-Supervision for Learning from the Bottom Up

Why do self-supervised learning? A common answer is: “because data labeling is expensive.” In this talk, I will argue that there are other, perhaps more fundamental reasons for working on self-supervision. First, it should allow us to get away from the tyranny of top-down semantic categorization and force meaningful associations to emerge naturally from the raw sensor data in a bottom-up fashion. Second, it should allow us to ditch fixed datasets and enable continuous, online learning, which is a much more natural setting for real-world agents. Third, and most intriguingly, there is hope that it might be possible to force a self-supervised task curriculum to emerge from first principles, even in the absence of a pre-defined downstream task or goal, similar to evolution. In this talk, I will touch upon these themes to argue that, far from running its course, research in self-supervised learning is only just beginning.

IBIIS & AIMI Seminar: Deep Learning for Histology Image Analysis @ Zoom:
IBIIS & AIMI Seminar: Deep Learning for Histology Image Analysis
Nov 17 @ 12:00 pm – 1:00 pm Zoom:

Saeed Hassanpour, PhD
Associate Professor of Biomedical Data Science
Associate Professor of Epidemiology
Associate Professor of Computer Science
Dartmouth Geisel School of Medicine

Deep Learning for Histology Image Analysis

With the recent expansions of whole-slide digital scanning, archiving, and high-throughput tissue banks, the field of digital pathology is primed to benefit significantly from deep learning technology. This talk will cover several applications of deep learning for characterizing histopathological patterns on high-resolution microscopy images for cancerous and precancerous lesions. Furthermore, the current challenges for building deep learning models for pathology image analysis will be discussed and new methodological advances to address these bottlenecks will be presented.


Dr. Saeed Hassanpour is an Associate Professor in the Departments of Biomedical Data Science, Computer Science, and Epidemiology at Dartmouth College. His research is focused on machine learning and multimodal data analysis for precision health. Dr. Hassanpour has led multiple NIH-funded research projects, which resulted in novel machine learning and deep learning models for medical image analysis and clinical text mining to improve diagnosis, prognosis, and personalized therapies. Before joining Dartmouth, he worked as a Research Engineer at Microsoft. Dr. Hassanpour received his Ph.D. in Electrical Engineering with a minor in Biomedical Informatics from Stanford University and completed his postdoctoral training at Stanford Center for Artificial Intelligence in Medicine & Imaging.

IBIIS & AIMI Seminar: Indrani Bhattacharya, PhD & Rogier van der Sluijs, PhD @ Zoom:
IBIIS & AIMI Seminar: Indrani Bhattacharya, PhD & Rogier van der Sluijs, PhD
Dec 15 @ 12:00 pm – 1:00 pm Zoom:

Indrani Bhattacharya, PhD
Postdoctoral Research Fellow
Department of Radiology
Stanford University

Title: Multimodal Data Fusion for Selective Identification of Aggressive and Indolent Prostate Cancer on Magnetic Resonance Imaging

Abstract: Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern≥4) and indolent (Gleason Pattern=3) cancers when they co-exist in mixed lesions. This talk will cover multimodal and multi-scale fusion approaches to integrate radiology images, pathology images, and clinical domain knowledge about prostate cancer distribution to selectively identify and localize aggressive and indolent cancers on prostate MRI.

Rogier van der Sluijs, PhD
Postdoctoral Research Fellow
Department of Radiology
Stanford University

Title: Pretraining Neural Networks for Medical AI

Abstract: Transfer learning has quickly become standard practice for deep learning on medical images. Typically, practitioners repurpose existing neural networks and their corresponding weights to bootstrap model development. This talk will cover several methods to pretrain neural networks for medical tasks. The current challenges for pretraining neural networks in Radiology will be discussed and recent advancements that address these bottlenecks will be highlighted.

IBIIS & AIMI Seminar: AI In Clinical Use – Lessons Learned @ Zoom:
IBIIS & AIMI Seminar: AI In Clinical Use – Lessons Learned
Jan 19 @ 12:00 pm – 1:00 pm Zoom:

Nina Kottler, MD, MS
Associate Chief Medical Officer, Clinical AI
VP Clinical Operations
Radiology Partners

We have a call to action in healthcare – we need to drive value. Artificial intelligence (AI), if deployed correctly, can help accomplish this lofty mission. In this discussion we will review the following lessons learned from deploying radiology AI at scale: four unexpected benefits of implementing AI emergent-finding triage; the importance of investing in AI education for radiologists; how “most” AI needs to be incorporated into the radiologist workflow; why a platform is required to deploy AI at scale, and what a modern platform looks like; how to use AI to add value to your data; and, as Dr. Curt Langlotz famously said, why radiologists (and practices) who use AI will replace those who don’t – a depiction of what the role of the radiologist might look like in a tech-enabled future.

Dr. Kottler has been a practicing radiologist specializing in emergency imaging for over 16 years. Combining her clinical experience with a graduate degree in applied mathematics, she has been using technological innovation to drive value in radiology. As the first radiologist to join Radiology Partners, Dr. Kottler has held multiple leadership positions within her practice and is currently the Associate Chief Medical Officer for Clinical AI. Externally, Dr. Kottler serves on multiple committees for the ACR, RSNA, and SIIM. She is also passionate about promoting diversity and creating a culture of belonging. As such, she is a member of the AAWR, serves on the diversity and inclusion committee at SIIM and the steering committee for RAD=, and leads the education and development division of the Belonging Committee within Radiology Partners.