Calendar

Sep
16
Wed
2020
IBIIS & AIMI Seminar – Judy Gichoya, MD @ Zoom - See Description for Zoom Link
Sep 16 @ 12:00 pm – 1:00 pm

Judy Gichoya, MD
Assistant Professor
Emory University School of Medicine

Measuring Learning Gains in Man-Machine Assemblage When Augmenting Radiology Work with Artificial Intelligence

Abstract
The work setting of the future presents an opportunity for human-technology partnerships, where a harmonious connection between human and technology produces unprecedented productivity gains. A conundrum at this human-technology frontier remains: will humans be augmented by technology, or will technology be augmented by humans? We present our work on overcoming the view of human and machine as separate entities, instead treating them as an assemblage. As groundwork for the harmonious human-technology connection, this assemblage needs to learn to fit together synergistically. This learning is called assemblage learning, and it will be important for Artificial Intelligence (AI) applications in health care, where diagnostic and treatment decisions augmented by AI will have a direct and significant impact on patient care and outcomes. We describe how learning can be shared between assemblages, such that collective swarms of connected assemblages can be created. Our aim is to demonstrate a symbiotic learning assemblage, such that the envisioned productivity gains from AI can be achieved without loss of human jobs.

Specifically, we are evaluating the following research questions: Q1: How can assemblages be developed such that human-technology partnerships produce a “good fit” for visually based, cognition-oriented tasks in radiology? Q2: What level of training should pre-exist in the individual human (radiologist) and the independent machine learning model for human-technology partnerships to thrive? Q3: In which aspects, and to what extent, does an assemblage learning approach lead to reduced errors, improved accuracy, faster turnaround times, reduced fatigue, improved self-efficacy, and resilience?

Zoom: https://stanford.zoom.us/j/93580829522?pwd=ZVAxTCtEdkEzMWxjSEQwdlp0eThlUT09

Oct
21
Wed
2020
SCIT Quarterly Seminar @ See description for ZOOM link
Oct 21 @ 10:00 am – 11:00 am

ZOOM LINK HERE

“High Resolution Breast Diffusion Weighted Imaging”
Jessica McKay, PhD

ABSTRACT: Diffusion-weighted imaging (DWI) is a quantitative MRI method that measures the apparent diffusion coefficient (ADC) of water molecules, which reflects cell density and serves as an indication of malignancy. However, the clinical value of DWI is severely limited by the undesirable image features that common clinical methods produce, including large geometric distortions, ghosting and chemical shift artifacts, and insufficient spatial resolution. Thus, to exploit the information encoded in diffusion characteristics and fully assess the clinical value of ADC measurements, it is first imperative to advance DWI techniques.
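As a numerical aside on the ADC mentioned above: under the standard mono-exponential signal model S(b) = S0 * exp(-b * ADC), the ADC can be estimated from two acquisitions at different b-values. A minimal sketch with made-up signal intensities (not data from this work):

```python
import math

def adc_two_point(s_low: float, s_high: float, b_low: float, b_high: float) -> float:
    """Two-point ADC estimate from the mono-exponential model S(b) = S0 * exp(-b * ADC).

    Signals are in arbitrary units; b-values are in s/mm^2, so the ADC
    comes out in mm^2/s.
    """
    return math.log(s_low / s_high) / (b_high - b_low)

# Hypothetical signal intensities at b = 0 and b = 800 s/mm^2:
adc = adc_two_point(s_low=1000.0, s_high=380.0, b_low=0.0, b_high=800.0)
print(f"ADC = {adc:.2e} mm^2/s")  # typical tissue ADCs are on the order of 1e-3 mm^2/s
```

In practice, ADC maps are fit voxel-wise, often over more than two b-values, but the log-ratio above is the core of the calculation.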

In this talk, I will largely focus on the background of breast DWI, providing the clinical motivation for this work and explaining the current standard in breast DWI and alternatives proposed throughout the literature. I will also present my PhD dissertation work, in which a novel strategy for high-resolution breast DWI was developed. The purpose of this work is to improve DWI methods for breast imaging at 3 Tesla to robustly provide diffusion-weighted images and ADC maps with anatomical quality and resolution. This project has two major parts: Nyquist ghost correction and the use of simultaneous multislice (SMS) imaging to achieve high resolution. Exploratory work was completed to characterize the Nyquist ghost in breast DWI, showing that, although the ghost is mostly linear, the three-line navigator is unreliable, especially in the presence of fat. A novel referenceless ghost correction, Ghost/Object minimization, was developed that reduced the ghost in both standard SE-EPI and advanced SMS acquisitions. An advanced SMS method with axial reformatting (AR) is presented for high-resolution breast DWI. In a reader study, three breast radiologists preferred AR-SMS over standard SE-EPI and readout-segmented EPI.


“Machine-learning Approach to Differentiation of Benign and Malignant Peripheral Nerve Sheath Tumors: A Multicenter Study”

Michael Zhang, MD

ABSTRACT: Clinicoradiologic differentiation between benign and malignant peripheral nerve sheath tumors (PNSTs) is a diagnostic challenge with important management implications. We sought to develop a radiomics classifier based on 900 features extracted from gadolinium-enhanced, T1-weighted MRI, using the Quantitative Imaging Feature Pipeline and the PyRadiomics package. Additional patient-specific clinical variables were recorded. A radiomic signature was derived by least absolute shrinkage and selection operator (LASSO) regression, followed by gradient-boosted machine learning. Training and test sets were selected randomly in a 70:30 ratio. We further evaluated the performance of the radiomics-based classifier models against human readers of varying medical-training backgrounds. Following image pre-processing, 95 malignant and 171 benign PNSTs were available. The final classifier included 21 features and achieved a sensitivity of 0.676, a specificity of 0.882, and an area under the curve (AUC) of 0.845. Collectively, human readers achieved a sensitivity of 0.684, a specificity of 0.742, and an AUC of 0.704. We concluded that radiomics using routine gadolinium-enhanced, T1-weighted MRI sequences and clinical features can aid in the evaluation of PNSTs, particularly by increasing specificity for diagnosing malignancy. Further improvement may be achieved with the incorporation of additional imaging sequences.
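The two-stage pipeline described above, LASSO feature selection followed by gradient boosting with a 70:30 split, can be sketched as below. The data are synthetic stand-ins (random values, not the study's radiomic features or cohort), so the numbers printed are illustrative only:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(266, 900))       # 266 tumors x 900 radiomic features (synthetic)
y = rng.integers(0, 2, size=266)      # 0 = benign, 1 = malignant (synthetic labels)
# Make a handful of features weakly informative so selection has something to find.
X[:, :5] += y[:, None] * 0.8

# 70:30 random split into training and test sets.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: LASSO shrinks most coefficients to exactly zero -> feature selection.
lasso = Lasso(alpha=0.01).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_)

# Stage 2: gradient boosting classifier trained on the surviving features.
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, gbm.predict_proba(X_te[:, selected])[:, 1])
print(f"{selected.size} features selected, test AUC = {auc:.3f}")
```

The study's actual pipeline used the Quantitative Imaging Feature Pipeline and PyRadiomics for feature extraction; the sketch covers only the selection-and-classification stages.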

Nov
18
Wed
2020
IBIIS & AIMI Seminar: Deep Tomographic Imaging @ Zoom: https://stanford.zoom.us/j/96731559276?pwd=WG5zcEFwSGlPcDRsOUFkVlRhcEs2Zz09
Nov 18 @ 12:00 pm – 1:00 pm

Ge Wang, PhD
Clark & Crossan Endowed Chair Professor
Director of the Biomedical Imaging Center
Rensselaer Polytechnic Institute
Troy, New York

Abstract:
AI-based tomography is an important application and a new frontier of machine learning. AI, especially deep learning, has been widely used in computer vision and image analysis, which deal with existing images, improve them, and extract features. Since 2016, deep learning techniques have been actively researched for tomography in the context of medicine. Tomographic reconstruction produces images of multi-dimensional structures from externally measured “encoded” data in the form of various transforms (integrals, harmonics, and so on). In this presentation, we provide a general background, highlight representative results, and discuss key issues that need to be addressed in this emerging field.
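The idea of reconstructing an image from externally measured transform-domain data can be illustrated with a toy algebraic example: a tiny image recovered from its row, column, and diagonal sums by solving the resulting linear system. This is a conceptual sketch only, not a method from the talk; real CT involves the Radon transform, noise, and far larger systems:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
truth = rng.random((n, n))  # the unknown "object"

# Each ray is a line integral over the image: rows, columns, diagonals,
# and antidiagonals. Stacking the rays gives the system matrix A.
rays = []
for i in range(n):
    m = np.zeros((n, n)); m[i, :] = 1.0; rays.append(m.ravel())   # row sums
    m = np.zeros((n, n)); m[:, i] = 1.0; rays.append(m.ravel())   # column sums
for k in range(-(n - 1), n):
    rays.append(np.eye(n, n, k).ravel())                          # diagonal sums
    rays.append(np.fliplr(np.eye(n, n, k)).ravel())               # antidiagonal sums
A = np.array(rays)

sinogram = A @ truth.ravel()  # the externally measured "encoded" data
recon = np.linalg.lstsq(A, sinogram, rcond=None)[0].reshape(n, n)
print("max reconstruction error:", np.abs(recon - truth).max())
```

For this tiny, fully sampled system the least-squares solution recovers the image to machine precision; deep tomographic imaging targets the realistic regime where the data are noisy, incomplete, or underdetermined.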

About:
The AI-based X-ray Imaging System (AXIS) lab is led by Dr. Ge Wang and is affiliated with the Department of Biomedical Engineering at Rensselaer Polytechnic Institute and with the Biomedical Imaging Center in the Center for Biotechnology and Interdisciplinary Studies. The AXIS lab focuses on the innovation and translation of X-ray computed tomography, optical molecular tomography, multi-scale and multi-modality imaging, and AI/machine learning for image reconstruction and analysis, and has been continuously well funded by federal agencies and leading companies. The AXIS group collaborates with Stanford, Harvard, Cornell, MSK, UTSW, Yale, GE, Hologic, and others to develop theories, methods, software, systems, applications, and workflows.

Apr
30
Fri
2021
Racial Equity Challenge: Race in society @ Zoom
Apr 30 @ 12:00 pm – 1:00 pm

Targeted violence continues against Black Americans, Asian Americans, and all people of color. The Department of Radiology Diversity Committee is running a racial equity challenge to raise awareness of systemic racism, implicit bias, and related issues. Participants will be provided a list of resources on these topics, such as articles, podcasts, and videos, from which they can choose, with the “challenge” of engaging with one to three media sources prior to our session (some videos are as short as a few minutes). Participants will meet in small-group breakout sessions to discuss what they’ve learned and share ideas.

Please reach out to Marta Flory (flory@stanford.edu) with questions. For details about the session, including recommended resources and the Zoom link, please reach out to Meke Faaoso at mfaaoso@stanford.edu.

Jul
16
Fri
2021
Radiology-Wide Research Conference @ Zoom – Details can be found here: https://radresearch.stanford.edu
Jul 16 @ 12:00 pm – 1:00 pm

Radiology Department-Wide Research Meeting

• Research Announcements
• Mirabela Rusu, PhD – Learning MRI Signatures of Aggressive Prostate Cancer: Bridging the Gap between Digital Pathologists and Digital Radiologists
• Akshay Chaudhari, PhD – Data-Efficient Machine Learning for Medical Imaging

Location: Zoom – Details can be found here: https://radresearch.stanford.edu
Meetings will be the 3rd Friday of each month.


Hosted by: Kawin Setsompop, PhD
Sponsored by: the Department of Radiology

Apr
17
Wed
2024
IBIIS & AIMI Seminar: Building Fair and Trustworthy AI for Healthcare @ Clark Center S360 - Zoom Details on IBIIS website
Apr 17 @ 12:00 pm – 1:00 pm

Roxana Daneshjou, MD, PhD
Assistant Professor, Biomedical Data Science & Dermatology
Assistant Director, Center of Excellence for Precision Health & Pharmacogenomics
Director of Informatics, Stanford Skin Innovation and Interventional Research Group
Stanford University

Title: Building Fair and Trustworthy AI for Healthcare

Abstract: AI for healthcare has the potential to revolutionize how we practice medicine. However, to do this in a fair and trustworthy manner requires special attention to how AI models work and their potential biases. In this talk, I will cover the considerations for building AI systems that improve healthcare.

May
22
Wed
2024
IBIIS & AIMI Seminar: Facilitating Patient and Clinician Value Considerations into AI for Precision Medicine @ Clark Center S360 - Zoom Details on IBIIS website
May 22 @ 11:00 am – 12:00 pm

Mildred Cho, PhD
Professor of Pediatrics, Center for Biomedical Ethics
Professor of Medicine, Primary Care and Population Health
Stanford University

Title: Facilitating Patient and Clinician Value Considerations into AI for Precision Medicine

Abstract:
For the development of ethical machine learning (ML) for precision medicine, it is essential to understand how values play into the decision-making process of developers. We conducted five group design exercises with four developer participants each (N=20), who were asked to discuss and record their design considerations in a series of three hypothetical scenarios involving the design of a tool to predict progression to diabetes. In each group, the scenario was first presented as a research project, then as development of a clinical tool for a health care system, and finally as development of a clinical tool for their own health care system. Throughout, developers documented their process considerations using a virtual collaborative whiteboard platform. Our results suggest that developers more often considered client or user perspectives after the context of the scenario changed from research to a tool for a large health care setting. Furthermore, developers were more likely to express concerns arising from the patient perspective, as well as societal and ethical issues such as protection of privacy, after imagining themselves as patients in the health care system. Qualitative and quantitative data analysis also revealed that developers made reflective/reflexive statements more often in the third round of the design activity (44 times) than in the first (2) or second (6) round. These included statements on how the activity connected to their real-life work, what they could take away from the exercises and integrate into actual practice, and commentary on being patients within a health care system that uses AI. These findings suggest that ML developers can be encouraged to link the consequences of their actions to design choices through “empathy work” that directs them to take the perspectives of specific stakeholder groups.
This research could inform the creation of educational resources and exercises for developers to better align daily practices with stakeholder values and ethical ML design.

Jun
24
Mon
2024
IBIIS & AIMI Seminar: Deepening Collaboration with Stanford & Pennsylvania, Toward Developing Joint Strategies to Close the ‘Cancer Care’ & ‘Clinical Trial Volume’ Gap in LMICs @ Clark Center S360 - Zoom Details on IBIIS website
Jun 24 @ 12:30 pm – 1:30 pm

Ifeoma Okoye, MBBS, FWACS, FMCR
Professor of Radiology and Director
University of Nigeria Centre for Clinical Trials
College of Medicine, University of Nigeria

Title: Deepening Collaboration with Stanford & Pennsylvania, Toward Developing Joint Strategies to Close the ‘Cancer Care’ & ‘Clinical Trial Volume’ Gap in LMICs

Abstract
In this seminar I will be addressing the dire cancer survival outcomes in low- and middle-income countries (LMICs), with a particular focus on Sub-Saharan Africa. Cancer survival rates in Sub-Saharan Africa are alarmingly low. According to the World Health Organization, cancer deaths in LMICs account for approximately 70% of global cancer fatalities. In Nigeria, the five-year survival rate for breast cancer, one of the most common cancers, stands at a disheartening 10-30%, compared to over 80% in high-income countries. This stark disparity highlights the urgent need for sustained comprehensive cancer interventions in our region.

Here, I will discuss the pivotal role in the cancer control sphere of new software, ONCOSEEK, capable of the early detection of 11 types of cancer, and its particular emphasis on the patient perspective, which aligns with our ethos of holistic patient care. In addition, I will discuss recent developments in our collaborative effort with the Gevaert lab at Stanford University and the University of Pennsylvania.

Sep
18
Wed
2024
IBIIS & AIMI Seminar – “GREEN: Generative Radiology Report Evaluation and Error Notation” & “Leveraging Patch-Level Representation Learning with Vision Transformer for Prostate Cancer Foundation Models” @ Clark Center S360 - Zoom Details on IBIIS website
Sep 18 @ 12:00 pm – 1:00 pm

Sophie Ostmeier, MD
Postdoctoral Scholar
Department of Radiology
Stanford School of Medicine

Title: GREEN: Generative Radiology Report Evaluation and Error Notation

Abstract
Evaluating radiology reports is a challenging problem, as factual correctness is extremely important for accurate medical communication about medical images. Existing automatic evaluation metrics either fail to consider factual correctness (e.g., BLEU and ROUGE) or are limited in their interpretability (e.g., F1CheXpert and F1RadGraph). In this paper, we introduce GREEN (Generative Radiology Report Evaluation and Error Notation), a radiology report generation metric that leverages the natural language understanding of language models to identify and explain clinically significant errors in candidate reports, both quantitatively and qualitatively. Compared to current metrics, GREEN offers: 1) a score aligned with expert preferences, 2) human-interpretable explanations of clinically significant errors, enabling feedback loops with end users, and 3) a lightweight open-source method that reaches the performance of commercial counterparts. We validate the GREEN metric by comparing it to GPT-4, as well as to the error counts of 6 experts and the preferences of 2 experts. Our method demonstrates not only higher correlation with expert error counts but also higher alignment with expert preferences than previous approaches.
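For context on the finding-level metrics the abstract contrasts with (F1CheXpert and F1RadGraph score overlap between findings extracted from reports), here is a minimal sketch of an entity-overlap F1 on hypothetical extracted findings; GREEN itself replaces this extraction-and-overlap step with a language model that also explains each error:

```python
def finding_f1(reference: set[str], candidate: set[str]) -> float:
    """F1 over sets of extracted findings (an entity-level overlap metric)."""
    if not reference and not candidate:
        return 1.0  # both reports report nothing: perfect agreement
    tp = len(reference & candidate)
    precision = tp / len(candidate) if candidate else 0.0
    recall = tp / len(reference) if reference else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical findings extracted from a reference and a generated report:
ref = {"cardiomegaly", "pleural effusion", "no pneumothorax"}
gen = {"cardiomegaly", "pleural effusion", "atelectasis"}
print(f"finding-level F1 = {finding_f1(ref, gen):.3f}")  # 2 of 3 findings overlap -> 0.667
```

Such a score says nothing about which mismatches are clinically significant or why, which is the interpretability gap GREEN targets.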


Jeong Hoon Lee, PhD
Postdoctoral Researcher
Department of Radiology
Stanford School of Medicine

Title: Leveraging Patch-Level Representation Learning with Vision Transformer for Prostate Cancer Foundation Models

Abstract:
Recent advances in self-supervised learning (SSL), an effective approach to building imaging foundation models, enable the pretraining of AI models across multiple domains without the need for labels. Despite this rapid progress, application to medical imaging remains challenging due to the subtle differences between cancer and normal tissue. To address this limitation, we propose ProViCNet, an AI architecture that employs a vision transformer (ViT)-based segmentation backbone with patch-level contrastive learning for better feature representation. We validated our model on prostate cancer detection tasks using three types of magnetic resonance imaging (MRI) across multiple centers. To evaluate the quality of the learned feature representations, we performed downstream tasks of Gleason grade and race prediction. Our model demonstrated significant performance improvements over state-of-the-art segmentation architectures. This study proposes a novel approach to developing foundation models for prostate cancer imaging that overcomes SSL limitations.
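A minimal numerical sketch of the patch-level contrastive objective the abstract describes: an InfoNCE-style loss that pulls each patch embedding toward its augmented view and pushes it away from other patches. The embeddings, temperature, and loss form below are common-formulation stand-ins, not ProViCNet's actual implementation:

```python
import numpy as np

def patch_infonce(anchors: np.ndarray, positives: np.ndarray, temperature: float = 0.1) -> float:
    """InfoNCE-style contrastive loss over patch embeddings.

    anchors, positives: (n_patches, dim) arrays; row i of `positives` is the
    positive pair for row i of `anchors`; all other rows act as negatives.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                  # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = len(a)
    return float(-log_probs[np.arange(n), np.arange(n)].mean())

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 64))                  # 16 patch embeddings, dim 64
views = patches + 0.05 * rng.normal(size=(16, 64))   # lightly augmented views
print(f"contrastive loss = {patch_infonce(patches, views):.3f}")
```

The loss is near zero when each patch is most similar to its own view, and rises toward log(n_patches) when the pairing carries no information, which is what drives the encoder to learn patch-discriminative features.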

Oct
16
Wed
2024
IBIIS & AIMI Seminar: Medical Image Segmentation and Synthesis @ Clark Center S360 - Zoom Details on IBIIS website
Oct 16 @ 12:00 pm – 1:00 pm

Ipek Oguz, PhD
Assistant Professor of Computer Science
Assistant Professor of Electrical and Computer Engineering
Assistant Professor of Biomedical Engineering
Vanderbilt University

Title: Medical Image Segmentation and Synthesis

Abstract
Segmentation and synthesis are two fundamental tasks in medical image computing. Segmentation refers to the delineation of the boundaries of a structure of interest in the image, such as an organ, a tumor, or a lesion. Synthesis refers to images created computationally from other data; common examples include cross-modality synthesis and image denoising. This talk will provide an overview of my lab’s recent work in these two broad algorithmic directions in the context of a wide range of medical imaging applications. These driving clinical problems include MR imaging of the brain, OCT imaging of the retina, ultrasound imaging of the placenta, and endoscopic imaging of the kidney. I will also illustrate many problem formulations where synthesis can be used to help segmentation, and vice versa.