Calendar

Mon, Jun 24, 2024
IBIIS & AIMI Seminar: Deepening Collaboration with Stanford & Pennsylvania, Toward Developing Joint Strategies to Close the ‘Cancer Care’ & ‘Clinical Trial Volume’ Gap in LMICs @ Clark Center S360 - Zoom Details on IBIIS website
Jun 24 @ 12:30 pm – 1:30 pm

Ifeoma Okoye MBBS, FWACS, FMCR 
Professor of Radiology and Director
University of Nigeria Centre for Clinical Trials
College of Medicine, University of Nigeria

Title: Deepening Collaboration with Stanford & Pennsylvania, Toward Developing Joint Strategies to Close the ‘Cancer Care’ & ‘Clinical Trial Volume’ Gap in LMICs

Abstract
In this seminar I will address the dire cancer survival outcomes in low- and middle-income countries (LMICs), with a particular focus on Sub-Saharan Africa. Cancer survival rates in Sub-Saharan Africa are alarmingly low. According to the World Health Organization, cancer deaths in LMICs account for approximately 70% of global cancer fatalities. In Nigeria, the five-year survival rate for breast cancer, one of the most common cancers, stands at a disheartening 10-30%, compared to over 80% in high-income countries. This stark disparity highlights the urgent need for sustained, comprehensive cancer interventions in our region.

Here, I will discuss the pivotal role in the cancer control sphere of a new software, ONCOSEEK, capable of early detection of 11 types of cancer. Its particular emphasis on the patient perspective aligns with our ethos of holistic patient care. In addition, I will discuss recent developments in our collaborative efforts with the Gevaert lab at Stanford University and with the University of Pennsylvania.

Wed, Sep 18, 2024
IBIIS & AIMI Seminar – “GREEN: Generative Radiology Report Evaluation and Error Notation” & “Leveraging Patch-Level Representation Learning with Vision Transformer for Prostate Cancer Foundation Models” @ Clark Center S360 - Zoom Details on IBIIS website
Sep 18 @ 12:00 pm – 1:00 pm

Sophie Ostmeier, MD
Postdoctoral Scholar
Department of Radiology
Stanford School of Medicine

Title: GREEN: Generative Radiology Report Evaluation and Error Notation

Abstract
Evaluating radiology reports is a challenging problem because factual correctness is essential for accurate medical communication about medical images. Existing automatic evaluation metrics either fail to consider factual correctness (e.g., BLEU and ROUGE) or are limited in their interpretability (e.g., F1CheXpert and F1RadGraph). In this paper, we introduce GREEN (Generative Radiology Report Evaluation and Error Notation), a radiology report generation metric that leverages the natural language understanding of language models to identify and explain clinically significant errors in candidate reports, both quantitatively and qualitatively. Compared to current metrics, GREEN offers: 1) a score aligned with expert preferences, 2) human-interpretable explanations of clinically significant errors, enabling feedback loops with end users, and 3) a lightweight open-source method that reaches the performance of commercial counterparts. We validate our GREEN metric by comparing it to GPT-4, as well as to error counts of 6 experts and preferences of 2 experts. Our method demonstrates not only higher correlation with expert error counts, but simultaneously higher alignment with expert preferences when compared to previous approaches.


Jeong Hoon Lee, PhD
Postdoctoral Researcher
Department of Radiology
Stanford School of Medicine

Title: Leveraging Patch-Level Representation Learning with Vision Transformer for Prostate Cancer Foundation Models

Abstract
Recent advancements in self-supervised learning (SSL), which has emerged as an effective approach for imaging foundation models, enable the pretraining of AI models across multiple domains without the need for labels. Despite this rapid progress, applying SSL to medical imaging remains challenging due to the subtle differences between cancer and normal tissue. To address this limitation, in this study we propose ProViCNet, an AI architecture that employs a vision transformer (ViT)-based segmentation architecture with patch-level contrastive learning for better feature representation. We validated our model on prostate cancer detection tasks using three types of magnetic resonance imaging (MRI) across multiple centers. To evaluate the quality of this model’s feature representations, we performed downstream tasks predicting Gleason grade score and race. Our model demonstrated significant performance improvements compared to state-of-the-art segmentation architectures. This study proposes a novel approach to developing foundation models for prostate cancer imaging that overcomes these SSL limitations.

Wed, Oct 16, 2024
IBIIS & AIMI Seminar: Medical Image Segmentation and Synthesis @ Clark Center S360 - Zoom Details on IBIIS website
Oct 16 @ 12:00 pm – 1:00 pm

Ipek Oguz, PhD
Assistant Professor of Computer Science
Assistant Professor of Electrical and Computer Engineering
Assistant Professor of Biomedical Engineering
Vanderbilt University

Title: Medical Image Segmentation and Synthesis

Abstract
Segmentation and synthesis are two fundamental tasks in medical image computing. Segmentation refers to the delineation of the boundaries of a structure of interest in the image, such as an organ, a tumor, or a lesion. Synthesis refers to images created computationally from other data; common examples include cross-modality synthesis and image denoising. This talk will provide an overview of my lab’s recent work in these two broad algorithmic directions in the context of a wide range of medical imaging applications. These driving clinical problems include MR imaging of the brain, OCT imaging of the retina, ultrasound imaging of the placenta, and endoscopic imaging of the kidney. I will also illustrate many problem formulations where synthesis can be used to help segmentation, and vice versa.