Calendar

Apr
22
Wed
2020
IBIIS/AIMI Seminar – Tiwari @ ZOOM - See Description for Zoom link
Apr 22 @ 1:00 pm – 2:00 pm

Radiomics and Radio-Genomics: Opportunities for Precision Medicine

Zoom: https://stanford.zoom.us/j/99904033216?pwd=U2tTdUp0YWtneTNUb1E4V2x0OTFMQT09 

Pallavi Tiwari, PhD
Assistant Professor of Biomedical Engineering
Associate Member, Case Comprehensive Cancer Center
Director of Brain Image Computing Laboratory
School of Medicine | Case Western Reserve University


Abstract:
In this talk, Dr. Tiwari will focus on her lab’s recent efforts in developing radiomic (extracting computerized sub-visual features from radiologic imaging), radiogenomic (identifying radiologic features associated with molecular phenotypes), and radiopathomic (identifying radiologic features associated with pathologic phenotypes) techniques to capture insights into the underlying tumor biology as observed on non-invasive routine imaging. She will focus on clinical applications of this work for predicting disease outcome, recurrence, progression, and response to therapy, specifically in the context of brain tumors. She will also discuss current efforts in developing new radiomic features for post-treatment evaluation and for predicting response to chemo-radiation treatment. Dr. Tiwari will conclude with a discussion of her lab’s findings on combining AI with human experts, in the context of the clinically challenging problem of post-treatment response assessment on routine MRI scans.

Nov
18
Wed
2020
IBIIS & AIMI Seminar: Deep Tomographic Imaging @ Zoom: https://stanford.zoom.us/j/96731559276?pwd=WG5zcEFwSGlPcDRsOUFkVlRhcEs2Zz09
Nov 18 @ 12:00 pm – 1:00 pm

Ge Wang, PhD
Clark & Crossan Endowed Chair Professor
Director of the Biomedical Imaging Center
Rensselaer Polytechnic Institute
Troy, New York

Abstract:
AI-based tomography is an important application and a new frontier of machine learning. AI, especially deep learning, has been widely used in computer vision and image analysis, which deal with existing images, improving them and extracting features. Since 2016, deep learning techniques have been actively researched for tomography in the context of medicine. Tomographic reconstruction produces images of multi-dimensional structures from externally measured “encoded” data in the form of various transforms (integrals, harmonics, and so on). In this presentation, we provide a general background, highlight representative results, and discuss key issues that need to be addressed in this emerging field.
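The abstract's framing of reconstruction, recovering an image from externally measured "encoded" data, can be illustrated with a classical (non-deep-learning) toy example: the Radon transform and filtered back projection. This is a generic sketch using scikit-image, not any method from the presentation; the phantom and parameters are arbitrary.

```python
import numpy as np
from skimage.transform import radon, iradon

# Toy phantom: a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# "Encoding": line integrals (the Radon transform) at 90 projection
# angles, mimicking the externally measured data of a CT scan.
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(image, theta=theta)

# "Decoding": filtered back projection inverts the transform to
# recover the image from the sinogram.
recon = iradon(sinogram, theta=theta)

# The reconstruction closely matches the original phantom.
mse = np.mean((recon - image) ** 2)
print(f"reconstruction MSE: {mse:.4f}")
```

Deep tomographic methods typically learn parts of this pipeline (for example, the filtering step or a post-reconstruction denoiser) rather than applying only fixed analytic formulas.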

About:
The AI-based X-ray Imaging System (AXIS) lab is led by Dr. Ge Wang and affiliated with the Department of Biomedical Engineering at Rensselaer Polytechnic Institute and with the Center for Biotechnology and Interdisciplinary Studies through the Biomedical Imaging Center. The AXIS lab focuses on innovation and translation of x-ray computed tomography, optical molecular tomography, multi-scale and multi-modality imaging, and AI/machine learning for image reconstruction and analysis, and has been continuously well funded by federal agencies and leading companies. The AXIS group collaborates with Stanford, Harvard, Cornell, MSK, UTSW, Yale, GE, Hologic, and others to develop theories, methods, software, systems, applications, and workflows.

Jun
3
Thu
2021
IMMERS – Stanford Medical Mixed Reality Panel Discussion Series @ Zoom
Jun 3 @ 9:00 am – 10:30 am

Join us for a panel on Behavioral XR on Thursday, June 3rd from 9:00 – 10:30 am PDT.  The event will start with a one-hour panel discussion featuring Dr. Elizabeth McMahon, a psychologist with a private practice in California; Sarah Hill of Healium, a company developing XR apps for mental fitness based in Missouri; Christian Angern of Sympatient, a company developing VR for anxiety therapy based in Germany; and Marguerite Manteau-Rao of Penumbra, a medical device company based in California.  This panel will be moderated by Dr. Walter Greenleaf of Stanford’s Virtual Human Interaction Lab (VHIL) and Dr. Christoph Leuze of the Stanford Medical Mixed Reality (SMMR) program.  Immediately following the panel discussion, you are also invited to a 30-minute interactive session with the panelists where questions and ideas can be explored in real time.


Register here to save your place now!  After registering, you will receive a confirmation email containing information about joining the meeting.


Please visit this page to subscribe to our events mailing list.


Sponsored by Stanford Medical Mixed Reality (SMMR)

Apr
17
Wed
2024
IBIIS & AIMI Seminar: Building Fair and Trustworthy AI for Healthcare @ Clark Center S360 - Zoom Details on IBIIS website
Apr 17 @ 12:00 pm – 1:00 pm

Roxana Daneshjou, MD, PhD
Assistant Professor, Biomedical Data Science & Dermatology
Assistant Director, Center of Excellence for Precision Health & Pharmacogenomics
Director of Informatics, Stanford Skin Innovation and Interventional Research Group
Stanford University

Title: Building Fair and Trustworthy AI for Healthcare

Abstract: AI for healthcare has the potential to revolutionize how we practice medicine. However, to do this in a fair and trustworthy manner requires special attention to how AI models work and their potential biases. In this talk, I will cover the considerations for building AI systems that improve healthcare.

May
22
Wed
2024
IBIIS & AIMI Seminar: Facilitating Patient and Clinician Value Considerations into AI for Precision Medicine @ Clark Center S360 - Zoom Details on IBIIS website
May 22 @ 11:00 am – 12:00 pm

Mildred Cho, PhD
Professor of Pediatrics, Center of Biomedical Ethics
Professor of Medicine, Primary Care and Population Health
Stanford University

Title: Facilitating Patient and Clinician Value Considerations into AI for Precision Medicine

Abstract:
For the development of ethical machine learning (ML) for precision medicine, it is essential to understand how values play into the decision-making process of developers. We conducted five group design exercises with four developer participants each (N=20) who were asked to discuss and record their design considerations in a series of three hypothetical scenarios involving the design of a tool to predict progression to diabetes. In each group, the scenario was first presented as a research project, then as development of a clinical tool for a health care system, and finally as development of a clinical tool for their own health care system. Throughout, developers documented their process considerations using a virtual collaborative whiteboard platform. Our results suggest that developers more often considered client or user perspectives after the context of the scenario changed from research to a tool for a large healthcare setting. Furthermore, developers were more likely to express concerns arising from the patient perspective, and societal and ethical issues such as protection of privacy, after imagining themselves as patients in the health care system. Qualitative and quantitative data analysis also revealed that developers made reflective/reflexive statements more often in the third round of the design activity (44 times) than in the first (2) or second (6) rounds. These included statements on how the activity connected to their real-life work, what they could take away from the exercises and integrate into actual practice, and commentary on being patients within a health care system using AI. These findings suggest that ML developers can be encouraged to link the consequences of their actions to design choices by encouraging “empathy work” that directs them to take the perspectives of specific stakeholder groups.
This research could inform the creation of educational resources and exercises for developers to better align daily practices with stakeholder values and ethical ML design.

Jun
24
Mon
2024
IBIIS & AIMI Seminar: Deepening Collaboration with Stanford & Pennsylvania, Toward Developing Joint Strategies to Close the ‘Cancer Care’ & ‘Clinical Trial Volume’ Gap in LMICs @ Clark Center S360 - Zoom Details on IBIIS website
Jun 24 @ 12:30 pm – 1:30 pm

Ifeoma Okoye, MBBS, FWACS, FMCR
Professor of Radiology and Director
University of Nigeria Centre for Clinical Trials
College of Medicine, University of Nigeria

Title: Deepening Collaboration with Stanford & Pennsylvania, Toward Developing Joint Strategies to Close the ‘Cancer Care’ & ‘Clinical Trial Volume’ Gap in LMICs

Abstract
In this seminar I will be addressing the dire cancer survival outcomes in low- and middle-income countries (LMICs), with a particular focus on Sub-Saharan Africa. Cancer survival rates in Sub-Saharan Africa are alarmingly low. According to the World Health Organization, cancer deaths in LMICs account for approximately 70% of global cancer fatalities. In Nigeria, the five-year survival rate for breast cancer, one of the most common cancers, stands at a disheartening 10-30%, compared to over 80% in high-income countries. This stark disparity highlights the urgent need for sustained comprehensive cancer interventions in our region.

Here, I will discuss the pivotal role in the cancer control sphere of a new software tool, ONCOSEEK, capable of early detection of 11 types of cancer. Its particular emphasis on the patient perspective aligns with our ethos of holistic patient care. In addition, I will discuss recent developments in the collaborative effort with the Gevaert lab at Stanford University and the University of Pennsylvania.

Sep
18
Wed
2024
IBIIS & AIMI Seminar – “GREEN: Generative Radiology Report Evaluation and Error Notation” & “Leveraging Patch-Level Representation Learning with Vision Transformer for Prostate Cancer Foundation Models” @ Clark Center S360 - Zoom Details on IBIIS website
Sep 18 @ 12:00 pm – 1:00 pm

Sophie Ostmeier, MD
Postdoctoral Scholar
Department of Radiology
Stanford School of Medicine

Title: GREEN: Generative Radiology Report Evaluation and Error Notation

Abstract
Evaluating radiology reports is a challenging problem as factual correctness is extremely important due to the need for accurate medical communication about medical images. Existing automatic evaluation metrics either suffer from failing to consider factual correctness (e.g., BLEU and ROUGE) or are limited in their interpretability (e.g., F1CheXpert and F1RadGraph). In this paper, we introduce GREEN (Generative Radiology Report Evaluation and Error Notation), a radiology report generation metric that leverages the natural language understanding of language models to identify and explain clinically significant errors in candidate reports, both quantitatively and qualitatively. Compared to current metrics, GREEN offers: 1) a score aligned with expert preferences, 2) human interpretable explanations of clinically significant errors, enabling feedback loops with end-users, and 3) a lightweight open-source method that reaches the performance of commercial counterparts. We validate our GREEN metric by comparing it to GPT-4, as well as to error counts of 6 experts and preferences of 2 experts. Our method demonstrates not only higher correlation with expert error counts, but simultaneously higher alignment with expert preferences when compared to previous approaches.


Jeong Hoon Lee, PhD
Postdoctoral Researcher
Department of Radiology
Stanford School of Medicine

Title: Leveraging Patch-Level Representation Learning with Vision Transformer for Prostate Cancer Foundation Models

Abstract:
Recent advancements in self-supervised learning (SSL), emerging as an effective approach for imaging foundation models, enable the effective pretraining of AI models across multiple domains without the need for labels. Despite these rapid advancements, application in medical imaging remains challenging due to the subtle differences between cancer and normal tissue. To address this limitation, in this study we propose an AI architecture, ProViCNet, that employs a vision transformer (ViT)-based segmentation architecture with patch-level contrastive learning for better feature representation. We validated our model on prostate cancer detection tasks using three types of magnetic resonance imaging (MRI) across multiple centers. To evaluate the quality of the feature representations in this model, we performed downstream tasks for Gleason grade and race prediction. Our model demonstrated significant performance improvements compared to state-of-the-art segmentation architectures. This study proposes a novel approach to developing foundation models for prostate cancer imaging that overcomes SSL limitations.
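Patch-level contrastive learning of the kind the abstract describes can be sketched as an InfoNCE objective over per-patch embeddings: each patch is pulled toward its own augmented view and pushed away from other patches. The following is a minimal NumPy illustration; the `patch_info_nce` helper and the toy random embeddings are hypothetical, not ProViCNet's actual implementation.

```python
import numpy as np

def patch_info_nce(anchor, positive, temperature=0.1):
    """InfoNCE loss over patch embeddings: each anchor patch should match
    its positive (an augmented view of the same patch) against all others."""
    # L2-normalize so the dot product is cosine similarity.
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positive / np.linalg.norm(positive, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matching patch pairs lie on the diagonal.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 32))                  # 16 patch embeddings, dim 32
views = patches + 0.01 * rng.normal(size=(16, 32))   # slightly perturbed views
loss_matched = patch_info_nce(patches, views)
loss_random = patch_info_nce(patches, rng.normal(size=(16, 32)))
```

With matched views the loss is low, because each patch is most similar to its own view; with unrelated embeddings it approaches the uniform baseline, which is the pressure that drives the encoder toward discriminative per-patch features.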

Oct
16
Wed
2024
IBIIS & AIMI Seminar: Medical Image Segmentation and Synthesis @ Clark Center S360 - Zoom Details on IBIIS website
Oct 16 @ 12:00 pm – 1:00 pm

Ipek Oguz, PhD
Assistant Professor of Computer Science
Assistant Professor of Electrical and Computer Engineering
Assistant Professor of Biomedical Engineering
Vanderbilt University

Title: Medical Image Segmentation and Synthesis

Abstract
Segmentation and synthesis are two fundamental tasks in medical image computing. Segmentation refers to the delineation of the boundaries of a structure of interest in the image, such as an organ, a tumor, or a lesion. Synthesis refers to images created computationally from other data; common examples include cross-modality synthesis and image denoising. This talk will provide an overview of my lab’s recent work in these two broad algorithmic directions in the context of a wide range of medical imaging applications. These driving clinical problems include MR imaging of the brain, OCT imaging of the retina, ultrasound imaging of the placenta, and endoscopic imaging of the kidney. I will also illustrate many problem formulations where synthesis can be used to help segmentation, and vice versa.