Calendar

Wed, Nov 15, 2023
IBIIS & AIMI Seminar: Why AI Should Replace Radiologists
9:00 am – 10:00 am @ ZOOM: https://stanford.zoom.us/j/97076943141?pwd=Z2E5eGtaUDdNVklEYVNpcDJzcy9sdz09

Bram van Ginneken, PhD
Professor of Medical Image Analysis
Chair of the Diagnostic Image Analysis Group
Radboud University Medical Center

Title: Why AI Should Replace Radiologists

Abstract:
In this talk, I will provide arguments for the thesis that nearly all diagnostic radiology could be performed by computers and that the notion that AI will not replace radiologists is only temporarily true. Some well-known and lesser-known examples of AI systems analyzing medical images with stand-alone performance on par with or beyond that of human experts will be presented. I will show that systems built by academia, in collaborative efforts, may even outperform commercially available systems. Next, I will sketch a way forward to implement automated diagnostic radiology and argue that this is needed to keep healthcare affordable in societies wrestling with aging populations. Some pitfalls, like excessive demands for trials, will be discussed. The key to success is to convince radiologists to take the lead in this process. They need to collaborate with AI developers, but AI developers and the medical device industry should not lead this process. Radiologists should, in fact, stop training radiologists and instead start training machines.

Wed, Mar 20, 2024
IBIIS & AIMI Seminar: NCI Imaging Data Commons: Towards Transparency, Reproducibility, and Scalability in Imaging AI
12:00 pm – 1:00 pm @ Clark Center S360 – Zoom details on IBIIS website

Andrey Fedorov, PhD
Associate Professor, Harvard Medical School
Lead Investigator, Brigham and Women’s Hospital

Title: NCI Imaging Data Commons: Towards Transparency, Reproducibility, and Scalability in Imaging AI

Abstract:
The remarkable advances of artificial intelligence (AI) technology are revolutionizing established approaches to the acquisition, interpretation, and analysis of biomedical imaging data. Development, validation, and continuous refinement of AI tools require easy access to large, high-quality annotated datasets that are both representative and diverse. The National Cancer Institute (NCI) Imaging Data Commons (IDC) hosts over 50 TB of diverse publicly available cancer image data spanning the radiology and microscopy domains. By harmonizing all data based on industry standards and colocalizing it with analysis and exploration resources, IDC aims to facilitate the development, validation, and clinical translation of AI tools and address the well-documented challenges of establishing reproducible and transparent AI processing pipelines. Balanced use of established commercial products with open-source solutions, interconnected by standard interfaces, provides value and performance while preserving sufficient agility to address the evolving needs of the research community. Emphasis on the development of tools, use cases to demonstrate the utility of uniform data representation, and cloud-based analysis aims to ease adoption and help define best practices. Integration with other data in the broader NCI Cancer Research Data Commons infrastructure opens opportunities for multiomics studies incorporating imaging data to further empower the research community to accelerate breakthroughs in cancer detection, diagnosis, and treatment. The presentation will discuss recent developments in IDC, highlighting resources, demonstrations, and examples that we hope can help you improve your everyday imaging research practices, whether they use public or internal datasets.

Wed, Apr 17, 2024
IBIIS & AIMI Seminar: Building Fair and Trustworthy AI for Healthcare
12:00 pm – 1:00 pm @ Clark Center S360 – Zoom details on IBIIS website

Roxana Daneshjou, MD, PhD
Assistant Professor, Biomedical Data Science & Dermatology
Assistant Director, Center of Excellence for Precision Health & Pharmacogenomics
Director of Informatics, Stanford Skin Innovation and Interventional Research Group
Stanford University

Title: Building Fair and Trustworthy AI for Healthcare

Abstract: AI for healthcare has the potential to revolutionize how we practice medicine. However, to do this in a fair and trustworthy manner requires special attention to how AI models work and their potential biases. In this talk, I will cover the considerations for building AI systems that improve healthcare.

Wed, May 22, 2024
IBIIS & AIMI Seminar: Facilitating Patient and Clinician Value Considerations into AI for Precision Medicine
11:00 am – 12:00 pm @ Clark Center S360 – Zoom details on IBIIS website

Mildred Cho, PhD
Professor of Pediatrics, Center of Biomedical Ethics
Professor of Medicine, Primary Care and Population Health
Stanford University

Title: Facilitating Patient and Clinician Value Considerations into AI for Precision Medicine

Abstract:
For the development of ethical machine learning (ML) for precision medicine, it is essential to understand how values play into the decision-making process of developers. We conducted five group design exercises with four developer participants each (N=20) who were asked to discuss and record their design considerations in a series of three hypothetical scenarios involving the design of a tool to predict progression to diabetes. In each group, the scenario was first presented as a research project, then as development of a clinical tool for a health care system, and finally as development of a clinical tool for their own health care system. Throughout, developers documented their process considerations using a virtual collaborative whiteboard platform. Our results suggest that developers more often considered client or user perspectives after the context of the scenario changed from research to a tool for a large health care setting. Furthermore, developers were more likely to express concerns arising from the patient perspective, as well as societal and ethical issues such as protection of privacy, after imagining themselves as patients in the health care system. Qualitative and quantitative data analysis also revealed that developers made reflective/reflexive statements more often in the third round of the design activity (44 times) than in the first (2) or second (6) rounds. These included statements on how the activity connected to their real-life work, what they could take away from the exercises and integrate into actual practice, and commentary on being patients within a health care system using AI. These findings suggest that ML developers can be encouraged to link the consequences of their actions to design choices by encouraging "empathy work" that directs them to take the perspectives of specific stakeholder groups. This research could inform the creation of educational resources and exercises for developers to better align daily practices with stakeholder values and ethical ML design.