Hoifung Poon, PhD
General Manager, Health Futures, Microsoft Research
Affiliate Professor at the University of Washington School of Medicine
Title: Advancing Health at the Speed of AI
Abstract: The dream of precision health is to develop a data-driven, continuous learning system where new health information is instantly incorporated to optimize care delivery and accelerate biomedical discovery. In reality, however, the health ecosystem is plagued by overwhelming unstructured data and unscalable manual processing. Self-supervised AI such as large language models (LLMs) can supercharge structuring of biomedical data and accelerate transformation towards precision health. In this talk, I’ll present our research progress on biomedical AI for precision health, spanning biomedical LLMs, multi-modal learning, and causal discovery. This enables us to extract knowledge from tens of millions of publications, structure real-world data for millions of cancer patients, and apply the extracted knowledge and real-world evidence to advancing precision oncology in deep partnerships with real-world stakeholders.
Despina Kontos, PhD
Matthew J. Wilson Professor of Research Radiology II
Associate Vice-Chair for Research, Department of Radiology
Perelman School of Medicine
University of Pennsylvania
Title: Radiomics and Radiogenomics: The Role of Imaging, Machine Learning, and AI as a Biomarker for Cancer Prognostication and Therapy Response Evaluation
Abstract: Cancer is a heterogeneous disease, with known inter-tumor and intra-tumor heterogeneity in solid tumors. Established histopathologic prognostic biomarkers, generally acquired from a tumor biopsy, may be limited by sampling variation. Radiomics is an emerging field with the potential to leverage the whole tumor, via the non-invasive sampling afforded by medical imaging, to extract high-throughput, quantitative features for personalized tumor characterization. Identifying imaging phenotypes via radiomics analysis and understanding their relationship with prognostic markers and patient outcomes can allow for a non-invasive assessment of tumor heterogeneity. Recent studies have shown that intrinsic radiomic phenotypes of tumor heterogeneity may have independent prognostic value in predicting disease aggressiveness and recurrence. This independent prognostic value suggests that radiogenomic phenotypes can provide a non-invasive characterization of tumor heterogeneity to augment genomic assays in precision prognosis and treatment.
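To make the notion of high-throughput radiomic feature extraction concrete, here is a minimal sketch using the open-source pyradiomics package, assuming it is installed and pointed at a real tumor image with its segmentation mask; the file names below are placeholders.

```python
# Minimal radiomic feature extraction sketch using pyradiomics.
# The image/mask file names are placeholders for a real tumor image
# and its segmentation.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName('firstorder')  # intensity statistics
extractor.enableFeatureClassByName('glcm')        # texture features capturing heterogeneity

features = extractor.execute('tumor_image.nrrd', 'tumor_mask.nrrd')
for name, value in features.items():
    if not name.startswith('diagnostics'):  # skip provenance metadata
        print(name, value)
```

Features of this kind, computed over the whole tumor rather than a biopsy sample, are the raw material that radiomic phenotype analyses cluster and correlate with outcomes.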
Daguang Xu, PhD
Senior Research Manager
NVIDIA Healthcare
Title: Industrial Applied Research in Healthcare and Federated Learning at NVIDIA
Abstract: As the market leader in deep learning and parallel computing, NVIDIA is fully committed to advancing applied research in medical imaging. Our goal is to revolutionize the capabilities of medical doctors and radiologists by equipping them with powerful tools and applications based on deep learning. We firmly believe that the integration of deep learning and accelerated AI will have a profound impact on the life sciences, medicine, and the healthcare industry as a whole. To drive this transformative process, NVIDIA is actively democratizing deep learning through the provision of a comprehensive AI computing platform specifically designed for the healthcare community. These GPU-accelerated solutions not only promote collaboration but also prioritize the security of each institution’s information. By doing so, we are fostering a collective effort in harnessing the potential of deep learning to benefit healthcare.
During this talk, I will showcase research achievements by NVIDIA's medical imaging deep learning team, including breakthroughs in segmentation, self-supervised learning, federated learning, and other related areas. Additionally, I will provide insights into the exciting avenues of research that our team is currently exploring.
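As a concrete illustration of the federated learning idea, here is a minimal sketch of federated averaging (FedAvg), the canonical aggregation scheme: each institution trains locally and shares only model weights, never raw patient data. All names are illustrative; NVIDIA's actual tooling (e.g., NVIDIA FLARE) exposes a different, richer API.

```python
# Minimal FedAvg sketch: sites train locally; a server averages their weights,
# weighted by local dataset size. No raw data ever leaves a site.
import numpy as np

def local_update(weights, lr=0.01, steps=10):
    """Stand-in for one site's local training loop."""
    w = weights.copy()
    for _ in range(steps):
        grad = np.random.randn(*w.shape) * 0.01  # placeholder for a real gradient
        w -= lr * grad
    return w

def fedavg(site_weights, site_sizes):
    """Average site models, weighting each by its number of local examples."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# One communication round across three hypothetical hospitals.
global_w = np.zeros(100)
site_sizes = [500, 1200, 300]
updates = [local_update(global_w) for _ in site_sizes]
global_w = fedavg(updates, site_sizes)
```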
Negar Golestani, PhD
Postdoctoral Research Fellow
Department of Radiology
Stanford University
Title: AI in Radiology-Pathology Fusion Towards Precise Breast Cancer Detection
Abstract: Breast cancer is a global public health concern with various treatment options based on tumor characteristics. Pathological examination of excised tissue after surgery provides important information for treatment decisions. This processing, which involves manually selecting representative sections for histological examination, is time-consuming and subjective and can lead to sampling errors. Accurately identifying residual tumors is challenging, which highlights the need for systematic or assisted methods. Radiology-pathology registration is essential for developing deep-learning algorithms that automate cancer detection on radiology images. However, aligning faxitron and histopathology images is difficult due to differences in content and resolution, tissue deformation, artifacts, and imprecise correspondence. We propose a novel deep learning-based pipeline for affine registration of faxitron images (x-ray representations of macrosections of ex vivo breast tissue) with their corresponding histopathology images. Our model combines convolutional neural networks (CNNs) and vision transformers (ViTs), capturing local and global information from the entire tissue macrosection and its segments. This integrated approach enables simultaneous registration and stitching of image segments, facilitating segment-to-macrosection registration through a puzzle-based mechanism. To overcome the limited availability of multi-modal ground-truth data, we train the model on synthetic mono-modal data in a weakly supervised manner. The trained model successfully performs multi-modal registration, outperforms existing baselines, including deep learning-based and iterative models, and is approximately 200 times faster than the iterative approach. The proposed registration method allows precise mapping of pathology labels onto radiology images, thereby establishing ground-truth labels for training classification and detection models on radiological data. This work bridges the gap between current research and the clinical workflow, offering potential improvements in efficiency and accuracy for breast cancer evaluation and streamlining the pathology workflow.
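For readers unfamiliar with learned affine registration, the sketch below shows the core idea under simplified assumptions: a small network regresses the six parameters of a 2x3 affine matrix from a moving/fixed image pair and warps the moving image accordingly. The CNN+ViT hybrid, puzzle-based stitching, and weakly supervised training described above are omitted, and all names are hypothetical, not the authors' implementation.

```python
# Minimal learned affine registration sketch in PyTorch: regress an affine
# transform from a (moving, fixed) image pair and warp the moving image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)  # 6 parameters of a 2x3 affine matrix
        # Initialize to the identity transform so training starts from "no warp".
        nn.init.zeros_(self.head.weight)
        self.head.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, moving, fixed):
        theta = self.head(self.encoder(torch.cat([moving, fixed], dim=1)))
        grid = F.affine_grid(theta.view(-1, 2, 3), moving.size(), align_corners=False)
        return F.grid_sample(moving, grid, align_corners=False)

# Synthetic mono-modal training step: warp an image, then learn to undo it.
net = AffineRegNet()
fixed = torch.rand(1, 1, 128, 128)      # e.g., a faxitron macrosection
moving = torch.roll(fixed, 5, dims=-1)  # synthetic misalignment
loss = F.mse_loss(net(moving, fixed), fixed)
loss.backward()
```

Training on synthetically warped mono-modal pairs, as in the abstract, provides exact ground-truth transforms for free.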
Jean Benoit Delbrouck, PhD
Research Scientist
Department of Radiology
Stanford University
Title: Generating Accurate and Factually Correct Medical Text
Abstract: Generating factually correct medical text is of utmost importance for several reasons. Firstly, patient safety depends heavily on accurate information, as medical decisions are often made based on the information provided. Secondly, trust in AI as a reliable tool in the medical field is essential, and this trust can only be established by generating accurate and reliable medical text. Lastly, medical research also relies heavily on accurate information for meaningful results.
Recent studies have explored new approaches for generating medical text from images or findings, ranging from pretraining to reinforcement learning to leveraging expert annotations. However, a potential game changer in the field is the integration of GPT models into pipelines for generating factually correct medical text for research or production purposes.
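As a hedged sketch of what such an integration might look like, the code below uses a GPT-style model as a factuality filter over a report generator; `call_llm` is a stand-in for any chat-completion API, and the prompt wording and JSON contract are illustrative assumptions rather than the speaker's method.

```python
# Hypothetical sketch: a GPT-style model acting as a factuality filter in a
# medical report-generation pipeline.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call (e.g., a hosted or local model)."""
    raise NotImplementedError

def check_factuality(findings: str, draft_report: str) -> dict:
    prompt = (
        "You are a radiology QA assistant. Compare the draft report to the "
        "ground-truth findings. Return JSON: "
        '{"unsupported": [...], "consistent": true/false}\n\n'
        f"Findings:\n{findings}\n\nDraft report:\n{draft_report}"
    )
    return json.loads(call_llm(prompt))

def generate_verified_report(findings: str, generator, max_tries: int = 3) -> str:
    """Regenerate until the checker finds no unsupported statements."""
    for _ in range(max_tries):
        draft = generator(findings)
        if check_factuality(findings, draft)["consistent"]:
            return draft
    return draft  # fall back to the last draft, flagged for human review
```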
Bram van Ginneken, PhD
Professor of Medical Image Analysis
Chair of the Diagnostic Image Analysis Group
Radboud University Medical Center
Title: Why AI Should Replace Radiologists
Abstract:
In this talk, I will provide arguments for the thesis that nearly all diagnostic radiology could be performed by computers and that the notion that AI will not replace radiologists is only temporarily true. Some well-known and lesser-known examples of AI systems analyzing medical images with stand-alone performance on par with or beyond that of human experts will be presented. I will show that systems built by academia, in collaborative efforts, may even outperform commercially available systems. Next, I will sketch a way forward to implement automated diagnostic radiology and argue that this is needed to keep healthcare affordable in societies wrestling with aging populations. Some pitfalls, like excessive demands for trials, will be discussed. The key to success is to convince radiologists to take the lead in this process. They need to collaborate with AI developers, but AI developers and the medical device industry should not be the ones leading it. Radiologists should, in fact, stop training radiologists and instead start training machines.
Andrey Fedorov, PhD
Associate Professor, Harvard Medical School
Lead Investigator, Brigham and Women’s Hospital
Title: NCI Imaging Data Commons: Towards Transparency, Reproducibility, and Scalability in Imaging AI
Abstract:
The remarkable advances of artificial intelligence (AI) technology are revolutionizing established approaches to the acquisition, interpretation, and analysis of biomedical imaging data. Development, validation, and continuous refinement of AI tools require easy access to large, high-quality annotated datasets that are both representative and diverse. The National Cancer Institute (NCI) Imaging Data Commons (IDC) hosts over 50 TB of diverse, publicly available cancer image data spanning the radiology and microscopy domains. By harmonizing all data based on industry standards and colocalizing it with analysis and exploration resources, IDC aims to facilitate the development, validation, and clinical translation of AI tools and to address the well-documented challenges of establishing reproducible and transparent AI processing pipelines. Balanced use of established commercial products alongside open-source solutions, interconnected by standard interfaces, provides value and performance while preserving sufficient agility to address the evolving needs of the research community. Emphasis on the development of tools, on use cases that demonstrate the utility of uniform data representation, and on cloud-based analysis aims to ease adoption and help define best practices. Integration with other data in the broader NCI Cancer Research Data Commons infrastructure opens opportunities for multiomics studies incorporating imaging data, further empowering the research community to accelerate breakthroughs in cancer detection, diagnosis, and treatment. The presentation will discuss recent developments in IDC, highlighting resources, demonstrations, and examples that we hope can help improve everyday imaging research practices, whether they rely on public or internal datasets.
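As a small example of the programmatic access IDC enables, the query below retrieves series-level DICOM metadata from IDC's public BigQuery tables. The dataset, table, column, and collection names reflect IDC's public metadata as I understand it and should be verified against current IDC documentation; a configured Google Cloud project with BigQuery access is assumed.

```python
# Querying IDC's public DICOM metadata in BigQuery to find MR series in one
# collection. Table and column names should be checked against current IDC
# documentation; the collection_id is just an example.
from google.cloud import bigquery

client = bigquery.Client()  # assumes a configured Google Cloud project
query = """
    SELECT collection_id, PatientID, SeriesInstanceUID, Modality
    FROM `bigquery-public-data.idc_current.dicom_all`
    WHERE Modality = 'MR'
      AND collection_id = 'qin_prostate_repeatability'
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.collection_id, row.PatientID, row.SeriesInstanceUID)
```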
Roxana Daneshjou, MD, PhD
Assistant Professor, Biomedical Data Science & Dermatology
Assistant Director, Center of Excellence for Precision Health & Pharmacogenomics
Director of Informatics, Stanford Skin Innovation and Interventional Research Group
Stanford University
Title: Building Fair and Trustworthy AI for Healthcare
Abstract: AI for healthcare has the potential to revolutionize how we practice medicine. However, to do this in a fair and trustworthy manner requires special attention to how AI models work and their potential biases. In this talk, I will cover the considerations for building AI systems that improve healthcare.
Mildred Cho, PhD
Professor of Pediatrics, Center of Biomedical Ethics
Professor of Medicine, Primary Care and Population Health
Stanford University
Title: Facilitating Patient and Clinician Value Considerations into AI for Precision Medicine
Abstract:
For the development of ethical machine learning (ML) for precision medicine, it is essential to understand how values play into developers' decision-making processes. We conducted five group design exercises with four developer participants each (N=20), in which participants were asked to discuss and record their design considerations in a series of three hypothetical scenarios involving the design of a tool to predict progression to diabetes. In each group, the scenario was first presented as a research project, then as development of a clinical tool for a health care system, and finally as development of a clinical tool for the participants' own health care system. Throughout, developers documented their process considerations using a virtual collaborative whiteboard platform. Our results suggest that developers more often considered client or user perspectives after the context of the scenario changed from research to a tool for a large health care setting. Furthermore, developers were more likely to express concerns arising from the patient perspective, as well as societal and ethical issues such as protection of privacy, after imagining themselves as patients in the health care system. Qualitative and quantitative data analysis also revealed that developers made reflective/reflexive statements more often in the third round of the design activity (44 times) than in the first (2) or second (6) rounds. These included comments on how the activity connected to their real-life work, on what they could take away from the exercises and integrate into actual practice, and on being patients within a health care system using AI. These findings suggest that ML developers can be encouraged to link the consequences of their actions to design choices through "empathy work" that directs them to take the perspectives of specific stakeholder groups. This research could inform the creation of educational resources and exercises for developers to better align daily practices with stakeholder values and ethical ML design.
Ifeoma Okoye, MBBS, FWACS, FMCR
Professor of Radiology and Director
University of Nigeria Centre for Clinical Trials
College of Medicine, University of Nigeria
Title: Deepening Collaboration with Stanford & Pennsylvania, Toward Developing Joint Strategies to Close the ‘Cancer Care’ & ‘Clinical Trial Volume’ Gap in LMICs
Abstract:
In this seminar, I will address the dire cancer survival outcomes in low- and middle-income countries (LMICs), with a particular focus on Sub-Saharan Africa. Cancer survival rates in Sub-Saharan Africa are alarmingly low. According to the World Health Organization, cancer deaths in LMICs account for approximately 70% of global cancer fatalities. In Nigeria, the five-year survival rate for breast cancer, one of the most common cancers, stands at a disheartening 10-30%, compared to over 80% in high-income countries. This stark disparity highlights the urgent need for sustained, comprehensive cancer interventions in our region.
I will also discuss the pivotal role in cancer control of ONCOSEEK, a new software tool capable of early detection of 11 types of cancer, and its particular emphasis on the patient perspective, which aligns with our ethos of holistic patient care. In addition, I will discuss recent developments in our collaborative efforts with the Gevaert lab at Stanford University and with the University of Pennsylvania.
Sophie Ostmeier, MD
Postdoctoral Scholar
Department of Radiology
Stanford School of Medicine
Title: GREEN: Generative Radiology Report Evaluation and Error Notation
Abstract:
Evaluating radiology reports is a challenging problem, as factual correctness is critical for accurate medical communication about medical images. Existing automatic evaluation metrics either fail to consider factual correctness (e.g., BLEU and ROUGE) or are limited in their interpretability (e.g., F1CheXpert and F1RadGraph). We introduce GREEN (Generative Radiology Report Evaluation and Error Notation), a radiology report generation metric that leverages the natural language understanding of language models to identify and explain clinically significant errors in candidate reports, both quantitatively and qualitatively. Compared to current metrics, GREEN offers: 1) a score aligned with expert preferences, 2) human-interpretable explanations of clinically significant errors, enabling feedback loops with end users, and 3) a lightweight open-source method that reaches the performance of commercial counterparts. We validate the GREEN metric by comparing it to GPT-4, as well as to the error counts of 6 experts and the preferences of 2 experts. Our method demonstrates not only higher correlation with expert error counts but also higher alignment with expert preferences compared to previous approaches.
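To give a flavor of how an LLM-based metric of this kind can be computed, here is a simplified sketch in the spirit of GREEN; the prompt and the reduction of error counts to a bounded score are illustrative assumptions, not the exact GREEN implementation.

```python
# Illustrative LLM-as-judge report metric: a language model enumerates
# clinically significant errors, and the counts are reduced to a [0, 1] score.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for any instruction-tuned language model."""
    raise NotImplementedError

def green_style_score(reference: str, candidate: str) -> float:
    prompt = (
        "Compare the candidate radiology report to the reference. Return JSON "
        'as {"matched_findings": int, "significant_errors": [str, ...]}, '
        "listing each clinically significant error with an explanation.\n\n"
        f"Reference:\n{reference}\n\nCandidate:\n{candidate}"
    )
    result = json.loads(call_llm(prompt))
    matched = result["matched_findings"]
    errors = len(result["significant_errors"])
    # 1.0 means every finding matched with no clinically significant errors.
    return matched / max(matched + errors, 1)
```

The listed error explanations, not just the score, are what enable the human-interpretable feedback loops mentioned above.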
Jeong Hoon Lee, PhD
Postdoctoral Researcher
Department of Radiology
Stanford School of Medicine
Title: Leveraging Patch-Level Representation Learning with Vision Transformer for Prostate Cancer Foundation Models
Abstract:
Recent advancements in self-supervised learning (SSL), which is emerging as an effective approach for imaging foundation models, enable pretraining of AI models across multiple domains without the need for labels. Despite this rapid progress, applying SSL to medical imaging remains challenging due to the subtle differences between cancerous and normal tissue. To address this limitation, we propose ProViCNet, an AI architecture that combines a vision transformer (ViT)-based segmentation backbone with patch-level contrastive learning for better feature representation. We validated our model on prostate cancer detection tasks using three types of magnetic resonance imaging (MRI) across multiple centers. To evaluate the quality of the learned feature representations, we performed downstream tasks of Gleason grade and race prediction. Our model demonstrated significant performance improvements over state-of-the-art segmentation architectures. This study proposes a novel approach to developing foundation models for prostate cancer imaging that overcomes these SSL limitations.
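To illustrate the patch-level contrastive component in isolation, the sketch below applies an InfoNCE loss over patch-aligned embeddings from two augmented views of the same image: corresponding patches are pulled together, all others pushed apart. The linear encoder and simulated augmentation are stand-ins, not the actual ProViCNet implementation.

```python
# Patch-level contrastive learning sketch: InfoNCE over patch-aligned
# embeddings from two views of the same image.
import torch
import torch.nn.functional as F

def patch_infonce(z1, z2, temperature=0.1):
    """z1, z2: (num_patches, dim) embeddings of two views, patch-aligned."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature  # (P, P) patch-to-patch similarities
    targets = torch.arange(z1.size(0))  # positive pair = same patch index
    return F.cross_entropy(logits, targets)

# Toy usage: 196 patch tokens (14x14 grid) with 256-dim projections per view.
encoder = torch.nn.Linear(768, 256)  # stand-in for a ViT plus projection head
tokens_view1 = torch.randn(196, 768)
tokens_view2 = tokens_view1 + 0.05 * torch.randn(196, 768)  # simulated augmentation
loss = patch_infonce(encoder(tokens_view1), encoder(tokens_view2))
loss.backward()
```

Operating at the patch level, rather than on whole-image embeddings, is what lets the representation remain sensitive to the subtle, localized differences between cancerous and normal tissue.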