Calendar

Jan
15
Wed
2020
AIMI & IBIIS Seminar – Wei Shao, PhD & Saeed Seyyedi, PhD @ Clark Center - S360
Jan 15 @ 12:00 pm – 1:00 pm

“A Deep Learning Framework for Efficient Registration of MRI and Histopathology Images of the Prostate”

Wei Shao, PhD
Postdoctoral Research Fellow
Department of Radiology
Stanford University

“Applications of Generative Adversarial Networks (GANs) in Medical Imaging”

Saeed Seyyedi, PhD
Paustenbach Research Fellow
Department of Radiology
Stanford University

Join via Zoom: https://stanford.zoom.us/j/593016899

Refreshments will be provided

ABSTRACT (Shao)
Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, MRI interpretation suffers from high interobserver variability and often misses clinically significant cancers. Registration of histopathology images from patients who have undergone surgical resection of the prostate onto pre-operative MR images allows direct mapping of cancer location onto MRI. This is essential for the discovery and validation of novel prostate cancer signatures on MRI. Traditional registration approaches can be computationally expensive and require a careful choice of registration hyperparameters. We present a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of preprocessing, transform estimation by deep neural networks, and postprocessing. We refined the registration neural networks, originally trained on 19,642 natural images, by adding 17,821 medical images of the prostate to the training set. The pipeline was evaluated on 99 prostate cancer patients. The addition of the prostate images to the training set significantly (p < 0.001) improved the Dice coefficient and reduced the Hausdorff distance. Our pipeline also achieved accuracy comparable to an existing state-of-the-art algorithm while reducing the computation time from 4.4 minutes to less than 2 seconds.
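For reference, the evaluation metrics named above can be computed as in the minimal sketch below. This is an illustrative implementation using NumPy and SciPy only; the mask names and random data are assumptions for demonstration, not the study's data or code.

```python
# Minimal sketch: Dice coefficient and (symmetric) Hausdorff distance between two
# binary segmentation masks, the two registration-accuracy metrics cited above.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean arrays of the same shape."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(mask_a, mask_b):
    """Symmetric Hausdorff distance between the foreground pixel coordinates."""
    pts_a, pts_b = np.argwhere(mask_a), np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Toy usage: compare a (hypothetical) registered histopathology label map to an MRI mask.
rng = np.random.default_rng(0)
mri_mask = rng.random((64, 64)) > 0.5
registered_hist_mask = rng.random((64, 64)) > 0.5
print(dice_coefficient(mri_mask, registered_hist_mask),
      hausdorff_distance(mri_mask, registered_hist_mask))
```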

ABSTRACT (Seyyedi)
Generative adversarial networks (GANs) are a class of neural networks in which two networks are trained simultaneously to perform the complementary tasks of generation and discrimination. GANs have attracted considerable attention for tackling well-known, challenging problems in computer vision, including medical image analysis tasks such as medical image de-noising, detection and classification, segmentation, and reconstruction. In this talk, we will introduce some of the recent advances of GANs in medical imaging applications and discuss recent developments of GAN models for resolving real-world imaging challenges.
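To make the adversarial setup concrete, here is a minimal PyTorch sketch of the simultaneous generator/discriminator training the abstract describes. The toy fully connected architectures, hyperparameters, and random stand-in data are illustrative assumptions only, not the models discussed in the talk.

```python
# Minimal GAN sketch: two networks with opposing objectives, updated in turn.
import torch
import torch.nn as nn

latent_dim, img_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # Discriminator update: label real samples 1 and generated samples 0.
    fake = G(torch.randn(n, latent_dim)).detach()
    loss_d = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator update: try to make the discriminator label generated samples 1.
    loss_g = bce(D(G(torch.randn(n, latent_dim))), torch.ones(n, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy usage with random tensors standing in for flattened image batches.
print(train_step(torch.rand(32, img_dim) * 2 - 1))
```

Each step first sharpens the discriminator, then updates the generator to fool it; this two-player loop is the "simultaneous training" the abstract refers to.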

Jan
29
Wed
2020
SCIT Seminar: Muna Aryal Rizal, PhD and Eduardo Somoza, MD @ Glazer Learning Center (Lucas P083)
Jan 29 @ 10:00 am – 11:00 am

Muna Aryal Rizal, PhD
Mentors: Jeremy Dahl, PhD and Raag Airan, MD, PhD

Noninvasive Focused Ultrasound Accelerates Glymphatic Transport to Bypass the Blood-Brain Barrier

ABSTRACT

Recent advances in neuroscience have revealed that the central nervous system (CNS) comprises a glial-cell-driven lymphatic system, for which the neuroscientist Maiken Nedergaard coined the term "glymphatic pathway." Furthermore, rodent and non-human-primate studies have shown that glymphatic exchange efficacy can decline in healthy aging, Alzheimer's disease models, traumatic brain injury, cerebral hemorrhage, and stroke. Studies in rodents have also shown that glymphatic function can be accelerated by easily implemented interventions such as physical exercise, changes in body posture during sleep, intake of omega-3 polyunsaturated fatty acids, and low-dose alcohol (0.5 g/kg). Here, we propose, for the first time, to accelerate glymphatic function by manipulating the whole brain ultrasonically using focused ultrasound, an emerging clinical technology that can noninvasively reach virtually the entire brain. During this SCIT seminar, I will introduce this new ultrasonic approach to accelerating glymphatic transport and will share some preliminary findings.


Eduardo Somoza, MD
Mentor: Sandy Napel, PhD

Prediction of Clinical Outcomes in Diffuse Large B-Cell Lymphoma (DLBCL) Utilizing Radiomic Features Derived from Pretreatment Positron Emission Tomography (PET) Scan

ABSTRACT

Diffuse large B-cell lymphoma (DLBCL) is the most common type of lymphoma, accounting for a third of cases worldwide. Despite advancements in treatment, five-year survival for this patient population is around sixty percent. This indicates a clinical need to predict outcomes before the initiation of standard treatment. The approach we will employ to address this need is the creation of a prognostic model from pretreatment clinical data of DLBCL patients seen at Stanford University Medical Center. In particular, there will be a focus on the derivation of radiomic features from pretreatment positron emission tomography (PET) scans, as this has not been thoroughly investigated in similar published research efforts. We will lay out the framework for our approach, with an emphasis on the aspects of our design that will allow for the translation of our efforts to multiple clinical settings. More importantly, we will discuss the importance and challenges of assembling a quality clinical database for this type of research. Ultimately, we hope our efforts will lead to the development of a prognostic model that can be used to guide treatment in DLBCL patients with refractory disease and/or a high risk of relapse after completion of standard treatment.

Apr
22
Wed
2020
SCIT Quarterly Seminar @ Zoom: https://stanford.zoom.us/j/98960758162?pwd=aHJJc3pDS3FONkZIc2FoZ0hqcXU1dz09
Apr 22 @ 10:00 am – 11:00 am
“Tumor-Immune Interactions in TNBC Brain Metastases”
Maxine Umeh Garcia, PhD

ABSTRACT: It is estimated that metastasis is responsible for 90% of cancer deaths, with 1 in every 2 advanced-stage triple-negative breast cancer patients developing brain metastases and surviving as little as 4.9 months after metastatic diagnosis. My project hypothesizes that the spatial architecture of the tumor microenvironment reflects distinct tumor-immune interactions driven by receptor-ligand pairing, and that these interactions not only impact tumor progression in the brain but also prime the immune system (early on) to be tolerant of disseminated cancer cells, permitting brain metastases. The main goal of my project is to build a model that recapitulates tumor-immune interactions in brain-metastatic triple-negative breast cancer, and to use this model to identify novel druggable targets to improve survival outcomes in patients with these devastating brain metastases.

“Classification of Malignant and Benign Peripheral Nerve Sheath Tumors With An Open Source Feature Selection Platform”
Michael Zhang, MD

ABSTRACT: Radiographic differentiation of malignant peripheral nerve sheath tumors (MPNSTs) from benign PNSTs is a diagnostic challenge. The former are associated with a five-year survival rate of 30-50%, and definitive management requires gross total surgical resection with wide negative margins in areas of sensitive neurologic function. This presentation describes a radiomics approach to establishing the diagnosis pre-operatively, thereby potentially avoiding surgical complexity and debilitating symptoms. Using an open-source feature extraction platform and machine learning, we produce a radiographic signature for MPNSTs based on routine MRI.

IBIIS/AIMI Seminar – Tiwari @ ZOOM - See Description for Zoom link
Apr 22 @ 1:00 pm – 2:00 pm

Radiomics and Radio-Genomics: Opportunities for Precision Medicine

Zoom: https://stanford.zoom.us/j/99904033216?pwd=U2tTdUp0YWtneTNUb1E4V2x0OTFMQT09 

Pallavi Tiwari, PhD
Assistant Professor of Biomedical Engineering
Associate Member, Case Comprehensive Cancer Center
Director of Brain Image Computing Laboratory
School of Medicine | Case Western Reserve University


Abstract:
In this talk, Dr. Tiwari will focus on her lab's recent efforts in developing radiomic (extracting computerized sub-visual features from radiologic imaging), radiogenomic (identifying radiologic features associated with molecular phenotypes), and radiopathomic (radiologic features associated with pathologic phenotypes) techniques to capture insights into the underlying tumor biology as observed on non-invasive routine imaging. She will focus on clinical applications of this work for predicting disease outcome, recurrence, progression, and response to therapy, specifically in the context of brain tumors. She will also discuss current efforts in developing new radiomic features for post-treatment evaluation and for predicting response to chemo-radiation treatment. Dr. Tiwari will conclude with a discussion of her lab's findings on combining AI with expert readers, in the context of the clinically challenging problem of post-treatment response assessment on routine MRI scans.

Aug
5
Wed
2020
AIMI Symposium @ Livestream: details to come
Aug 5 @ 8:30 am – 4:30 pm

Location & Timing

August 5, 2020
8:30am-4:30pm
Livestream: details to come

This event is free and open to all!
Registration and Event details

Overview
Advances of machine learning and artificial intelligence into all areas of medicine are now a reality, and they hold the potential to transform healthcare and open up a world of incredible promise for everyone. Sponsored by the Stanford Center for Artificial Intelligence in Medicine and Imaging, the 2020 AIMI Symposium is a virtual conference convening experts from Stanford and beyond to advance the field of AI in medicine and imaging. The conference will cover a survey of the latest machine learning approaches, in-depth use cases, metrics unique to healthcare, important challenges and pitfalls, and best practices for designing, building, and evaluating machine learning in healthcare applications.

Our goal is to make the best science accessible to a broad audience of academic, clinical, and industry attendees. Through the AIMI Symposium we hope to address gaps and barriers in the field and catalyze more evidence-based solutions to improve health for all.

Sep
16
Wed
2020
IBIIS & AIMI Seminar – Judy Gichoya, MD @ Zoom - See Description for Zoom Link
Sep 16 @ 12:00 pm – 1:00 pm

Judy Gichoya, MD
Assistant Professor
Emory University School of Medicine

Measuring Learning Gains in Man-Machine Assemblage When Augmenting Radiology Work with Artificial Intelligence

Abstract
The work setting of the future presents an opportunity for human-technology partnerships, where a harmonious connection between human and technology produces unprecedented productivity gains. A conundrum at this human-technology frontier remains: will humans be augmented by technology, or will technology be augmented by humans? We present our work on moving beyond treating human and machine as separate entities and instead treating them as an assemblage. As groundwork for a harmonious human-technology connection, this assemblage needs to learn to fit together synergistically. This learning is called assemblage learning, and it will be important for artificial intelligence (AI) applications in health care, where diagnostic and treatment decisions augmented by AI have a direct and significant impact on patient care and outcomes. We describe how learning can be shared between assemblages, such that collective swarms of connected assemblages can be created. Our aim is to demonstrate a symbiotic learning assemblage, so that the envisioned productivity gains from AI can be achieved without the loss of human jobs.

Specifically, we are evaluating the following research questions. Q1: How can we develop assemblages such that human-technology partnerships produce a "good fit" for visually based, cognition-oriented tasks in radiology? Q2: What level of training should pre-exist in the individual human (the radiologist) and in the independent machine learning model for human-technology partnerships to thrive? Q3: To what extent, and in which respects, does an assemblage learning approach lead to reduced errors, improved accuracy, faster turnaround times, reduced fatigue, improved self-efficacy, and resilience?

Zoom: https://stanford.zoom.us/j/93580829522?pwd=ZVAxTCtEdkEzMWxjSEQwdlp0eThlUT09

Oct
21
Wed
2020
SCIT Quarterly Seminar @ See description for ZOOM link
Oct 21 @ 10:00 am – 11:00 am

ZOOM LINK HERE

“High Resolution Breast Diffusion Weighted Imaging”
Jessica McKay, PhD

ABSTRACT: Diffusion-weighted imaging (DWI) is a quantitative MRI method that measures the apparent diffusion coefficient (ADC) of water molecules, which reflects cell density and serves as an indicator of malignancy. Unfortunately, the clinical value of DWI is severely limited by undesirable features in the images that common clinical methods produce, including large geometric distortions, ghosting and chemical shift artifacts, and insufficient spatial resolution. Thus, in order to exploit the information encoded in diffusion characteristics and fully assess the clinical value of ADC measurements, it is first imperative to advance DWI techniques.

In this talk, I will largely focus on the background of breast DWI, providing the clinical motivation for this work and explaining the current standard in breast DWI and alternatives proposed in the literature. I will also present my PhD dissertation work, in which a novel strategy for high resolution breast DWI was developed. The purpose of this work is to improve DWI methods for breast imaging at 3 Tesla to robustly provide diffusion-weighted images and ADC maps with anatomical quality and resolution. This project has two major parts: Nyquist ghost correction and the use of simultaneous multislice imaging (SMS) to achieve high resolution. Exploratory work was completed to characterize the Nyquist ghost in breast DWI, showing that, although the ghost is mostly linear, the three-line navigator is unreliable, especially in the presence of fat. A novel referenceless ghost correction, Ghost/Object minimization, was developed that reduced the ghost in both standard SE-EPI and advanced SMS acquisitions. An advanced SMS method with axial reformatting (AR) is presented for high resolution breast DWI. In a reader study, three breast radiologists preferred AR-SMS over standard SE-EPI and readout-segmented EPI.
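For readers unfamiliar with the ADC measurement mentioned at the start of this abstract, the short sketch below illustrates the standard mono-exponential model S(b) = S0 · exp(-b · ADC), from which a voxel-wise ADC map can be computed given two acquisitions. The b-value of 800 s/mm² and the toy signal values are illustrative assumptions, not the speaker's protocol.

```python
# Minimal sketch: voxel-wise ADC from a b=0 image (s0) and a diffusion-weighted
# image (sb) acquired at b-value b, using ADC = ln(s0 / sb) / b.
import numpy as np

def adc_map(s0, sb, b_value, eps=1e-6):
    """Apparent diffusion coefficient in mm^2/s, computed per voxel."""
    s0 = np.clip(np.asarray(s0, dtype=float), eps, None)
    sb = np.clip(np.asarray(sb, dtype=float), eps, None)
    return np.log(s0 / sb) / b_value

# Toy example: stronger signal retention at b = 800 implies a lower ADC (denser tissue).
s0 = np.full((4, 4), 1000.0)
sb = np.full((4, 4), 300.0)
print(adc_map(s0, sb, b_value=800.0).mean())  # about 0.0015 mm^2/s
```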


“Machine-learning Approach to Differentiation of Benign and Malignant Peripheral Nerve Sheath Tumors: A Multicenter Study”

Michael Zhang, MD

ABSTRACT: Clinicoradiologic differentiation between benign and malignant peripheral nerve sheath tumors (PNSTs) is a diagnostic challenge with important management implications. We sought to develop a radiomics classifier based on 900 features extracted from gadolinium-enhanced, T1-weighted MRI, using the Quantitative Imaging Feature Pipeline and the PyRadiomics package. Additional patient-specific clinical variables were recorded. A radiomic signature was derived using the least absolute shrinkage and selection operator (LASSO), followed by gradient boosting machine learning. Training and test sets were selected randomly in a 70:30 ratio. We further evaluated the performance of the radiomics-based classifier models against human readers of varying medical-training backgrounds. Following image pre-processing, 95 malignant and 171 benign PNSTs were available. The final classifier included 21 features and achieved a sensitivity of 0.676, a specificity of 0.882, and an area under the curve (AUC) of 0.845. Collectively, human readers achieved a sensitivity of 0.684, a specificity of 0.742, and an AUC of 0.704. We concluded that radiomics using routine gadolinium-enhanced, T1-weighted MRI sequences and clinical features can aid in the evaluation of PNSTs, particularly by increasing specificity for diagnosing malignancy. Further improvement may be achieved with the incorporation of additional imaging sequences.
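As an illustration of the two-stage modeling strategy described above (LASSO feature selection followed by gradient boosting, evaluated on a 70:30 split with AUC), here is a minimal scikit-learn sketch. The synthetic feature matrix merely stands in for the 900 radiomic features; it is not the study's data or pipeline.

```python
# Minimal sketch: LASSO-based feature selection followed by a gradient boosting classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in: 266 tumors x 900 features, roughly mimicking the benign/malignant imbalance.
X, y = make_classification(n_samples=266, n_features=900, n_informative=21,
                           weights=[0.64], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Stage 1: LASSO keeps the features with nonzero coefficients (the "radiomic signature").
lasso = LassoCV(cv=5, random_state=0).fit(X_train_s, y_train)
selected = np.flatnonzero(lasso.coef_)  # assumes at least one feature survives shrinkage

# Stage 2: gradient boosting trained on the selected features only.
gbm = GradientBoostingClassifier(random_state=0).fit(X_train_s[:, selected], y_train)
auc = roc_auc_score(y_test, gbm.predict_proba(X_test_s[:, selected])[:, 1])
print(f"{len(selected)} selected features, test AUC = {auc:.3f}")
```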

Nov
18
Wed
2020
IBIIS & AIMI Seminar: Deep Tomographic Imaging @ Zoom: https://stanford.zoom.us/j/96731559276?pwd=WG5zcEFwSGlPcDRsOUFkVlRhcEs2Zz09
Nov 18 @ 12:00 pm – 1:00 pm

Ge Wang, PhD
Clark & Crossan Endowed Chair Professor
Director of the Biomedical Imaging Center
Rensselaer Polytechnic Institute
Troy, New York

Abstract:
AI-based tomography is an important application and a new frontier of machine learning. AI, and deep learning in particular, has been widely used in computer vision and image analysis, which deal with existing images: improving them and extracting features from them. Since 2016, deep learning techniques have been actively researched for tomographic imaging in medicine. Tomographic reconstruction produces images of multi-dimensional structures from externally measured "encoded" data in the form of various transforms (integrals, harmonics, and so on). In this presentation, we provide a general background, highlight representative results, and discuss key issues that need to be addressed in this emerging field.
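For context on the "encoded" data the abstract refers to, the short sketch below shows the classical pipeline that deep tomographic imaging seeks to augment or replace: a Radon transform produces a sinogram, and filtered backprojection inverts it. This uses scikit-image; the Shepp-Logan phantom and acquisition settings are illustrative assumptions, not material from the talk.

```python
# Minimal sketch: simulate projections (Radon transform) and reconstruct the image
# with filtered backprojection, the classical baseline for CT reconstruction.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)            # ground-truth object
angles = np.linspace(0.0, 180.0, 90, endpoint=False)   # projection angles in degrees
sinogram = radon(image, theta=angles)                   # "encoded" measurements
recon = iradon(sinogram, theta=angles, filter_name="ramp")  # filtered backprojection
print("reconstruction RMSE:", np.sqrt(np.mean((recon - image) ** 2)))
```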

About:
The AI-based X-ray Imaging System (AXIS) lab is led by Dr. Ge Wang and is affiliated with the Department of Biomedical Engineering at Rensselaer Polytechnic Institute and with the Biomedical Imaging Center in the Center for Biotechnology and Interdisciplinary Studies. The AXIS lab focuses on the innovation and translation of x-ray computed tomography, optical molecular tomography, multi-scale and multi-modality imaging, and AI/machine learning for image reconstruction and analysis, and has been continuously well funded by federal agencies and leading companies. The AXIS group collaborates with Stanford, Harvard, Cornell, MSK, UTSW, Yale, GE, Hologic, and others to develop theories, methods, software, systems, applications, and workflows.

Jul
16
Fri
2021
Radiology-Wide Research Conference @ Zoom – Details can be found here: https://radresearch.stanford.edu
Jul 16 @ 12:00 pm – 1:00 pm

Radiology Department-Wide Research Meeting

• Research Announcements
• Mirabela Rusu, PhD – Learning MRI Signatures of Aggressive Prostate Cancer: Bridging the Gap between Digital Pathologists and Digital Radiologists
• Akshay Chaudhari, PhD – Data-Efficient Machine Learning for Medical Imaging

Location: Zoom – Details can be found here: https://radresearch.stanford.edu
Meetings will be the 3rd Friday of each month.

 

Hosted by: Kawin Setsompop, PhD
Sponsored by: the Department of Radiology

Aug
3
Tue
2021
2021 AIMI Symposium + BOLD-AIR Summit @ Virtual Livestream
Aug 3 @ 8:00 am – Aug 4 @ 3:00 pm

Stanford AIMI Director Curt Langlotz and Co-Directors Matt Lungren and Nigam Shah invite you to join us on August 3 for the 2021 Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI) Symposium. The virtual symposium will focus on the latest research on the role of AI in diagnostic excellence across medicine, current areas of impact, fairness and societal impact, and translation and clinical implementation. The program includes talks, interactive panel discussions, and breakout sessions. Registration is free and open to all.

 

Also, the 2nd Annual BiOethics, the Law, and Data-sharing: AI in Radiology (BOLD-AIR) Summit will be held on August 4, in conjunction with the AIMI Symposium. The summit will convene a broad range of speakers spanning bioethics, law, regulation, industry, patient safety, and data privacy to address the latest ethical, regulatory, and legal challenges regarding AI in radiology.

 

REGISTER HERE