Calendar

IBIIS/AIMI Seminar – Tiwari
Wed, Apr 22, 2020 @ 1:00 pm – 2:00 pm | Zoom – See description for Zoom link

Radiomics and Radio-Genomics: Opportunities for Precision Medicine

Zoom: https://stanford.zoom.us/j/99904033216?pwd=U2tTdUp0YWtneTNUb1E4V2x0OTFMQT09 

Pallavi Tiwari, PhD
Assistant Professor of Biomedical Engineering
Associate Member, Case Comprehensive Cancer Center
Director of Brain Image Computing Laboratory
School of Medicine | Case Western Reserve University


Abstract:
In this talk, Dr. Tiwari will focus on her lab’s recent efforts in developing radiomic (extracting computerized sub-visual features from radiologic imaging), radiogenomic (identifying radiologic features associated with molecular phenotypes), and radiopathomic (radiologic features associated with pathologic phenotypes) techniques to capture insights into the underlying tumor biology as observed on non-invasive routine imaging. She will focus on clinical applications of this work for predicting disease outcome, recurrence, progression, and response to therapy, specifically in the context of brain tumors. She will also discuss current efforts in developing new radiomic features for post-treatment evaluation and for predicting response to chemo-radiation treatment. Dr. Tiwari will conclude with a discussion of her lab’s findings on AI + experts in the context of the clinically challenging problem of post-treatment response assessment on routine MRI scans.
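
For readers new to the radiomics workflow described above, the short sketch below illustrates what extracting computerized sub-visual features from a radiologic image and a tumor mask can look like in practice. It uses the open-source PyRadiomics package purely as an example; the file names and settings are hypothetical, and this is not Dr. Tiwari’s lab’s pipeline.

# A minimal radiomic feature-extraction sketch (illustrative only; hypothetical inputs).
# Assumes: pip install pyradiomics SimpleITK
from radiomics import featureextractor

image_path = "patient001_t1_post.nii.gz"        # hypothetical contrast-enhanced MRI volume
mask_path = "patient001_tumor_mask.nii.gz"      # hypothetical binary tumor segmentation

# Default settings compute first-order statistics, shape descriptors, and texture
# features (GLCM, GLRLM, GLSZM, ...) -- the kinds of sub-visual features radiomics relies on.
extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute(image_path, mask_path)

for name, value in features.items():
    if not name.startswith("diagnostics_"):     # skip provenance/metadata entries
        print(f"{name}: {value}")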

AIMI Symposium
Wed, Aug 5, 2020 @ 8:30 am – 4:30 pm | Livestream: details to come

Location & Timing

August 5, 2020
8:30 am – 4:30 pm
Livestream: details to come

This event is free and open to all!
Registration and Event details

Overview
Advances in machine learning and artificial intelligence are now a reality across all areas of medicine, and they hold the potential to transform healthcare and open up a world of incredible promise for everyone. Sponsored by the Stanford Center for Artificial Intelligence in Medicine and Imaging, the 2020 AIMI Symposium is a virtual conference convening experts from Stanford and beyond to advance the field of AI in medicine and imaging. The conference will cover everything from a survey of the latest machine learning approaches and in-depth use cases to metrics unique to healthcare, important challenges and pitfalls, and best practices for designing, building, and evaluating machine learning applications in healthcare.

Our goal is to make the best science accessible to a broad audience of academic, clinical, and industry attendees. Through the AIMI Symposium we hope to address gaps and barriers in the field and catalyze more evidence-based solutions to improve health for all.

IBIIS & AIMI Seminar – Judy Gichoya, MD
Wed, Sep 16, 2020 @ 12:00 pm – 1:00 pm | Zoom – See description for Zoom link

Judy Gichoya, MD
Assistant Professor
Emory University School of Medicine

Measuring Learning Gains in Man-Machine Assemblage When Augmenting Radiology Work with Artificial Intelligence

Abstract
The work setting of the future presents an opportunity for human-technology partnerships, where a harmonious connection between human and technology produces unprecedented productivity gains. A conundrum at this human-technology frontier remains: will humans be augmented by technology, or will technology be augmented by humans? We present our work on overcoming this conundrum by treating human and machine not as separate entities but as an assemblage. As groundwork for the harmonious human-technology connection, this assemblage needs to learn to fit together synergistically. This learning is called assemblage learning, and it will be important for Artificial Intelligence (AI) applications in health care, where diagnostic and treatment decisions augmented by AI will have a direct and significant impact on patient care and outcomes. We describe how learning can be shared between assemblages, such that collective swarms of connected assemblages can be created. Our aim is to demonstrate a symbiotic learning assemblage, such that the envisioned productivity gains from AI can be achieved without loss of human jobs.

Specifically, we are evaluating the following research questions:
Q1: How can assemblages be developed such that human-technology partnerships produce a “good fit” for visually based, cognition-oriented tasks in radiology?
Q2: What level of training should pre-exist in the individual human (radiologist) and in the independent machine learning model for human-technology partnerships to thrive?
Q3: In which aspects, and to what extent, does an assemblage learning approach lead to reduced errors, improved accuracy, faster turn-around times, reduced fatigue, improved self-efficacy, and resilience?

Zoom: https://stanford.zoom.us/j/93580829522?pwd=ZVAxTCtEdkEzMWxjSEQwdlp0eThlUT09

SCIT Quarterly Seminar
Wed, Oct 21, 2020 @ 10:00 am – 11:00 am | See description for Zoom link

ZOOM LINK HERE

“High Resolution Breast Diffusion Weighted Imaging”
Jessica McKay, PhD

ABSTRACT: Diffusion-weighted imaging (DWI) is a quantitative MRI method that measures the apparent diffusion coefficient (ADC) of water molecules, which reflects cell density and serves as an indication of malignancy. Unfortunately, the clinical value of DWI is severely limited by the undesirable image features that common clinical methods produce, including large geometric distortions, ghosting and chemical shift artifacts, and insufficient spatial resolution. Thus, in order to exploit the information encoded in diffusion characteristics and fully assess the clinical value of ADC measurements, it is first imperative to advance the underlying DWI techniques.

In this talk, I will largely focus on the background of breast DWI, providing the clinical motivation for this work and explaining the current standard in breast DWI as well as alternatives proposed throughout the literature. I will also present my PhD dissertation work, in which a novel strategy for high resolution breast DWI was developed. The purpose of this work is to improve DWI methods for breast imaging at 3 Tesla to robustly provide diffusion-weighted images and ADC maps with anatomical quality and resolution. This project has two major parts: Nyquist ghost correction and the use of simultaneous multislice (SMS) imaging to achieve high resolution. Exploratory work was completed to characterize the Nyquist ghost in breast DWI, showing that, although the ghost is mostly linear, the three-line navigator is unreliable, especially in the presence of fat. A novel referenceless ghost correction, Ghost/Object minimization, was developed that reduced the ghost in both standard SE-EPI and SMS acquisitions. An advanced SMS method with axial reformatting (AR) is presented for high resolution breast DWI. In a reader study, three breast radiologists preferred AR-SMS over standard SE-EPI and readout-segmented EPI.
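
As a point of reference for the ADC quantity discussed above, the sketch below shows the standard mono-exponential model used to compute an ADC map from two diffusion-weighted acquisitions, S(b) = S(0)·exp(-b·ADC). It is a generic NumPy illustration with hypothetical array names and b-values, not the reconstruction code from this dissertation work.

# ADC-map sketch from two b-value images via the mono-exponential model:
#   S(b) = S(0) * exp(-b * ADC)  =>  ADC = ln(S(b_low) / S(b_high)) / (b_high - b_low)
# Illustrative only; array names and b-values are hypothetical.
import numpy as np

def adc_map(s_low, s_high, b_low=0.0, b_high=800.0, eps=1e-6):
    """ADC in mm^2/s when the b-values are given in s/mm^2."""
    s_low = np.clip(np.asarray(s_low, dtype=float), eps, None)
    s_high = np.clip(np.asarray(s_high, dtype=float), eps, None)
    return np.log(s_low / s_high) / (b_high - b_low)

# Synthetic check: the higher b-value image is attenuated according to the true ADC.
rng = np.random.default_rng(0)
s0 = 1000.0 + 50.0 * rng.standard_normal((128, 128))
true_adc = 1.5e-3                               # plausible tissue-scale value, mm^2/s
s800 = s0 * np.exp(-800.0 * true_adc)
print(adc_map(s0, s800).mean())                 # ~1.5e-3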


“Machine-learning Approach to Differentiation of Benign and Malignant Peripheral Nerve Sheath Tumors: A Multicenter Study”

Michael Zhang, MD

ABSTRACT: Clinicoradiologic differentiation between benign and malignant peripheral nerve sheath tumors (PNSTs) is a diagnostic challenge with important management implications. We sought to develop a radiomics classifier based on 900 features extracted from gadolinium-enhanced, T1-weighted MRI, using the Quantitative Imaging Feature Pipeline and the PyRadiomics package. Additional patient-specific clinical variables were recorded. A radiomic signature was derived using the least absolute shrinkage and selection operator (LASSO), followed by gradient-boosting machine learning. Training and test sets were selected randomly in a 70:30 ratio. We further evaluated the performance of the radiomics-based classifier models against human readers of varying medical-training backgrounds. Following image pre-processing, 95 malignant and 171 benign PNSTs were available. The final classifier included 21 features and achieved a sensitivity of 0.676, specificity of 0.882, and area under the curve (AUC) of 0.845. Collectively, human readers achieved a sensitivity of 0.684, specificity of 0.742, and AUC of 0.704. We concluded that radiomics using routine gadolinium-enhanced, T1-weighted MRI sequences and clinical features can aid in the evaluation of PNSTs, particularly by increasing specificity for diagnosing malignancy. Further improvement may be achieved by incorporating additional imaging sequences.
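
The abstract above outlines a common radiomics pattern: sparse feature selection with LASSO followed by a gradient-boosting classifier, evaluated on a random 70:30 split. The sketch below illustrates that general pattern with scikit-learn on synthetic data; it is not the study’s actual code, and the feature matrix, labels, and hyperparameters are placeholders.

# Illustrative LASSO-style selection -> gradient boosting classifier (synthetic data only).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder for ~900 radiomic features per lesion and benign/malignant labels
# (266 lesions, mirroring the 95 malignant + 171 benign cohort size in the abstract).
X, y = make_classification(n_samples=266, n_features=900, n_informative=25, random_state=0)

# Random 70:30 train/test split, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

model = make_pipeline(
    StandardScaler(),
    # L1-penalized ("LASSO-style") selection keeps a sparse radiomic signature.
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
    GradientBoostingClassifier(random_state=0),
)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.3f}")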

IBIIS & AIMI Seminar: Deep Tomographic Imaging
Wed, Nov 18, 2020 @ 12:00 pm – 1:00 pm | Zoom: https://stanford.zoom.us/j/96731559276?pwd=WG5zcEFwSGlPcDRsOUFkVlRhcEs2Zz09

Ge Wang, PhD
Clark & Crossan Endowed Chair Professor
Director of the Biomedical Imaging Center
Rensselaer Polytechnic Institute
Troy, New York

Abstract:
AI-based tomography is an important application and a new frontier of machine learning. AI, especially deep learning, has been widely used in computer vision and image analysis, which deal with existing images: improving them and extracting features from them. Since 2016, deep learning techniques have been actively researched for tomography in the medical context. Tomographic reconstruction produces images of multi-dimensional structures from externally measured “encoded” data in the form of various transforms (integrals, harmonics, and so on). In this presentation, we provide a general background, highlight representative results, and discuss key issues that need to be addressed in this emerging field.
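
For context on the “encoded” measurement data mentioned above, the sketch below forms a sinogram via the Radon transform and reconstructs it with classical filtered back-projection using scikit-image. Deep tomographic imaging methods of the kind described in this talk typically replace or augment this analytic inversion with learned networks; the code is shown only as the conventional baseline for comparison.

# Classical tomography baseline: Radon transform (forward projection) and
# filtered back-projection (analytic inversion), using scikit-image.
# Deep tomographic methods learn to replace or augment this inversion step.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

phantom = resize(shepp_logan_phantom(), (128, 128))       # simple test object
angles = np.linspace(0.0, 180.0, 120, endpoint=False)     # projection angles (degrees)

sinogram = radon(phantom, theta=angles)                    # the "encoded" measurement data
reconstruction = iradon(sinogram, theta=angles)            # filtered back-projection (ramp filter)

rmse = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"FBP reconstruction RMSE: {rmse:.4f}")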

About:
The AI-based X-ray Imaging System (AXIS) lab, led by Dr. Ge Wang, is affiliated with the Department of Biomedical Engineering at Rensselaer Polytechnic Institute and with the Biomedical Imaging Center in the Center for Biotechnology and Interdisciplinary Studies. The AXIS lab focuses on innovation and translation of x-ray computed tomography, optical molecular tomography, multi-scale and multi-modality imaging, and AI/machine learning for image reconstruction and analysis, and has been continuously well funded by federal agencies and leading companies. The AXIS group collaborates with Stanford, Harvard, Cornell, MSK, UTSW, Yale, GE, Hologic, and others to develop theories, methods, software, systems, applications, and workflows.

Racial Equity Challenge: Race in society
Fri, Apr 30, 2021 @ 12:00 pm – 1:00 pm | Zoom

Targeted violence continues against Black Americans, Asian Americans, and all people of color. The Department of Radiology Diversity Committee is running a racial equity challenge to raise awareness of systemic racism, implicit bias, and related issues. Participants will be provided a list of resources on these topics (articles, podcasts, videos, etc.) from which they can choose, with the “challenge” of engaging with one to three media sources prior to our session (some videos are as short as a few minutes). Participants will meet in small-group breakout sessions to discuss what they’ve learned and share ideas.

Please reach out to Marta Flory (flory@stanford.edu) with questions. For details about the session, including recommended resources and the Zoom link, please reach out to Meke Faaoso (mfaaoso@stanford.edu).

Radiology-Wide Research Conference
Fri, Jul 16, 2021 @ 12:00 pm – 1:00 pm | Zoom – Details can be found here: https://radresearch.stanford.edu

Radiology Department-Wide Research Meeting

• Research Announcements
• Mirabela Rusu, PhD – Learning MRI Signatures of Aggressive Prostate Cancer: Bridging the Gap between Digital Pathologists and Digital Radiologists
• Akshay Chaudhari, PhD – Data-Efficient Machine Learning for Medical Imaging

Location: Zoom – Details can be found here: https://radresearch.stanford.edu
Meetings will be the 3rd Friday of each month.

Hosted by: Kawin Setsompop, PhD
Sponsored by: the Department of Radiology

2021 AIMI Symposium + BOLD-AIR Summit
Tue, Aug 3, 2021 @ 8:00 am – Wed, Aug 4, 2021 @ 3:00 pm | Virtual Livestream

Stanford AIMI Director Curt Langlotz and Co-Directors Matt Lungren and Nigam Shah invite you to join us on August 3 for the 2021 Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI) Symposium. The virtual symposium will focus on the latest and best research on the role of AI in diagnostic excellence across medicine, current areas of impact, fairness and societal impact, and translation and clinical implementation. The program includes talks, interactive panel discussions, and breakout sessions. Registration is free and open to all.

Also, the 2nd Annual BiOethics, the Law, and Data-sharing: AI in Radiology (BOLD-AIR) Summit will be held on August 4, in conjunction with the AIMI Symposium. The summit will convene a broad range of speakers across bioethics, law, regulation, industry, and patient safety and data privacy to address the latest ethical, regulatory, and legal challenges of AI in radiology.

REGISTER HERE

IBIIS & AIMI Seminar: Seeing the Future from Images: ML-Based Models for Cancer Risk Assessment
Wed, Sep 22, 2021 @ 11:00 am – 12:00 pm | Zoom: https://stanford.zoom.us/j/99474772502?pwd=NEQrQUQ0MzdtRjFiYU42TCs2bFZsUT09

Regina Barzilay, PhD
School of Engineering Distinguished Professor for AI and Health
Electrical Engineering and Computer Science Department
AI Faculty Lead at Jameel Clinic for Machine Learning in Health
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology

Abstract:
In this talk, I will present methods for assessing future cancer risk from medical images. The discussion will explore alternative ways to formulate the risk assessment task and focus on algorithmic issues in developing such models. I will also discuss our experience in translating these algorithms into clinical practice in hospitals around the world.

2021 IBIIS & AIMI Virtual Retreat
Mon, Sep 27, 2021 @ 1:00 pm – 4:30 pm | https://ibiis.stanford.edu/events/retreat/2021Hybrid.html

Keynote:

Self-Supervision for Learning from the Bottom Up

Why do self-supervised learning? A common answer is: “because data labeling is expensive.” In this talk, I will argue that there are other, perhaps more fundamental reasons for working on self-supervision. First, it should allow us to get away from the tyranny of top-down semantic categorization and force meaningful associations to emerge naturally from the raw sensor data in a bottom-up fashion. Second, it should allow us to ditch fixed datasets and enable continuous, online learning, which is a much more natural setting for real-world agents. Third, and most intriguingly, there is hope that it might be possible to force a self-supervised task curriculum to emerge from first principles, even in the absence of a pre-defined downstream task or goal, similar to evolution. In this talk, I will touch upon these themes to argue that, far from running its course, research in self-supervised learning is only just beginning.