J
oin us for the 11th biennial International Conference on Functional
Imaging and Modeling of the Heart (FIMH). FIMH-2021 will celebrate 20
years of bringing together friends\, colleagues\, and collaborators to sha
re and discuss the latest in cardiac and cardiovascular imaging\, electrop
hysiology\, computational modeling\, and translational applications. The e
vent will take place June 21-25\, 2021 virtually\, via Livestream\, Zoom m
eeting workshops\, and Spatial Chat networking.
\n
\n
Sponsored by: Functional Imaging
and Modeling of the Heart Conference
DTSTART;VALUE=DATE:20210621
DTEND;VALUE=DATE:20210626
LOCATION:Virtual Event
SEQUENCE:0
SUMMARY:International Conference on Functional Imaging and Modeling of the
Heart
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/internat
ional-conference-on-functional-imaging-and-modeling-of-the-heart/
X-COST-TYPE:external
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2019/10/FIMH_sidebar_logo-150x150.jpg\;150\;15
0\;1\,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-
content/uploads/2019/10/FIMH_sidebar_logo.jpg\;300\;300\;
X-TICKETS-URL:https://www.eventbrite.com/e/fimh-2021-registration-142940529
973?aff=RadiologyExternalCalendar
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2677@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:Canary Center
CONTACT:Ashley Williams\; ashleylw@stanford.edu\; https://www.earlydetectio
nresearch.com/
DESCRIPTION:
Cancer Resear
ch UK\, OHSU Knight Cancer Institute and the Canary Center at Stanford\, p
resent the Early Detection of Cancer Conference series. The annual Confere
nce brings together experts in early detection from multiple disciplines t
o share groundbreaking research and progress in the field.
\n
The Co
nference is part of a long-term commitment to invest in early detection re
search\, to understand the biology behind early stage cancers\, find new d
etection and screening methods\, and enhance uptake and accuracy of screen
ing.
\n
The 2021 conference will take place October 6-8 virtu
ally. For more information visit the website: http://earlydetectionresearch.com/
Targeted violence co
ntinues against Black Americans\, Asian Americans\, and all people of colo
r. The department of radiology diversity committee is running a racial equ
ity challenge to raise awareness of systemic racism\, implicit bias and re
lated issues. Participants will be provided a list of resources on these t
opics such as articles\, podcasts\, videos\, etc.\, from which they can ch
oose\, with the “challenge” of engaging with one to three media sources pr
ior to our session (some videos are as short as a few minutes). Participan
ts will meet in small-group breakout sessions to discuss what they’ve lear
ned and share ideas.
\n
Please reach out to Marta Flory\, flory@stanford.edu with questions. For detail
s about the session\, including recommended resources and the Zoom link\,
please reach out to Meke Faaoso at m
faaoso@stanford.edu.
DTSTART;TZID=America/Los_Angeles:20210430T120000
DTEND;TZID=America/Los_Angeles:20210430T130000
LOCATION:Zoom
SEQUENCE:0
SUMMARY:Racial Equity Challenge: Race in society
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/racial-e
quity-challenge-race-in-society/
X-COST-TYPE:external
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2021/04/shield-150x150.png\;150\;150\;1\,mediu
m\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/upl
oads/2021/04/shield.png\;225\;225\;
X-TICKETS-URL:https://docs.google.com/spreadsheets/d/1ehKqHm32peHcm7NQJ427O
aKIa9JpfHVunjBk66etZGc/edit?usp=sharing
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-1757@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:Canary Center\,Early Cancer Detection Seminar Ser
ies
CONTACT:Ashley Williams\; ashleylw@stanford.edu\; https://canarycenter.stan
ford.edu/seminars.html
DESCRIPTION:
CEDSS: “Building a Scalable Clinical Genomics Program: How tumor\,
normal\, and plasma DNA sequencing are informing cancer care\, cancer ris
k\, and cancer detection”
Michael F. Berger\, PhD \nElizabeth and Felix Rohatyn Chair & Associate Director of the Ma
rie-Josée and Henry R. Kravis Center for Molecular Oncology \nMemoria
l Sloan Kettering Cancer Center
11:00am –
12:00pm Seminar & Discussion \nRSVP Here
\n
ABSTRACT \nTumor molecular profiling is
a fundamental component of precision oncology\, enabling the identificatio
n of oncogenomic mutations that can be targeted therapeutically. To accele
rate enrollment to clinical trials of molecularly targeted agents and guid
e treatment selection\, we have established a center-wide\, prospective cl
inical sequencing program at Memorial Sloan Kettering Cancer Center using
a custom\, paired tumor-blood normal sequencing assay (MSK-IMPACT)\, which
we have used to profile more than 50\,000 patients with solid tumors. Yet
beyond just the characterization of tumor-specific alterations\, the incl
usion of blood DNA has readily enabled the identification of germline risk
alleles and somatic mutations associated with clonal hematopoiesis. To co
mplement this approach\, we have also implemented a ‘liquid biopsy’ cfDNA
panel (MSK-ACCESS) for cancer detection\, surveillance\, and treatment sel
ection and monitoring. In my talk\, I will describe the prevalence of soma
tic and germline genomic alterations in a real-world population\, the clin
ical benefits of cfDNA assessment\, and how clonal hematopoiesis can infor
m cancer risk and confound liquid biopsy approaches to cancer detection.
\n
\n
ABOUT \nMichael Berger\, PhD\, hold
s the Elizabeth and Felix Rohatyn Chair and is Associate Director of the M
arie-Josée and Henry R. Kravis Center for Molecular Oncology at Memorial S
loan Kettering Cancer Center\, a multidisciplinary initiative to promote p
recision oncology through genomic analysis to guide the diagnosis and trea
tment of cancer patients. He is also an Associate Attending Geneticist in
the Department of Pathology with expertise in cancer genomics\, computatio
nal biology\, and high-throughput DNA sequencing technology. His laborator
y is developing experimental and computational methods to characterize the
genetic makeup of individual cancers and identify genomic biomarkers of d
rug response and resistance. As Scientific Director of Clinical NGS in the
Molecular Diagnostics Service\, he oversees the development and bioinform
atics associated with clinical sequencing assays\, and he helped lead the
development and implementation of MSK-IMPACT\, a comprehensive FDA-authori
zed tumor sequencing panel that has been used to profile more than 60\,000 tum
ors from advanced cancer patients at MSK. The resulting data have enabled
the characterization of somatic and germline biomarkers across many cancer
types and the identification of mutations associated with clonal hematopo
iesis. Dr. Berger also led the development of a clinically validated plasm
a cell-free DNA assay\, MSK-ACCESS\, which his laboratory is using to expl
ore tumor evolution\, acquired drug resistance\, and occult metastatic dis
ease. He received his Bachelor’s Degree in Physics from Princeton Universi
ty and his Ph.D. in Biophysics from Harvard University.
\n
\n
Hosted by: Utkan Demirci\, Ph.D. \nSponsored by: The Canary Center & the Department of Radiolo
gy \nStanford University – School of Medicine
\n
Tickets: https://stanf
ord.zoom.us/webinar/register/5516153318622/WN_MT7TTEciRoWmLVP9GlsJRA.
DTSTART;TZID=America/Los_Angeles:20210511T110000
DTEND;TZID=America/Los_Angeles:20210511T120000
LOCATION:Zoom - See Description for Zoom Link
SEQUENCE:0
SUMMARY:Cancer Early Detection Seminar Series – Michael Berger\, Ph.D.
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/cancer-e
arly-detection-seminar-series-michael-f-berger-ph-d/
X-COST-TYPE:external
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2019/10/berger_160204_07-2_3x2-150x150.jpg\;15
0\;150\;1\,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalenda
r/wp-content/uploads/2019/10/berger_160204_07-2_3x2-300x200.jpg\;300\;200\
;1\,large\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-con
tent/uploads/2019/10/berger_160204_07-2_3x2.jpg\;600\;400\;
X-TICKETS-URL:https://stanford.zoom.us/webinar/register/5516153318622/WN_MT
7TTEciRoWmLVP9GlsJRA
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2603@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:IMMERS Series\,SMMR
CONTACT:Steffi Perkins\; slp979@stanford.edu\; https://med.stanford.edu/imm
ers/smmr.html
DESCRIPTION:
Join us
for a panel on Behavioral XR on Thursday\, June 3rd from 9:00 –
10:30 am PDT. The event will start with a one-hour panel disc
ussion featuring Dr. Elizabeth
McMahon\, a psychologist with a private practice in California\; Sarah Hill of Healium\, a company developing XR apps for
mental fitness based in Missouri\; Christian Angern of Sympat
ient\, a company developing VR for anxiety therapy based in Germany\;
and Marguerite
Manteau-Rao of Penumbra\, a
medical device company based in California. This panel will be moderated
by Dr. Walt
er Greenleaf of Stanford’s Virtua
l Human Interaction Lab (VHIL) and Dr. Christoph Leuze of the Stanford Medical Mixed Reality (SMMR) pro
gram. Immediately following the panel discussion\, you are also invited t
o a 30-minute interactive session with the panelists where questions and i
deas can be explored in real time.
\n
\n
Reg
ister here to save your place now! After registering\, you will r
eceive a confirmation email containing information about joining the meeti
ng.
\n
\n
Please visit thi
s page to subscribe to our events mailing list.
\n
\n
Sponsored by Stanford Medical Mixed Reality (SMMR)
\n
Tickets: https://stanford.zoom.us/meeting/register/tJEvf-ioqTwvHNC2DABwGFESBe71rC6G6qV-.
• Research Announcements \n• Mirabela Rusu\,
PhD – Learning MRI Signatures of Aggressive Prostate Cancer: Bridging the
Gap between Digital Pathologists and Digital Radiologists \n• Akshay
Chaudhari\, PhD – Data-Efficient Machine Learning for Medical Imaging
Hosted by: Kawin Setsompop\, Ph
D \nSponsored by: the Department of Radiology
\n
\n
DTSTART;TZID=America/Los_Angeles:20210716T120000
DTEND;TZID=America/Los_Angeles:20210716T130000
LOCATION:Zoom – Details can be found here: https://radresearch.stanford.edu
SEQUENCE:0
SUMMARY:Radiology-Wide Research Conference
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/radiolog
y-wide-research-conference/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2021/07/RWRC-July-150x150.jpeg\;150\;150\;1\,m
edium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content
/uploads/2021/07/RWRC-July-300x195.jpeg\;300\;195\;1\,large\;http://web.st
anford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2021/07/RWR
C-July.jpeg\;443\;288\;
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2809@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:AIMI\,Annual Conferences
CONTACT:AIMI Center\; aimicenter@stanford.edu\; https://aimi.stanford.edu/n
ews-events/aimi-symposium/overview
DESCRIPTION:
Stanford AIMI Director Curt Langlotz and Co-Directors
Matt Lungren and Nigam Shah invite you to join us on August 3 for the 2021 Stanford C
enter for Artificial Intelligence in Medicine and Imaging (AIMI) Symposium. The virtual symposium will focus on
the latest\, best research on the role of AI in diagnostic excellence acro
ss medicine\, current areas of impact\, fairness and societal impact\, and
translation and clinical implementation. The program includes talks\, int
eractive panel discussions\, and breakout sessions. Registration is free a
nd open to all.
\n
\n
Also\, the 2nd Annual BiOethics\, the Law\, and Data-sharing: AI in Radiology (BOLD-AI
R) Summit will be held on August 4\,
in conjunction with the AIMI Symposium. The summit will convene a broad r
ange of speakers in bioethics\, law\, regulation\, industry groups\, and p
atient safety and data privacy\, to address the latest ethical\, regulator
y\, and legal challenges regarding AI in radiology.
Regina Barzilay\, PhD \nScho
ol of Engineering Distinguished Professor for AI and Health \nElectri
cal Engineering and Computer Science Department \nAI Faculty Lead at
Jameel Clinic for Machine Learning in Health \nComputer Science and A
rtificial Intelligence Lab \nMassachusetts Institute of Technology
\n
Abstract: \nIn this talk\, I will present meth
ods for assessing future cancer risk from medical images. The discussion will explor
e alternative ways to formulate the risk assessment task and focus on algo
rithmic issues in developing such models. I will also discuss our experien
ce in translating these algorithms into clinical practice in hospitals aro
und the world.
DTSTART;TZID=America/Los_Angeles:20210922T110000
DTEND;TZID=America/Los_Angeles:20210922T120000
LOCATION:Zoom: https://stanford.zoom.us/j/99474772502?pwd=NEQrQUQ0MzdtRjFiY
U42TCs2bFZsUT09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: Seeing the Future from Images: ML-Based Model
s for Cancer Risk Assessment
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-seeing-the-future-from-images-ml-based-models-for-cancer-risk-a
ssessment/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2021/08/regina-300x300.jpeg\;300\;300\,medium\
;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploa
ds/2021/08/regina-300x300.jpeg\;300\;300\,large\;http://web.stanford.edu/g
roup/radweb/cgi-bin/radcalendar/wp-content/uploads/2021/08/regina-300x300.
jpeg\;300\;300\,full\;http://web.stanford.edu/group/radweb/cgi-bin/radcale
ndar/wp-content/uploads/2021/08/regina-300x300.jpeg\;300\;300
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2993@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/retreat/2021Hybrid.html
DESCRIPTION:
Keynote:
\n
Self-Supervision for Learning from the Bot
tom Up
\n
Why do self-supervised learning? A common answer is: “beca
use data labeling is expensive.” In this talk\, I will argue that there ar
e other\, perhaps more fundamental reasons for working on self-supervision
. First\, it should allow us to get away from the tyranny of top-down sema
ntic categorization and force meaningful associations to emerge naturally
from the raw sensor data in a bottom-up fashion. Second\, it should allow
us to ditch fixed datasets and enable continuous\, online learning\, which
is a much more natural setting for real-world agents. Third\, and most in
triguingly\, there is hope that it might be possible to force a self-super
vised task curriculum to emerge from first principles\, even in the absenc
e of a pre-defined downstream task or goal\, similar to evolution. In this
talk\, I will touch upon these themes to argue that\, far from running it
s course\, research in self-supervised learning is only just beginning.
C
ancer research continues to be predicated on a 1970s model of research an
d treatment. Despite half a century of intense research\, we are failing s
pectacularly to improve the outcome for patients with advanced disease. Th
ose who are cured continue to be treated mostly with the older strategies
(surgery-chemo-radiation). Our contention is that the real solution to the
cancer problem is to diagnose cancer early\, at the stage of The First Ce
ll. The rapidly evolving technologies are doing much in this area but need
to be expanded. We study a pre-leukemic condition called myelodysplastic
syndrome (MDS) with the hope that we can detect the first leukemia cells a
s the disease transforms to acute myeloid leukemia (AML). Towards this end
\, we have collected blood and bone marrow samples on MDS and AML patients
since 1984. Today\, our Tissue Repository has more than 60\,000 samples.
We propose novel methods to identify surrogate markers that can identify t
he First Cell through studying the serial samples of patients who evolve f
rom MDS to AML.
\n
\n
ABOUT
\n
Dr. Raza
is a Professor of Medicine and Director of the MDS Center at Columbia Univ
ersity in New York\, NY. She started her research in Myelodysplastic Syndro
mes (MDS) in 1982 and moved to Rush University\, Chicago\, Illinois in 199
2\, where she was the Charles Arthur Weaver Professor in Oncology and Dire
ctor\, Division of Myeloid Diseases. The MDS Program\, along with a Tissue
Repository containing more than 50\,000 samples from MDS and acute leukem
ia patients was successfully relocated to the University of Massachusetts
in 2004 and to Columbia University in 2010.
\n
Before moving to New Y
ork\, Dr. Raza was the Chief of Hematology Oncology and the Gladys Smith M
artin Professor of Oncology at the University of Massachusetts in Worcest
er. She has published the results of her laboratory research and clinical
trials in prestigious\, peer reviewed journals such as The New England Jou
rnal of Medicine\, Nature\, Blood\, Cancer\, Cancer Research\, British Jou
rnal of Hematology\, Leukemia\, and Leukemia Research. Dr. Raza serves on
numerous national and international panels as a reviewer\, consultant and
advisor and is the recipient of a number of awards.
\n
\n
Hosted by: Utkan Demirci\, Ph.D. \nSponsor
ed by: The Canary Center & the Department of Radiology \nStanford University – School of Medicine
Saeed Hassanpour\, PhD \nAssociate
Professor of Biomedical Data Science \nAssociate Professor of Epidem
iology \nAssociate Professor of Computer Science \nDartmouth Gei
sel School of Medicine
\n
Deep Learning for Histology Image Analysis
\n
Abstract: \nWith the recen
t expansions of whole-slide digital scanning\, archiving\, and high-throug
hput tissue banks\, the field of digital pathology is primed to benefit si
gnificantly from deep learning technology. This talk will cover several ap
plications of deep learning for characterizing histopathological patterns
on high-resolution microscopy images for cancerous and precancerous lesion
s. Furthermore\, the current challenges for building deep learning models
for pathology image analysis will be discussed and new methodological adva
nces to address these bottlenecks will be presented.
\n
About
:
\n
Dr. Saeed Hassanpour is an Associate Professor in the D
epartments of Biomedical Data Science\, Computer Science\, and Epidemiolog
y at Dartmouth College. His research is focused on machine learning and mu
ltimodal data analysis for precision health. Dr. Hassanpour has led multip
le NIH-funded research projects\, which resulted in novel machine learning
and deep learning models for medical image analysis and clinical text min
ing to improve diagnosis\, prognosis\, and personalized therapies. Before
joining Dartmouth\, he worked as a Research Engineer at Microsoft. Dr. Has
sanpour received his Ph.D. in Electrical Engineering with a minor in Biome
dical Informatics from Stanford University and completed his postdoctoral
training at Stanford Center for Artificial Intelligence in Medicine & Imag
ing.
Indrani Bhattacharya\, PhD \nPostdoctoral Research Fellow \nDepartment of Radiology \nStanfo
rd University
\n
Title: Multimodal Data Fusion for S
elective Identification of Aggressive and Indolent Prostate Cancer on Magn
etic Resonance Imaging
\n
Abstract: Automated method
s for detecting prostate cancer and distinguishing indolent from aggressiv
e disease on Magnetic Resonance Imaging (MRI) could assist in early diagno
sis and treatment planning. Existing automated methods of prostate cancer
detection mostly rely on ground truth labels with limited accuracy\, ignor
e disease pathology characteristics observed on resected tissue\, and cann
ot selectively identify aggressive (Gleason Pattern≥4) and indolent (Gleas
on Pattern=3) cancers when they co-exist in mixed lesions. This talk will
cover multimodal and multi-scale fusion approaches to integrate radiology
images\, pathology images\, and clinical domain knowledge about prostate c
ancer distribution to selectively identify and localize aggressive and ind
olent cancers on prostate MRI.
\n\n
Rogier van der Sluijs\, PhD \nPostd
octoral Research Fellow \nDepartment of Radiology \nStanford Uni
versity
\n
Title: Pretraining Neural Networks for Me
dical AI
\n
Abstract: Transfer learning has quickly
become standard practice for deep learning on medical images. Typically\,
practitioners repurpose existing neural networks and their corresponding w
eights to bootstrap model development. This talk will cover several method
s to pretrain neural networks for medical tasks. The current challenges fo
r pretraining neural networks in Radiology will be discussed and recent ad
vancements that address these bottlenecks will be highlighted.
Nina Kottler\, MD
\, MS \nAssociate Chief Medical Officer\, Clinical AI \nVP C
linical Operations \nRadiology Partners
\n
Abstract: \nWe have a call to action in healthcare – we need to drive val
ue. Artificial intelligence (AI)\, if deployed correctly\, can help accom
plish this lofty mission. In this discussion we will review the following
lessons learned in deploying radiology AI at scale: 4 unexpected benefit
s of implementing AI emergent finding triage\; the importance of investing
in AI radiologist education\; how “most” AI needs to be incorporated into
the radiologist workflow\; why a platform is required to deploy AI at sca
le and what a modern platform looks like\; how to use AI to add value to y
our data\; and\, as Dr. Curt Langlotz famously said\, why rads (practices)
who use AI will replace those who don’t (a depiction of what the role of
the radiologist might look like in a tech enabled future).
\n
Bio: \nDr. Kottler has been a practicing radiologist specia
lizing in emergency imaging for over 16 years. Combining her clinical exp
erience with a graduate degree in applied mathematics\, she has been using
technological innovation to drive value in radiology. As the first radio
logist to join Radiology Partners\, Dr. Kottler has held multiple leadersh
ip positions within her practice and is currently the Associate Chief Medi
cal Officer for Clinical AI. Externally Dr. Kottler serves on multiple co
mmittees for the ACR\, RSNA\, and SIIM. Dr. Kottler is also passionate ab
out promoting diversity and creating a culture of belonging. As such she
is a member of the AAWR\, is a member of the diversity and inclusion commi
ttee at SIIM\, serves on the steering committee for RAD=\, and leads the e
ducation and development division of the Belonging Committee within Radiol
ogy Partners.
Spyridon (Spyros) Bakas\,
PhD \nAssistant Professor in the Department of Pathology\,
\nLaboratory Medicine\, and of Radiology \nCenter for Biomedical Imag
e Computing and Analytics (CBICA) \nPerelman School of Medicine
\nUniversity of Pennsylvania
\n
Title: Imaging Analytics for N
euro-Oncology: \nTowards Computational Diagnostics
\n
Ab
stract: Central nervous system (CNS) tumors come with vastly hete
rogeneous histologic\, molecular\, and radiographic landscapes\, rendering
their precise characterization challenging. The rapidly growing fields of
biophysical modeling and radiomics have shown promise in better character
izing the molecular\, spatial\, and temporal heterogeneity of tumors. Inte
grative analysis of CNS tumors\, including clinically acquired multi-param
etric magnetic resonance imaging (mpMRI)\, assists in identifying macrosco
pic quantifiable tumor patterns of invasion and proliferation\, potentiall
y leading to improved (a) detection/segmentation of tumor subregions and (
b) computer-aided diagnostic/prognostic/predictive modeling. This talk wil
l touch upon example studies on this space\, as well as an overview of the
largest to-date real-world federated learning study to detect brain tumor
boundaries.
Harini Veeraraghavan\, PhD \nAssociat
e Attending Computer Scientist \nDepartment of Medical Physics
\nMemorial Sloan-Kettering Cancer Center
\n
Using AI for Long
itudinal Tumor Response Monitoring and AI Guided Cancer Treatments: From L
ab to Clinic
\n
Abstract: \nCancer pat
ients are imaged with multiple imaging modalities as part of routine cance
r care. However\, the rich information available from the images is not f
ully exploited to better manage patient care through earlier intervention
as well as more precise targeted treatments. In this talk\, I will present
some of the new AI methodologies we have been developing to track tumor r
esponse from routinely acquired imaging as well as in image-guided
radiation treatments using CT/cone-beam CT and MRI-guided precisio
n treatments. I will also present some demonstration studies of how AI bas
ed automated segmentation and tumor as well as healthy tissue change asses
sment can be used to detect treatment toxicities early\, enabling clinician
s to better manage cancer care. Finally\, I will show how these developed
methods have been put into routine clinical use for automating radiotherapy
treatment planning at MSK.
DTSTART;TZID=America/Los_Angeles:20220316T120000
DTEND;TZID=America/Los_Angeles:20220316T130000
LOCATION:ZOOM: https://stanford.zoom.us/j/99319571697?pwd=c2lhRkN4cXEzTzFzM
UhKaTVJMHZLQT09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: Using AI for Longitudinal Tumor Response Moni
toring and AI Guided Cancer Treatments: From Lab to Clinic
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-using-ai-for-longitudinal-tumor-response-monitoring-and-ai-guid
ed-cancer-treatments-from-lab-to-clinic/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2022/03/harini-veeraraghavan_15_1200x800.jpg\;
200\;200\,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar
/wp-content/uploads/2022/03/harini-veeraraghavan_15_1200x800.jpg\;200\;200
\,large\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-conte
nt/uploads/2022/03/harini-veeraraghavan_15_1200x800.jpg\;200\;200\,full\;h
ttp://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploads
/2022/03/harini-veeraraghavan_15_1200x800.jpg\;200\;200
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3071@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; rtotah@stanford.edu\; https://ibiis.stanford.edu/even
ts/seminars/2022seminars.html
DESCRIPTION:\n
Spyridon (Spyros) Bakas\,
PhD \nAssistant Professor in the Department of Pathology\,
\nLaboratory Medicine\, and of Radiology \nCenter for Biomedical Imag
e Computing and Analytics (CBICA) \nPerelman School of Medicine
\nUniversity of Pennsylvania
\n
Title: Imaging Analytics for N
euro-Oncology: Towards Computational Diagnostics
\n
Central nervous s
ystem (CNS) tumors come with vastly heterogeneous histologic\, molecular\,
and radiographic landscapes\, rendering their precise characterization ch
allenging. The rapidly growing fields of biophysical modeling and radiomic
s have shown promise in better characterizing the molecular\, spatial\, an
d temporal heterogeneity of tumors. Integrative analysis of CNS tumors\, i
ncluding clinically acquired multi-parametric magnetic resonance imaging (
mpMRI)\, assists in identifying macroscopic quantifiable tumor patterns of
invasion and proliferation\, potentially leading to improved (a) detectio
n/segmentation of tumor subregions and (b) computer-aided diagnostic/progn
ostic/predictive modeling. This talk will touch upon example studies on th
is space\, as well as an overview of the largest to-date real-world federa
ted learning study to detect brain tumor boundaries.
Daniel Marcus\, PhD
\nProfessor of Radiology \nDirector of the Neuroinformatics Research
Group \nDirector of the Neuroimaging Informatics and Analysis Center \nWashington University
\n
Abstract: \nDeveloping and deplo
ying computational tools for neuro-oncology applications includes a sequen
ce of complex steps to identify appropriate images\, assess image quality\
, annotate\, process\, and otherwise prepare and manipulate data for analysis. W
e have implemented services and tools on the open source XNAT informatics
platform to automate much of this workflow to improve both its efficiency
and effectiveness. Dr. Marcus will discuss this automated workflow and its
implementation in a number of data sets and applications at Washington Un
iversity.
Lena Maier-Hein\, PhD \nHead of Department\, Computer As
sisted Medical Interventions \nManaging Director\, Data Science and D
igital Oncology \nManaging Director\, National Center for Tumor Disea
ses \nGerman Cancer Research Center
\n
Title: Missing the
(Bench)mark?
\n
Abstract
\n
Machine
learning has begun to revolutionize almost all areas of health research. S
uccess stories cover a wide variety of application fields ranging from rad
iology and gastroenterology all the way to mental health. Strikingly\, how
ever\, solutions that perform favorably in research generally do not trans
late well to clinical practice\, and little attention is being given to le
arning from failures. Focusing on biomedical image analysis as a key area
of health-related machine learning\, this talk will present pitfalls\, cav
eats and recommendations related to machine learning-based biomedical imag
e analysis. As a particular highlight\, it will cover as-yet unpublished work
on two key research questions related to biomedical image analysis compet
itions: 1) How can we best select performance metrics according to the cha
racteristics of the driving biomedical question? And 2) Why is the winner
the best? The results have been compiled based on the input of hundreds of
image analysis researchers worldwide.
\nLauren Oa
kden-Rayner\, PhD \nDirector of Research in Medical Imaging
\nRoyal Adelaide Hospital \nSenior Research Fellow \nAustralian
Institute for Machine Learning
\n
Title: Medical AI
Safety – A Clinical Perspective
\n
Abstract: \nMedical ar
tificial intelligence is rapidly moving into clinics\, particularly in ima
ging-based specialties such as radiology. This transition is producing man
y new challenges\, as the regulatory environment has struggled to keep up
and AI training for healthcare workers is virtually non-existent. Dr. Oakd
en-Rayner will provide a clinical safety perspective on medical AI\, discu
ss a range of identified risks and potential harms\, and discuss possible
solutions to mitigate these risks as this exciting field continues to deve
lop.
\n
Bio: \nDr. Lauren Oakden-Rayner (FRANZC
R\, PhD) is the Director of Research in Medical Imaging at the Royal Adela
ide Hospital and is a senior research fellow at the Australian Institute f
or Machine Learning. Her research explores the safe translation of artific
ial intelligence technologies into clinical practice\, from both a technic
al and clinical perspective. \n
David Magnus\, PhD \nThomas A Raffin Professor of Medicine and B
iomedical Ethics and Professor of Pediatrics\, Medicine\, and by courtesy
of Bioengineering \nDirector\, Stanford Center for Biomedical Ethics<
br />\nAssociate Dean for Research \nStanford University
\n
T
itle: Ethical Challenges in the Application of AI to Healthcare
\n
Abstract: \nThis presentation will focus on three issues. Fi
rst\, applying AI to healthcare requires access to large data sets. Data a
cquisition and data sharing raises a number of challenging ethical issues\
, including challenges to traditional understandings of informed consent\,
and the importance of diversity and inclusion in data sources. Second\, I wil
l briefly discuss the widely discussed issues around justice and equity ra
ised by AI in healthcare. Finally\, I will discuss challenges with ethical
oversight and governance\, particularly in relation to research developme
nt of AI. IRBs are prohibited from considering downstream social conseque
nces and harms to individuals other than research participants when evalua
ting the harms and risks of research. This gap needs to be filled\, partic
ularly as dual uses of AI models are now recognized as a problem.\n
Bio: \nDavid Magnus\, Ph
D is Thomas A. Raffin Professor of Medicine and Biomedical Ethics and Prof
essor of Pediatrics and Medicine and by Courtesy of Bioengineering at Stan
ford University\, where he is Director of the Stanford Center for Biomedic
al Ethics and an Associate Dean of Research. Magnus is a member of the Ethic
s Committee for the Stanford Hospital. He is currently the Vice-Chair of t
he IRB for the NIH Precision Medicine Initiative (“All of Us”). He is the
former President of the Association of Bioethics Program Directors\, and i
s the Editor in Chief of the American Journal of Bioethics. He has publish
ed articles on a wide range of topics in bioethics\, including research et
hics\, genetics\, stem cell research\, organ transplantation\, end of life
\, and patient communication. He was a member of the Secretary of Agricult
ure’s Advisory Committee on Biotechnology in the 21st Century and currentl
y serves on the California Human Stem Cell Research Advisory Committee. He
is the principal editor of a collection of essays entitled “Who Owns Life
?” (2002) and his publications have appeared in the New England Journal of Med
icine\, Science\, Nature Biotechnology\, and the British Medical Journal.
He has appeared on many radio and television shows including 60 Minutes\,
Good Morning America\, The Today Show\, CBS This Morning\, Fox News Sunday
\, ABC World News\, and NPR. In addition to his scholarly work\, he has
published opinion pieces in the Philadelphia Inquirer\, the Chicago Tribun
e\, the San Jose Mercury News\, and the New Jersey Star-Ledger.
DTSTART;TZID=America/Los_Angeles:20220921T133000
DTEND;TZID=America/Los_Angeles:20220921T143000
LOCATION:ZOOM: https://stanford.zoom.us/j/99191454207?pwd=N0ZYWnh1Mks0UEluO
VRUZjdWNHZPUT09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: Ethical Challenges in the Application of AI t
o Healthcare
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-ethical-challenges-in-the-application-of-ai-to-healthcare/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2022/09/david_magnus_ep_44_good.jpg\;200\;200\
,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-conte
nt/uploads/2022/09/david_magnus_ep_44_good.jpg\;200\;200\,large\;http://we
b.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2022/09
/david_magnus_ep_44_good.jpg\;200\;200\,full\;http://web.stanford.edu/grou
p/radweb/cgi-bin/radcalendar/wp-content/uploads/2022/09/david_magnus_ep_44
_good.jpg\;200\;200
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3093@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/seminars/2022seminars.html
DESCRIPTION:\n
Polina Golland\, PhD \nProfessor of Electrical Engineering and Computer Science \nPI i
n the Computer Science and Artificial Intelligence Laboratory \nMassa
chusetts Institute of Technology
\n
Title: Learning to Read X-
Ray: Applications to Heart Failure Monitoring
\n
Abstract: We propose and demonstrate a novel approach to training image cl
assification models on large collections of images with limited labels. We
take advantage of the availability of radiology reports to construct a jo
int multimodal embedding that serves as a basis for classification. We dem
onstrate the advantages of this approach in the assessment of pulmonary ed
ema severity in congestive heart failure\, the application that motivated
the development of the method.
Baris Turkbey\, MD\, FSAR \nSenior Cli
nician \nSection Chief of MRI \nSection Chief of Artificial Inte
lligence \nMolecular Imaging Branch \nNational Cancer Institute\
, NIH
\n
Title: Advanced Prostate Cancer Imaging
\n
Talk Objectives:
\n
To discuss the current status and limitations of localized prostate cance
r diagnosis.
\n
To discuss the use of artificial intelligence in the diagnosis of localiz
ed prostate cancer.
\n
To discuss the use of molecular imaging in clinical prostate cancer manag
ement.
\n
Bio: \nDr. Turkbey obtained his medical degree from Hacettepe University i
n Ankara\, Turkey\, in 2003. He completed his residency in Diagnostic and
Interventional Radiology at Hacettepe University and joined the Molecular
Imaging Branch (MIB) of the National Cancer Institute\, NIH\, in 2007. His
main research areas are imaging of prostate cancer (multiparametric MRI\,
PET/CT)\, image-guided biopsy and treatment techniques (focal therapy\, s
urgery\, and radiation therapy) for prostate cancer\, and artificial intel
ligence. Dr. Turkbey is a member of the Prostate Imaging Reporting & Data
System (PI-RADS) Steering Committee and directs the Magnetic Resonance Ima
ging section and the Artificial Intelligence Resource in the MIB.
In Person at the Clark Center S360 – Lunch will be p
rovided! \nZoom: https://stanford.zoom.us/j
/99496515255?pwd=MHlXbXM2WXJULzZwemk1WjJHNFZOdz09
\n\n
Anthony Gatti\, PhD \nPostdoctoral Research Fellow
\nDepartment of Radiology \nWu Tsai Human Performance Alliance
\nStanford University
\n
Title: Towards Understanding Knee Health Using Automated MR
I-Based Statistical Shape Models
\n
Abstract: Knee injuries and pain are prevalent across all age
s\, with varying causes from “anterior knee pain” in runners to osteoarthr
itis-related pain. Osteoarthritis pain is a particular problem because str
uctural outcomes assessed on medical images often disagree with symptoms.
Most studies trying to understand knee health and pain use simple biomarke
rs such as mean cartilage thickness. My talk will present an automated pip
eline for quantifying the whole knee using statistical shape modeling. I w
ill present a conventional statistical shape model as well as a novel appr
oach that uses generative neural implicit representations. Both modeling a
pproaches allow unsupervised identification of salient anatomic features.
I will demonstrate how these features can be used to predict existing radi
ographic outcomes\, patient demographics\, and knee pain.
\n
Liangqiong Qu\, PhD \nPostdoctoral Research Fellow \nDe
partment of Biomedical Data Sciences \nStanford University\n
Title: Distributed Deep Learning in Medical Imaging
\n
Abstract: Distributed deep learning is an emerging research paradigm that
enables collaborative training of deep learning models without sharing pat
ient data. \nIn this talk\, we will first investigate the use of distr
ibuted deep learning to build medical imaging classification models in a r
eal-world collaborative setting. \nWe then present several strategies
to tackle the challenges of data heterogeneity and the scarcity of quality
labeled data in distributed deep learning.
Archana Venkataraman\, PhD \nAssociate Profes
sor of Electrical and Computer Engineering \nBoston University
\n
Title: Biologically Inspired Deep Learning as a New Window into B
rain Dysfunction\n
Abstract: Deep learning has disr
upted nearly every major field of study from computer vision to genomics.
The unparalleled success of these models has\, in many cases\, been fueled
by an explosion of data. Millions of labeled images\, thousands of annota
ted ICU admissions\, and hundreds of hours of transcribed speech are commo
n standards in the literature. Clinical neuroscience is a notable holdout
to this trend. It is a field of unavoidably small datasets\, massive patie
nt variability\, and complex (largely unknown) phenomena. My lab tackles t
hese challenges across a spectrum of projects\, from answering foundationa
l neuroscientific questions to translational applications of neuroimaging
data to exploratory directions for probing neural circuitry. One of our ke
y strategies is to integrate a priori information about the brain a
nd biology into the model design.
\n
This talk will highlight two ong
oing projects that epitomize this strategy. First\, I will showcase an end
-to-end deep learning framework that fuses neuroimaging\, genetic\, and ph
enotypic data\, while maintaining interpretability of the extracted biomar
kers. We use a learnable dropout layer to extract a sparse subset of predi
ctive imaging features and a biologically informed deep network architectu
re for whole-genome analysis. Specifically\, the network uses hierarchical
graph convolutions that mimic the organization of a well-established gene
ontology to track the convergence of genetic risk across biological pathwa
ys. Second\, I will present a deep-generative hybrid model for epileptic s
eizure detection from scalp EEG. The latent variables in this model captur
e the spatiotemporal spread of a seizure\; they are complemented by a nonp
arametric likelihood based on convolutional neural networks. I will also h
ighlight our current end-to-end extensions of this work focused on seizure
onset localization. Finally\, I will conclude with exciting future direct
ions for our work across the foundational\, translational\, and explorator
y axes.
DTSTART;TZID=America/Los_Angeles:20230118T120000
DTEND;TZID=America/Los_Angeles:20230118T130000
LOCATION:Zoom: https://stanford.zoom.us/j/96155849129?pwd=MTVtenF6RWdHMEwwd
EZoV3NhM0svUT09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Zoom Seminar: Biologically Inspired Deep Learning as a
New Window into Brain Dysfunction
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-biologically-inspired-deep-learning-as-a-new-window-into-brain-
dysfunction/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2023/01/Picture1-298x300.jpg\;298\;300\,medium
\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uplo
ads/2023/01/Picture1-298x300.jpg\;298\;300\,large\;http://web.stanford.edu
/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2023/01/Picture1-298x
300.jpg\;298\;300\,full\;http://web.stanford.edu/group/radweb/cgi-bin/radc
alendar/wp-content/uploads/2023/01/Picture1-298x300.jpg\;298\;300
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3120@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/seminars/2023seminars.html
DESCRIPTION:\n
Andrew Janowczyk\, P
hD \nAssistant Professor \nDepartment of Biomedical Engineer
ing \nEmory University
\n
Title: Computational Pathology:
Towards Precision Medicine
\n
Abstract: \nRoughly 40% of
the population will be diagnosed with some form of cancer in their lifeti
me. In a large majority of these cases\, a definitive cancer diagnosis is
only possible via histopathologic confirmation on a tissue slide. With the
increasing digitization of pathology slides\, a wealth
of new untapped data is now regularly being created.
\n
Computational
analysis of these routinely captured H&E slides is facilitating the creat
ion of diagnostic tools for tasks such as disease identification and gradi
ng. Further\, by identifying patterns of disease presentation across large
cohorts of retrospectively analyzed patients\, new insights for predictin
g prognosis and therapy response are possible [1\,2]. Such biomarkers\, de
rived from inexpensive histology slides\, stand to improve the standard of
care for all patient populations\, especially where expensive genomic tes
ting may not be readily available. Moreover\, since numerous other disease
s and disorders\, such as oncoming clinical heart failure [3]\, are simila
rly diagnosed via pathology slides\, those patients also stand to benefit
from these same technological advances in the digital pathology space.
\n
This talk will discuss our research aimed towards reaching the goal o
f precision medicine\, wherein patients receive optimized treatment based
on historical evidence. The talk discusses how the applications of deep le
arning in this domain are significantly improving the efficiency and robus
tness of these models [4]. Numerous challenges remain\, though\, especiall
y in the context of quality control and annotation gathering. This talk fu
rther introduces the audience to open-source tools being developed and dep
loyed to meet these pressing needs\, including quality control (histoqc.co
m [5])\, annotation (quickannotator.com)\, labeling (patchsorter.com)\, an
d validation (cohortfinder.com).
Meli
ssa McCradden\, PhD \nJohn and Melinda Thompson Director of Artif
icial Intelligence in Medicine \nIntegration Lead\, AI in Medicine In
itiative \nBioethicist\, The Hospital for Sick Children (SickKids) \nAssociate Scientist\, Genetics & Genome Biology \nAssistant Prof
essor\, Dalla Lana School of Public Health
\n
Title: What Make
s a ‘Good’ Decision? An Empirical Bioethics Study of Using AI at the Bedsi
de
\n
Abstract: This presentation will identify the gap between AI accuracy
and good clinical decision-making. I will present a study in which we deve
lop an ethical framework for clinical decision-making that can help clinic
ians meet medicolegal and ethical standards when using AI\, one that relie
s on neither explainability nor perfect model accuracy.
DTSTART;TZID=America/Los_Angeles:20230315T120000
DTEND;TZID=America/Los_Angeles:20230315T130000
LOCATION:https://stanford.zoom.us/j/96612401401?pwd=WFNJb2Q4dStoVDE5a25BYTB
kMjN4QT09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: What Makes a ‘Good’ Decision? An Empirical Bi
oethics Study of Using AI at the Bedside
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-what-makes-a-good-decision-an-empirical-bioethics-study-of-usin
g-ai-at-the-bedside/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2023/03/Screen-Shot-2023-03-06-at-10.12.28-AM-
247x300.png\;247\;300\,medium\;http://web.stanford.edu/group/radweb/cgi-bi
n/radcalendar/wp-content/uploads/2023/03/Screen-Shot-2023-03-06-at-10.12.2
8-AM-247x300.png\;247\;300\,large\;http://web.stanford.edu/group/radweb/cg
i-bin/radcalendar/wp-content/uploads/2023/03/Screen-Shot-2023-03-06-at-10.
12.28-AM-247x300.png\;247\;300\,full\;http://web.stanford.edu/group/radweb
/cgi-bin/radcalendar/wp-content/uploads/2023/03/Screen-Shot-2023-03-06-at-
10.12.28-AM-247x300.png\;247\;300
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3134@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/seminars/2023seminars.html
DESCRIPTION:\n
Marzyeh Ghassemi\, PhD
\nAssistant Professor\, Department of Electrical Engineering and
Computer Science \nInstitute for Medical Engineering & Science
\nMassachusetts Institute of Technology (MIT) \nCanadian CIFAR AI Cha
ir at Vector Institute
\n
Title: Designing Machine Learning Pr
ocesses For Equitable Health Systems
\n
Abstract: \nDr. Marzyeh Ghassemi focuses on creating and applying mac
hine learning to understand and improve health in ways that are robust\, p
rivate\, and fair. Dr. Ghassemi will talk about her work on training model
s that do not learn biased rules or recommendations that harm minorities o
r minoritized populations. The Healthy ML group tackles the many novel tec
hnical opportunities for machine learning in health and works to make impo
rtant progress through careful application to this domain.
Hoifung Poon\, PhD \nGeneral Manager at Health Futures o
f Microsoft Research \nAffiliated Professor at the University of Wash
ington Medical School.
\n
Title: Advancing Health at the Speed
of AI
\n
Abstract: The dream of precision health is to develop a data-driven\, continuous
learning system where new health information is instantly incorporated to
optimize care delivery and accelerate biomedical discovery. In reality\,
however\, the health ecosystem is plagued by overwhelming unstructured dat
a and unscalable manual processing. Self-supervised AI such as large langu
age models (LLMs) can supercharge structuring of biomedical data and accel
erate transformation towards precision health. In this talk\, I’ll present
our research progress on biomedical AI for precision health\, spanning bi
omedical LLMs\, multi-modal learning\, and causal discovery. This enables
us to extract knowledge from tens of millions of publications\, structure
real-world data for millions of cancer patients\, and apply the extracted
knowledge and real-world evidence to advancing precision oncology in deep
partnerships with real-world stakeholders.
\n
DTSTART;TZID=America/Los_Angeles:20230426T143000
DTEND;TZID=America/Los_Angeles:20230426T153000
LOCATION:LKSC 120 and remote via Zoom @ https://stanford.zoom.us/j/92666973
395?pwd=SHpzVmVPMEFYRXQ5Skp5eG1vcXBrdz09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: Advancing Health at the Speed of AI
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-advancing-health-at-the-speed-of-ai/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2023/04/Hoifung-Poon-PhD.jpg\;200\;198\,medium
\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uplo
ads/2023/04/Hoifung-Poon-PhD.jpg\;200\;198\,large\;http://web.stanford.edu
/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2023/04/Hoifung-Poon-
PhD.jpg\;200\;198\,full\;http://web.stanford.edu/group/radweb/cgi-bin/radc
alendar/wp-content/uploads/2023/04/Hoifung-Poon-PhD.jpg\;200\;198
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3144@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/seminars/2023seminars.html
DESCRIPTION:\n
Despina Kontos\, PhD \nMatthew J. W
ilson Professor of Research Radiology II \nAssociate Vice-Chair for R
esearch\, Department of Radiology \nPerelman School of Medicine
\nUniversity of Pennsylvania
\n
Title: Radiomics and Radiogeno
mics: The Role of Imaging\, Machine Learning\, and AI\, as a Biomarker for
Cancer Prognostication and Therapy Response Evaluation
\n
Abstrac
t: Cancer is a heterogeneous disease\, with known inter-tumor and intr
a-tumor heterogeneity in solid tumors. Established histopathologic prognos
tic biomarkers\, generally acquired from a tumor biopsy\, may be limited by sa
mpling variation. Radiomics is an emerging field with the potential to lev
erage the whole tumor via non-invasive sampling afforded by medical imagin
g to extract high-throughput\, quantitative features for personalized tumo
r characterization. Identifying imaging phenotypes via radiomics analysis
and understanding their relationship with prognostic markers and patient o
utcomes can allow for a non-invasive assessment of tumor heterogeneity. Re
cent studies have shown that intrinsic radiomic phenotypes of tumor hetero
geneity for cancer may have independent prognostic value when predicting d
isease aggressiveness and recurrence. The independent prognostic value of
imaging heterogeneity phenotypes suggests that radiogenomic phenotypes can
provide a non-invasive characterization of tumor heterogeneity to augment
genomic assays in precision prognosis and treatment.
DTSTART;TZID=America/Los_Angeles:20230517T120000
DTEND;TZID=America/Los_Angeles:20230517T130000
LOCATION:Clark Center S360 - Zoom Details on IBIIS website @ 318 Campus Dri
ve
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: Radiomics and Radiogenomics: The Role of Imag
ing\, Machine Learning\, and AI\, as a Biomarker for Cancer Prognosticatio
n and Therapy Response Evaluation
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-radiomics-and-radiogenomics-the-role-of-imaging-machine-learnin
g-and-ai-as-a-biomarker-for-cancer-prognostication-and-therapy-response-ev
aluation/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2023/05/kont4311.jpg\;200\;200\,medium\;http:/
/web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2023
/05/kont4311.jpg\;200\;200\,large\;http://web.stanford.edu/group/radweb/cg
i-bin/radcalendar/wp-content/uploads/2023/05/kont4311.jpg\;200\;200\,full\
;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploa
ds/2023/05/kont4311.jpg\;200\;200
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3150@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T040217Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/seminars/2023seminars.html
DESCRIPTION:\n
Daguang Xu\, PhD \nSenior Research Manager \nNVIDIA Healthcare
\n
Title: Industrial Applied Research i
n Healthcare and Federated Learning at NVIDIA
\n
Abstract: As
the market leader in deep learning and parallel computing\, NVIDIA is full
y committed to advancing applied research in medical imaging. Our goal is
to revolutionize the capabilities of medical doctors and radiologists by e
quipping them with powerful tools and applications based on deep learning.
We firmly believe that the integration of deep learning and accelerated A
I will have a profound impact on the life sciences\, medicine\, and the he
althcare industry as a whole. To drive this transformative process\, NVIDI
A is actively democratizing deep learning through the provision of a compr
ehensive AI computing platform specifically designed for the healthcare co
mmunity. These GPU-accelerated solutions not only promote collaboration bu
t also prioritize the security of each institution’s information. By doing
so\, we are fostering a collective effort in harnessing the potential of
deep learning to benefit healthcare.
\n
During this talk\, I will sho
wcase remarkable research achievements accomplished by NVIDIA’s deep learn
ing in medical imaging team. This includes breakthroughs in segmentation\,
self-supervised learning\, federated learning\, and other related areas.
Additionally\, I will provide insights into the exciting avenues of resear
ch that our team is currently exploring.
Negar Golestani\, PhD \nPostdoctoral Research Fellow \nDepartment of Radiology \nStanford University
\n
\n
Title: AI in Radiology-Pathology Fusion Towards Precise Breast Cancer Detection
\n
Abstract: Breast cancer is a global public health concern with various treatmen
t options based on tumor characteristics. Pathological examination of exci
sed tissue after surgery provides important information for treatment deci
sions. This processing\, which involves the manual selection of representa
tive sections for histological examination\, is time-consuming and subject
ive and can lead to sampling errors. Accurately identifying
residual tumors is a challenging task\, which highlights the need for sys
tematic or assisted methods. Radiology-pathology registration is essential
for developing deep-learning algorithms to automate cancer detection on r
adiology images. However\, aligning faxitron and histopathology images is
difficult due to content and resolution differences\, tissue deformation\,
artifacts\, and imprecise correspondence. We propose a novel deep learnin
g-based pipeline for affine registration of faxitron images (x-ray represe
ntations of macrosections of ex-vivo breast tissue) with their correspondi
ng histopathology images. Our model combines convolutional neural networks
(CNN) and vision transformers (ViT)\, capturing local and global informat
ion from the entire tissue macrosection and its segments. This integrated
approach enables simultaneous registration and stitching of image segments
\, facilitating segment-to-macrosection registration through a puzzling-ba
sed mechanism. To overcome the limitations of multi-modal ground truth dat
a\, we train the model using synthetic mono-modal data in a weakly supervi
sed manner. The trained model successfully performs multi-modal registrati
on\, outperforms existing baselines\, including deep learning-based and it
erative models\, and is approximately 200 times faster than the iterative
approach. The proposed registration method allows for the p
recise mapping of pathology labels onto radiology images\, thereby establi
shing ground truth labels for training classification and detection models
on radiological data. This work bridges the gap in current research and c
linical workflow\, offering potential improvements in efficiency and accur
acy for breast cancer evaluation and streamlining pathology workflow.
\n
\n
Jean Benoit Delbrouck\, PhD \nResearch Scientist \nDepartment of Radiology \nStanford Unive
rsity \n
\n
Title: Generating Accurate and Factually Correct Medical T
ext \nAbstract: Generating factually correct medical
text is of utmost importance for several reasons. Firstly\, patient sa
fety is heavily dependent on accurate information as medical decisions are
often made based on the information provided. Secondly\, trust in AI as a
reliable tool in the medical field is essential\, and this trust can only
be established by generating accurate and reliable medical text. Lastly\,
medical research also relies heavily on accurate information for meaningf
ul results.
\n
Recent studies have explore
d new approaches for generating medical text from images or findings\, ran
ging from pretraining to reinforcement learning to leveraging expert an
notations. However\, a potential game changer in the field is the integrat
ion of GPT models in pipelines for generating factually correct medical te
xt for research or production purposes.
Bram van G
inneken\, PhD \nProfessor of Medical Image Analysis \nChair
of the Diagnostic Image Analysis Group \nRadboud University Medical C
enter
\n
Title: Why AI Should Replace Radiologists
\n
Abstract: \nIn this talk\, I will provide arguments for the thesi
s that nearly all diagnostic radiology could be performed by computers and
that the notion that AI will not replace radiologists is only temporarily
true. Some well-known and lesser-known examples of AI systems analyzing m
edical images with stand-alone performance on par with or beyond that of
human experts will be presented. I will show that systems built by academi
a\, in colla
borative efforts\, may even outperform commercially available systems. Nex
t\, I will sketch a way forward to implement automated diagnostic radiolog
y and argue that this is needed to keep healthcare affordable in societie
s wrestling with aging populations. Some pitfalls\, like excessive demands
for trials\, will be discussed. The key to success is to convince radiolo
gists to take the lead in this process. They need to collaborate with AI d
evelopers\, but AI developers and the medical device industry should not l
ead this process. Radiologists should\, in fact\, stop training radiologis
ts\, and instead\, start training machines.
Andrey Fedorov\, PhD
\nAssociate Professor\, Harvard Medical School \nLead Investigat
or\, Brigham and Women’s Hospital
\n
\n
Title: NCI Imaging Data Commons: Towards Transparenc
y\, Reproducibility\, and Scalability in Imaging AI
\n
\n
Abstract: \nThe re
markable advances of artificial intelligence (AI) technology are revolutio
nizing established approaches to the acquisition\, interpretation\, and an
alysis of biomedical imaging data. Development\, validation\, and continuo
us refinement of AI tools require easy access to large\, high-quality anno
tated datasets\, which are both representative and diverse. The National C
ancer Institute (NCI) Imaging Data Commons (IDC) hosts over 50 TB of diver
se publicly available cancer image data spanning radiology and microscopy
domains. By harmonizing all data based on industry standards and colocali
zing it with analysis and exploration resources\, IDC aims to facilitate t
he development\, validation\, and clinical translation of AI tools and add
ress the well-documented challenges of establishing reproducible and tran
sparent AI processing pipelines. Balanced use of established commercial pr
oducts with open-source solutions\, interconnected by standard interfaces
\, provides value and performance\, while preserving sufficient agility to
address the evolving needs of the research community. Emphasis on the dev
elopment of tools\, use cases to demonstrate the utility of uniform data r
epresentation\, and cloud-based analysis aims to ease adoption and help de
fine best practices. Integration with other data in the broader NCI Cancer
Research Data Commons infrastructure opens opportunities for multiomics s
tudies incorporating imaging data to further empower the research communit
y to accelerate breakthroughs in cancer detection\, diagnosis\, and treatm
ent. The presentation will discuss the recent developments in IDC\, highli
ghting resources\, demonstrations\, and examples that we hope can help you
improve your everyday imaging research practices\, both those that use pu
blic datasets and those that use internal ones.
\n
DTSTART;TZID=America/Los_Angeles:20240320T120000
DTEND;TZID=America/Los_Angeles:20240320T130000
LOCATION:Clark Center S360 - Zoom Details on IBIIS website @ 318 Campus Dri
ve
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar – NCI Imaging Data Commons: Towards Transparen
cy\, Reproducibility\, and Scalability in Imaging AI
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-nci-imaging-data-commons-towards-transparency-reproducibility-a
nd-scalability-in-imaging-ai/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2024/03/Andrey-Fedorov.jpg\;200\;200\,medium\;
http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/upload
s/2024/03/Andrey-Fedorov.jpg\;200\;200\,large\;http://web.stanford.edu/gro
up/radweb/cgi-bin/radcalendar/wp-content/uploads/2024/03/Andrey-Fedorov.jp
g\;200\;200\,full\;http://web.stanford.edu/group/radweb/cgi-bin/radcalenda
r/wp-content/uploads/2024/03/Andrey-Fedorov.jpg\;200\;200
END:VEVENT
END:VCALENDAR