Mixed Realit
y for Surgical Guidance will take place on Thursday\, April 1st from 9:00
– 10:30 am PDT.
\n
The event will start with a one-hour panel discuss
ion featuring Dr. Bruce Daniel of Stanford Radiology and the Stanford IMME
RS Lab\; Christoffer Hamilton of Brainlab\, a surgical software and hardwa
re leader in Germany\; and Dr. Thomas Grégory of Orthopedic Surgery at the
Université Sorbonne Paris Nord.
\n
This panel will be moderated by D
r. Christoph Leuze of Stanford University and the Stanford Medical Mixed R
eality (SMMR) program.
\n
Immediately following the panel discussion\
, you are also invited to a 30-minute interactive session with the panelis
ts where questions and ideas can be explored in real time.
Background
: The U.S. Federal Government enacted the Screening Abdominal Aortic Aneurysms Very Efficiently (SAAAVE) Act in January 2007. Shortly afterwards\, the Department of Veterans Affairs (VA) implemented a more inclusive AAA screening policy for veteran beneficiaries.
\n
\n
Our study aimed to evaluate the impact of the VA program on AAA detection rate and all-cause mortality compared to a cohort of patients whose aneurysms were identified by other abdominal imaging.\n
\n
Methods: We identified veterans with an AAA screening study usi
ng the two existing Current Procedural Terminology (CPT) codes (G
0389 and 76706). In the comparison group\, eligible abdominal imaging stu
dies included ultrasound\, computed tomography (CT)\, and magnetic resonan
ce imaging (MRI) queried according to CPT codes between 2001 and 2018.
\n
\n
We used a difference-in-differences regression model to eva
luate the change in aneurysm detection rate and all-cause mortality five y
ears before and eleven years after the VA implemented the screening policy
in 2007.
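The core of this design is the two-period\, two-group estimator\, which can be sketched in a few lines (a minimal illustration with made-up rates\, not the study's covariate-adjusted regression):

```python
# Minimal 2x2 difference-in-differences sketch: the change in the treated
# (screened) group minus the change in the control group over the same period.
def did_estimate(pre_treated, post_treated, pre_control, post_control):
    """Return the difference-in-differences effect estimate."""
    return (post_treated - pre_treated) - (post_control - pre_control)

# Hypothetical pre/post AAA detection rates, in percent (placeholders only).
effect = did_estimate(pre_treated=3.0, post_treated=4.8,
                      pre_control=1.2, post_control=1.45)
print(round(effect, 2))  # 1.55 percentage points
```

The actual study model also adjusts for covariates\; this sketch only shows the core subtraction that nets out the secular trend shared by both groups.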
\n
\n
We calculated survival estimates for patients with and without an AAA diagnosis after AAA screening or non-screening imaging\, and used a multivariate Cox regression model to evaluate mortality in patients with a positive AAA diagnosis\, adjusting for patient characteristics and comorbidities.
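For intuition\, the Kaplan-Meier product-limit estimate behind such survival curves can be sketched as follows (toy data\, not the study's code):

```python
# Kaplan-Meier product-limit estimator: survival is multiplied by (1 - d/n)
# at each distinct event time, where d = deaths at that time and n = subjects
# still at risk; censored subjects leave the risk set without an event.
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = death observed, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = count = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            count += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= count
    return curve

# Toy cohort: deaths at t=2 and t=4, one subject censored at t=3.
print(kaplan_meier([2, 3, 4], [1, 0, 1]))  # [(2, 0.666...), (4, 0.0)]
```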
\n
\n
Results: We identified 3.9 million veterans with abdominal imaging\, a total of 303\,664 of whom were coded as having an AAA US screening between 2007 and 2018. An AAA diagnosis was made in 4.84% of the screening group vs. 1.3% in the non-screening imaging group (P<0.001)\, yet more aneurysms were found with general imaging studies (50\,730 vs. 15\,449) (Fig 1).
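A quick back-of-the-envelope check makes the denominator effect behind these figures explicit\; the split of the 3.9 million imaged veterans into screened and non-screened groups is our approximation\, so the computed rates only roughly match the reported 4.84% and 1.3%:

```python
# Figures taken from the text above; the non-screened denominator is estimated
# here as (all imaged veterans) - (screened veterans).
screened = 303_664
screened_aaa = 15_449
nonscreened = 3_900_000 - screened
nonscreened_aaa = 50_730

rate_screened = screened_aaa / screened           # roughly 5%
rate_nonscreened = nonscreened_aaa / nonscreened  # roughly 1.4%

# Higher detection *rate* in the screened group, yet more aneurysms found in
# absolute terms outside it, because the non-screened denominator is more
# than ten times larger.
assert rate_screened > rate_nonscreened
assert nonscreened_aaa > screened_aaa
print(f"{rate_screened:.1%} vs {rate_nonscreened:.1%}")  # 5.1% vs 1.4%
```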
\n
\n
On Kaplan-Meier survival analysis\, patients with an AAA diagnosis had higher overall mortality than patients with a normal screening result\; patients with aneurysms found on non-screening imaging had the highest mortality (log-rank P<0.001) (Fig 2).
\n
\n
The difference-in-differences regression analysis showed that the absolute AAA detection rate was 1.55% higher (95% CI 1.2-1.8) and mortality was 13.89% lower (95% CI 10.18%-16.66%) after the introduction of the screening program in 2007.\n
\n
Multivariate Cox regression analysis in patients with an AAA diagnosis (aged 65-74 years) demonstrated significantly lower 5-year mortality [HR 0.45 (95% CI 0.43-0.48)] for patients in the US screening group (P<0.001).
\n
\n
Conclusions:
In a nationwide analysis of VA patients\, implementation of AAA
screening was associated with improved survival and a higher rate of AAA d
iagnosis. These findings provide further support for this program’s contin
uation versus defaulting to incidental recognition following other abdomin
al imaging.
\n
\n
ABOUT MANUEL GARCIA-TOCA \nDr. Garcia-Toca earned his medical degree at the Universidad Anahuac in Mexico in 1999. He has a master’s degree in Health Policy from Stanford University.
\n
\n
He received his general surgery training at th
e Massachusetts General Hospital and Brown University in 2008. He then com
pleted a Vascular Surgery fellowship at Northwestern University in 2010. D
r. Garcia-Toca is board certified in both surgery and vascular surgery.
\n
\n
Dr. Garcia-Toca joined Stanford Vascular Surgery in 2015. He is currently Clinical Professor of Surgery in the Division of Vascular Surgery\, having previously served as an Assistant Professor of Surgery at Brown University. Dr. Garcia-Toca is a Staff Surgeon at Santa Clara Valley Medical Center in San Jose.
\n
\n
His research
interests include new therapeutic strategies and outcomes for the manageme
nt of vascular trauma\, cerebrovascular diseases\, dialysis access\, aorti
c dissection and aneurysms.
\n
\n
ABOUT OLIVER O. AALA
MI \nDr. Aalami is a Clinical Associate Professor of Vascula
r & Endovascular Surgery at Stanford University and the Palo Alto VA and s
erves as the Lead Director of Stanford’s Biodesign for Digital Health. He
is the course director for Biodesign for Digital Health\, Building for Di
gital Health and co-founder of the open source project\, CardinalKit\, de
veloped to support sensor-based mobile research projects. His primary res
earch focuses on clinically validating the sensors in smartphones and smar
twatches in patients with cardiovascular disease to further precision heal
th implementation.
\n
\n
Hosted by: Garry Gold\, M.D. \nSponsored by the PHIND Center and the Department of Radiology
DTSTART;TZID=America/Los_Angeles:20210420T110000
DTEND;TZID=America/Los_Angeles:20210420T120000
LOCATION:Zoom - See Description for Zoom Link
SEQUENCE:0
SUMMARY:PHIND Seminar – Manuel Garcia-Toca\, M.D. & Oliver O. Aalami\, M.D.
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/phind-se
minar-manuel-garcia-toca-m-d-oliver-o-aalami-m-d/
X-COST-TYPE:external
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2019/10/2020.4_SpeakerMashUp_V2-01-150x150.png
\;150\;150\;1\,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radcal
endar/wp-content/uploads/2019/10/2020.4_SpeakerMashUp_V2-01-300x150.png\;3
00\;150\;1\,large\;http://web.stanford.edu/group/radweb/cgi-bin/radcalenda
r/wp-content/uploads/2019/10/2020.4_SpeakerMashUp_V2-01-1024x513.png\;640\
;321\;1\,full\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp
-content/uploads/2019/10/2020.4_SpeakerMashUp_V2-01.png\;1050\;526\;
X-TICKETS-URL:https://stanford.zoom.us/webinar/register/8616164417003/WN_5z
--vTmvRu6l62kOUd9sZg
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2417@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:Canary Center\,IBIIS\,MIPS\,PHIND\,Radiology\,RSL
CONTACT:Marta Flory\; flory@stanford.edu
DESCRIPTION:
Targeted violence continues against Black Americans\, Asian Americans\, and all people of color. The Department of Radiology Diversity Committee is running a racial equity challenge to raise awareness of systemic racism\, implicit bias\, and related issues. Participants will be provided a list of resources on these topics\, such as articles\, podcasts\, and videos\, from which they can choose\, with the “challenge” of engaging with one to three media sources prior to our session (some videos are as short as a few minutes). Participants will meet in small-group breakout sessions to discuss what they’ve learned and share ideas.
\n
Please reach out to Marta Flory (flory@stanford.edu) with questions. For details about the session\, including recommended resources and the Zoom link\, please contact Meke Faaoso at mfaaoso@stanford.edu.
DTSTART;TZID=America/Los_Angeles:20210430T120000
DTEND;TZID=America/Los_Angeles:20210430T130000
LOCATION:Zoom
SEQUENCE:0
SUMMARY:Racial Equity Challenge: Race in society
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/racial-e
quity-challenge-race-in-society/
X-COST-TYPE:external
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2021/04/shield-150x150.png\;150\;150\;1\,mediu
m\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/upl
oads/2021/04/shield.png\;225\;225\;
X-TICKETS-URL:https://docs.google.com/spreadsheets/d/1ehKqHm32peHcm7NQJ427O
aKIa9JpfHVunjBk66etZGc/edit?usp=sharing
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2421@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:PHIND\,PHIND Seminar Series
CONTACT:Ashley Williams\; ashleylw@stanford.edu
DESCRIPTION:
PHIND
Seminar Series: Multi-Cancer Early Detection Screening Tests – “Liquid
Biopsy Tests” – Are Here – But Will Payers Provide Insurance Coverage?
\n
\n
Patricia A. Deverka\, MD\, MS\, MBE \nExecutive Director \nDeverka Consulting\, LLC
\n
\n
Kathryn A. Phillips\, PhD \nProfessor of Health Economics and Health Services Research \nFounding Director\, UCSF Center for Translational and Policy Research on Personalized Medicine (TRANSPERS)\n
11:00am – 12:00pm Seminar & Discussion \nRSVP Here
\n
\n
ABSTRACT
\nThe emergence of Multi-Cancer Early Detection Screening Tests (MCED) – “
liquid biopsy screening tests” – has generated enormous interest because t
hey could fundamentally shift how cancer screening is done. One company is
already offering an MCED test for clinical use as a “lab developed test”
(LDT) – and thus addressing the question of “who will pay” has become urge
nt. These tests offer potentially transformative screening and clinical be
nefits\, but their characteristics present unique challenges to payer cove
rage decision-making and generate concerns about the potentially high cost
of widespread adoption.
\n
We will present our ongoing work on exami
ning the unique challenges that MCED present for payer coverage decision-m
aking\, drawing on our extensive experience with coverage and reimbursemen
t for new technologies. We will focus on identifying the evidence generati
on strategies that could be pursued now to inform payer decision-making so
that coverage policies can be developed that are appropriate and equitabl
e for this ground-breaking technology.
\n
\n
ABOUT PAT
RICIA A. DEVERKA \nDr. Deverka is the Executive Director at
Deverka Consulting\, LLC where she focuses on helping biotechnology compan
ies and start-ups develop evidence to support payer coverage and clinical
adoption of innovative technologies. Her most recent projects have focuse
d on breakthrough tests and drugs focused on population genomic screening\
, cancer\, and ultra-rare disorders. Prior to starting her consulting pra
ctice\, Dr. Deverka has worked in the fields of health economics and outco
mes research in both non-profit and for-profit settings as a researcher\,
educator\, and department head. She has extensive experience with patient-
centered outcomes research\, drug and diagnostic reimbursement planning\,
cost-effectiveness analysis\, and bioethical issues surrounding the use of new technologies. While working in academia and several non-profit firms
\, she has participated in numerous NIH-funded studies to evaluate policy
barriers to clinical integration of new genomic technologies and has publi
shed extensively on strategies to promote evidence generation and data sha
ring. She is a member of the National Human Genome Research Institute (NHG
RI)’s Genomic Medicine Work Group and serves as a member of NHGRI’s Adviso
ry Council. Deverka has a medical degree from the University of Pittsburgh
and is board certified in General Preventive Medicine and Public Health.
She also has a master’s degree in bioethics from the University of Pennsy
lvania and completed a policy fellowship at Duke University’s Institute fo
r Genome Sciences and Policy.
\n
\n
ABOUT KATHRYN A. P
HILLIPS \nKathryn A. Phillips founded and leads the UCSF
Center for Translational and Policy Research on Personalized Medicine (TRANSPERS)\, which focuses on developing objective evidence on how to e
ffectively\, efficiently\, and equitably implement precision/personalized
medicine into health care. Kathryn has published over 150 peer-reviewed ar
ticles in major journals including JAMA\, New England Journal
of Medicine\, Science\, and Health Affairs. She ha
s had continuous funding from NIH as a PI for over 25 years and was recent
ly awarded a 5-year NIH grant to examine payer coverage and economic value
for emerging genomic technologies (cell-free DNA tests and tests based on
polygenic risk scores). Kathryn serves on the editorial boards for He
alth Affairs\, Value in Health\, JAMA Internal Medicine<
/em>\, Genetics in Medicine\; is a member of the National Academy
of Medicine Roundtable on Genomics and Precision Health\; and has served
on the governing Board of Directors for GenomeCanada and as an advisor to
the FDA\, CDC\, and the President’s Council of Advisors on Science and Tec
hnology. She has also served as an advisor to many diagnostics\, sequencin
g\, and pharmaceutical companies. Kathryn is Chair of the Global Econo
mics and Evaluation of Clinical Sequencing Working Group\, and a memb
er of an evidence review committee for the Institute for Clinical and
Economic Review (ICER).
\n
\n
\n<
p>Hosted by: Garry Gold\, M.D. \nSponsored by the PHIND
Center and the Department of Radiology\n
DTSTART;TZID=America/Los_Angeles:20210518T110000
DTEND;TZID=America/Los_Angeles:20210518T120000
LOCATION:Zoom - See Description for Zoom Link
SEQUENCE:0
SUMMARY:PHIND Seminar – Patricia A. Deverka\, MD\, MS\, MBE & Kathryn A. Ph
illips\, PhD
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/phind-se
minar-patricia-a-deverka-md-ms-mbe-kathryn-a-phillips-phd/
X-COST-TYPE:external
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2019/10/2020.5_SpeakerMashUp-150x126.jpg\;150\
;126\;1\,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/
wp-content/uploads/2019/10/2020.5_SpeakerMashUp.jpg\;252\;126\;
X-TICKETS-URL:https://stanford.zoom.us/webinar/register/9516200549922/WN_q4
_OV6KhRe6MKb_cPEC3GQ
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2603@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:IMMERS Series\,SMMR
CONTACT:Steffi Perkins\; slp979@stanford.edu\; https://med.stanford.edu/imm
ers/smmr.html
DESCRIPTION:
Join us
for a panel on Behavioral XR on Thursday\, June 3rd from 9:00 –
10:30 am PDT. The event will start with a one-hour panel discussion featuring Dr. Elizabeth
McMahon\, a psychologist with a private practice in California\; Sarah Hill of Healium\, a company developing XR apps for
mental fitness based in Missouri\; Christian Angern of Sympat
ient\, a company developing VR for anxiety therapy based in Germany\;
and Marguerite
Manteau-Rao of Penumbra\, a
medical device company based in California. This panel will be moderated
by Dr. Walt
er Greenleaf of Stanford’s Virtua
l Human Interaction Lab (VHIL) and Dr. Christoph Leuze of the Stanford Medical Mixed Reality (SMMR) pro
gram. Immediately following the panel discussion\, you are also invited t
o a 30-minute interactive session with the panelists where questions and i
deas can be explored in real time.
\n
\n
Reg
ister here to save your place now! After registering\, you will r
eceive a confirmation email containing information about joining the meeti
ng.
\n
\n
Please visit thi
s page to subscribe to our events mailing list.
\n
\n
Sponsored by Stanford Medical Mixed Reality (SMMR)
\n
Tickets: https://stanford.zoom.us/meeting/register/tJEvf-ioqTwvHNC2DABwGFESBe71rC6G6qV-
PHIND Seminar Series: Pervasive Computing With Everyday Devic
es To Build & Sustain Resilience & Wellbeing
\n
Pablo E. Paredes\, Ph
D \nClinical Assistant Professor\, Psychiatry and Behavi
oral Sciences and\, by courtesy\, Epidemiology and Population Health
\nStanford University
ABSTRACT \nAs society progresses towards increasi
ng pervasive computing levels\, I design and build technology-enabled solu
tions to repurpose everyday devices to help people build resilience and gr
ow wellbeing. I leverage biological and behavioral knowledge to design sys
tems that balance user needs and health outcomes while mitigating surveill
ance and agency risks. In this talk\, I present my research on efficacious
and engaging sensors and interventions necessary in the population and pu
blic health domains. I share a series of research projects exploring and v
alidating novel ideas on passive sensors – less dependent on subjective su
rveys or wearables – and subtle interventions that minimize workflow disr
uption. I show the promise of repurposing existing signals from computing
peripherals (i.e.\, mouse and trackpad) or cars (steering wheel) into “sen
sorless” sensors and repurposing existing media as just-in-time micro-inte
rventions that can work across multiple scenarios and populations. I discu
ss how these data could be used in collaboration with domain experts to st
udy topics as varied as the interaction between stress and productivity in
office workers\, burnout prevention among clinical practitioners\, or the
prevention of depression among rural health workers. Finally\, grounded in theories from neuroscience and behavioral economics\, I propose the evolution of everyday “mundane” devices\, such as chairs\, desks\, cars\, or even urban lights\, into adaptive and autonomous wellbeing-optimizing interventions. I close with a discussion of the research needed to systematically study ethics in pervasive technology for resilience and wellbeing.
\n
\n
ABOUT \nPablo Paredes earned his Ph.
D. in Computer Science from the University of California\, Berkeley\, in 2
015 with Prof. John Canny. He is currently a Clinical Assistant Professor
in the Psychiatry and Behavioral Sciences Department and the Epidemiology
and Population Health Department (by courtesy) at the Stanford University
School of Medicine. He leads the Pervasive Wellbeing Technology Lab\, whic
h houses a diverse group of students from multiple departments such as com
puter science\, electrical engineering\, mechanical engineering\, anthropo
logy\, neuroscience\, and linguistics. Before joining the School of Medici
ne\, Dr. Paredes was a Postdoctoral Researcher in the Computer Science Dep
artment at Stanford University with Prof. James Landay. During his Ph.D. c
areer\, he held internships on behavior change and affective computing at
Microsoft Research and Google. He has been an active associate editor for
the Interactive\, Mobile\, Wearable and Ubiquitous Technologies journal (IMWUT) and a reviewer and editor for multiple top CS and medical journals.
Before 2010\, he was a senior strategic manager with Intel in Sao Paulo\,
Brazil\, a lead product manager with Telefonica in Quito\, Ecuador\, and a
n entrepreneur in his native Ecuador and\, more recently\, in the US. In t
hese roles\, he has had the opportunity to hire and closely evaluate desig
ners\, engineers\, business people\, and researchers in telecommunications
and product development. During his academic career\, Dr. Paredes has adv
ised close to 40 mentees\, including postdocs\, Ph.D.\, master’s\, and und
ergraduate students\, collaborated with colleagues from multiple departmen
ts across engineering\, medicine\, and the humanities\, and raised funding
from NSF\, NIH\, and large multidisciplinary intramural research projects
.
\n
\n
Hosted by: Garry Gold\, M.D. \nSpons
ored by the PHIND Center and the Department of Radiology
DTSTART;TZID=America/Los_Angeles:20210623T151500
DTEND;TZID=America/Los_Angeles:20210623T161500
LOCATION:Zoom - See Description for Zoom Link
SEQUENCE:0
SUMMARY:“The Invisible Future of Health Monitoring” – PHIND & CDH Seminar
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/the-invi
sible-future-of-health-monitoring-phind-cdh-seminar/
X-COST-TYPE:external
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2019/10/June-23rd-webinar-external-calendar-ti
le-01-150x150.png\;150\;150\;1\,medium\;http://web.stanford.edu/group/radw
eb/cgi-bin/radcalendar/wp-content/uploads/2019/10/June-23rd-webinar-extern
al-calendar-tile-01-300x205.png\;300\;205\;1\,large\;http://web.stanford.e
du/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2019/10/June-23rd-w
ebinar-external-calendar-tile-01-1024x700.png\;640\;438\;1\,full\;http://w
eb.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2019/1
0/June-23rd-webinar-external-calendar-tile-01.png\;1251\;855\;
X-TICKETS-URL:https://stanford.zoom.us/webinar/register/7016228432975/WN_7R
pA06gIQICRCH6bzQjt3w
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2803@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:AIMI\,IBIIS\,Radiology\,RSL
CONTACT:
DESCRIPTION:
\n
Radiology Department-Wide Research
Meeting
\n
• Research Announcements \n• Mirabela Rusu\,
PhD – Learning MRI Signatures of Aggressive Prostate Cancer: Bridging the
Gap between Digital Pathologists and Digital Radiologists \n• Akshay
Chaudhari\, PhD – Data-Efficient Machine Learning for Medical Imaging
11:00am – 12
:00pm Seminar & Discussion \nRSVP Here
\n
\n
ABSTRACT \nThe continuous monitoring of human health can greatly benefit from devices that can be worn comfortably or seamlessly integrated into household objects\, constituting “health-centered” domotics. One key aspect of making these devices successful is that they be invisibly integrated\, disappearing into the background of our lives.
Our group works on thin film devices made with plastic materials that can
be used for electrochemical sensing of common analytes from easily accessible bodily fluids (e.g. sweat\, saliva\, urine) and can be easily multiplexed. I will describe electrochemical transistors that detect ionic species either directly present in body fluids or resulting from a selective enzymatic reaction (e.g. ammonia from creatinine) at physiological levels. I
will also show that non-charged molecules can be detected by making use o
f custom-processed polymer membranes that act as “synthetic enzymes”. Usin
g these membranes in conjunction with electrochemical transistors we demon
strate that we are able to measure physiological levels of cortisol in rea
l human sweat. Importantly\, transistors can amplify signals and I will sh
ow what architectures must be used to observe 1000x amplification of sensi
ng currents.
\n
Finally\, we have developed a process that allows us to fabricate sensor arrays on flexible substrates\, thereby opening the door towards ultra-thin\, flexible sensor arrays for wearable technologies.
\n
\n
ABOUT \nAlberto Salleo is currently Full Professor of Materials Science and Department Chair at Stanford University. He holds a Laurea degree in Chemistry from La Sapienza and graduated as a Fulbright Fellow with a PhD in Materials Science from UC Berkeley in 2001. From 2001 to 2005\, Salleo was first a post-doctoral research fellow and subsequently a member of the research staff at Xerox Palo Alto Research Center. In 2005\, Salleo joined the Materials Science and Engineering Department at Stanford as an Assistant Professor. He has been a Principal Editor of MRS Communications since 2011. While at Stanford\, Salleo won the NSF Career Award\, the 3M Untenured Faculty Award
\, the SPIE Early Career Award\, the Tau Beta Pi Excellence in Undergradua
te Teaching Award\, and the Gores Award for Excellence in Teaching\, Stanf
ord’s highest teaching award. He has been a Thomson Reuters Highly Cit
ed Researcher since 2015\, recognizing that he ranks in the top 1% ci
ted researchers in his field.
\n
\n
Hosted by: Garry Gold\
, M.D. \nSponsored by the PHIND Center and the Department of
Radiology
DTSTART;TZID=America/Los_Angeles:20210720T110000
DTEND;TZID=America/Los_Angeles:20210720T120000
LOCATION:Zoom - See Description for Zoom Link
SEQUENCE:0
SUMMARY:PHIND Seminar – Alberto Salleo\, Ph.D.
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/phind-se
minar-alberto-salleo-ph-d/
X-COST-TYPE:external
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2019/10/alberto-salleo_profilephoto-150x150.jp
g\;150\;150\;1\,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radca
lendar/wp-content/uploads/2019/10/alberto-salleo_profilephoto-300x300.jpg\
;300\;300\;1\,large\;http://web.stanford.edu/group/radweb/cgi-bin/radcalen
dar/wp-content/uploads/2019/10/alberto-salleo_profilephoto.jpg\;350\;350\;
X-TICKETS-URL:https://stanford.zoom.us/webinar/register/2816249009305/WN_lU
ezgp98RMKzD7rC6oeRFg
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2809@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:AIMI\,Annual Conferences
CONTACT:AIMI Center\; aimicenter@stanford.edu\; https://aimi.stanford.edu/n
ews-events/aimi-symposium/overview
DESCRIPTION:
Stanford AIMI Director Curt Langlotz and Co-Directors
Matt Lungren and Nigam Shah invite you to join us on August 3 for the 2021 Stanford C
enter for Artificial Intelligence in Medicine and Imaging (AIMI) Symposium. The virtual symposium will focus on
the latest\, best research on the role of AI in diagnostic excellence acro
ss medicine\, current areas of impact\, fairness and societal impact\, and
translation and clinical implementation. The program includes talks\, int
eractive panel discussions\, and breakout sessions. Registration is free a
nd open to all.
\n
\n
Also\, the 2nd Annual BiOethics\, the Law\, and Data-sharing: AI in Radiology (BOLD-AI
R) Summit will be held on August 4\,
in conjunction with the AIMI Symposium. The summit will convene a broad r
ange of speakers in bioethics\, law\, regulation\, industry groups\, and p
atient safety and data privacy\, to address the latest ethical\, regulator
y\, and legal challenges regarding AI in radiology.
There is a growing population of over 10 million Americans who live with an elevated risk of stroke.
\n
Each year\, approximately 1 million Americans survive a stroke or a ministroke\, often left with debilitating effects. A more disabling stroke frequently follows these initial events\, leaving patients and their families scarred for life.
\n
TIME = BRAIN. Early hospital presentation is the most critical determinant of good stroke outcomes. However\, most patients arrive at the hospital hours after the event\, with less than 10% receiving any form of treatment (thrombolysis/thrombectomy).
\n
As a result\, at-risk individuals struggle daily with the fear that a stroke might happen at night or when they are alone. Unfortunately\, a stroke that goes unnoticed for hours is most often not treatable due to the lack of salvageable tissue.
\n
To alleviate that fear\, we are creating an AI-powered smart headband that analyzes brain waves to detect the onset of an event immediately and alert the patient\, caregivers\, and 911.
\n
Our stroke dete
ction AI has already been shown to detect ischemia during high-risk surger
ies with 90% sensitivity and no false positives.
\n
We have received
FDA breakthrough designation for our solution and are currently running a
pilot human factors and signal quality study.
\n
Our vision is to pro
vide peace of mind and optimal brain health for everyone.
\n
\n
ABOUT \nOrestis is the CEO and Co-founder of Zeit Medical\, a telehealth company that offers at-home monitoring and alert solutions for patients at risk of stroke. Prior to starting Zeit\, Orestis was a Stanford Biodesign Innovation Fellow\, where his team developed the initial idea for at-home stroke detection. Orestis trained as a Mechanical Engineer at Aristotle University\, Greece\, earned his PhD in Biotechnology and Bioengineering at EPFL\, Switzerland\, and conducted cutting-edge research in flexible wearable electronics with the Bao Group at Stanford Chemical Engineering. He has authored more than twenty publications in prestigious journals and has filed for a variety of patents at the intersection of materials technology and medical devices. Orestis currently lives in San Francisco\, where he also contributes to the UCSF-Stanford pediatric device consortium as a technology advisor. He also maintains close ties with the med-tech and health-tech communities in Switzerland and Greece\, contributing to regional Biodesign educational workshops.\n
\n
<
em>Hosted by: Garry Gold\, M.D. \nSponsored by the PHIND Cen
ter and the Department of Radiology
Positron emission tomography (PET) allows for sensitive and quantitative measurement of physiology\, metabolism\, and molecular targets noninvasively in the human body. However\, typical clinical PET scanners capture less than 1% of the available signal produced in the body. PET scanners are also not currently capable of precisely determining the location at which a particular decay occurs. These limitations present opportunities for further innovation that ultimately will impact molecular imaging research and diagnostic imaging with PET. This presentation focuses on 1) total-body PET
imaging\, which greatly improves signal collection\, allowing radiotracer kinetics to be assessed across the entire human body for the first time\, and 2) the development of detector technologies that have a timing precision of ~30 picoseconds\, enabling direct localization of radiotracer decays
without tomographic reconstruction.
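The localization claim follows from simple arithmetic: with a coincidence timing precision dt\, the decay position along the line of response is constrained to about c*dt/2. A sketch:

```python
# Back-of-the-envelope localization from timing precision: annihilation
# photons travel at the speed of light, so a coincidence-timing uncertainty
# dt pins the decay to within c * dt / 2 along the line of response.
C = 299_792_458.0   # speed of light, m/s
dt = 30e-12         # ~30 ps coincidence timing precision, s
uncertainty_mm = C * dt / 2 * 1000
print(round(uncertainty_mm, 1))  # 4.5 (mm)
```

At ~4.5 mm\, the timing information alone approaches the spatial resolution of clinical scanners\, which is why such detectors could localize decays without tomographic reconstruction.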
\n
\n
BIO
\n
Simon R. Cherry\, Ph.D. received his B.Sc.(Hons) in Physics wit
h Astronomy from University College London in 1986 and a Ph.D. in Medical
Physics from the Institute of Cancer Research\, University of London in 19
89. After a postdoctoral fellowship at UCLA\, he joined the faculty in th
e Department of Molecular and Medical Pharmacology\, also at UCLA\, in 199
3. In 2001\, Dr. Cherry joined UC Davis and established the Center for Mol
ecular and Genomic Imaging\, which he directed from 2004-2016. Currently D
r. Cherry is Distinguished Professor in the Departments of Biomedical Engi
neering and Radiology at UC Davis.
\n
Dr. Cherry’s research interests
center around biomedical imaging and in particular the development and ap
plication of in vivo molecular imaging systems. His major accomplishments
have been in developing systems for positron emission tomography (PET)\, in particular the invention of the microPET technology that was subsequently widely adopted in academia and industry\, and in co-leading the EXPLORER consortium\, which has developed the world’s first total-body PET scanner.
He also has contributed to detector technology innovations for PET\, con
ducted early biomedical studies using Cerenkov luminescence\, and develope
d the first proof-of-concept hybrid PET/MRI (magnetic resonance imaging) s
ystems.
\n
Dr. Cherry is a founding member of the Society of Molecula
r Imaging and an elected fellow of six professional societies\, including
the Institute of Electrical and Electronics Engineers (IEEE) and the Biomedical Engineering Society (BMES). He served as Editor-in-Chief of the journal Physics in Medicine and Biology from 2011-2020. Dr. Cherry received the Academy of Molecular Imaging Distinguished Basic Scientist Award (2007)\, the Society for Molecular Imaging Achievement Award (2011)\, and the IEEE
Marie Sklodowska-Curie Award (2016). In 2016\, he was elected as a membe
r of the National Academy of Engineering and in 2017 he was elected to the
National Academy of Inventors. Dr. Cherry is the author of more than 240
peer-reviewed journal articles\, review articles and book chapters in the
field of biomedical imaging. He is also lead author of the widely used textbook “Physics in Nuclear Medicine”.
DTSTART;TZID=America/Los_Angeles:20210910T120000
DTEND;TZID=America/Los_Angeles:20210910T130000
LOCATION:LKSC 101/102 & Zoom - See Description for Zoom Link @ 291 Campus D
rive\, Stanford\, CA 94305
SEQUENCE:0
SUMMARY:CME Grand Rounds Sanjiv Sam Gambhir Lectureship – Simon Cherry\, Ph
D
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/cme-gran
d-rounds-sanjiv-sam-gambhir-lectureship-simon-cherry-phd/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2021/07/simon_cherry_website-150x150.jpg\;150\
;150\;1\,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/
wp-content/uploads/2021/07/simon_cherry_website-269x300.jpg\;269\;300\;1\,
large\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content
/uploads/2021/07/simon_cherry_website.jpg\;520\;580\;
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-1645@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:PHIND\,PHIND Seminar Series
CONTACT:Ashley Williams\; ashleylw@stanford.edu
DESCRIPTION:
PHIND Seminar Series: Towards precision diagnostic and prediction
of food allergy
\n
Sindy KY Tang\, Ph.D. \nAssociate Profess
or of Mechanical Engineering\, Senior Fellow at the Woods Institute for th
e Environment and Professor\, by courtesy\, of Radiology – PHIND Center \nStanford University
11:00am – 1
2:00pm Seminar & Discussion \nRSVP Here
\n
\n
ABSTRACT \nFood allergy has reached epide
mic proportions. Accurate\, efficient\, and easy-to-use in vitro methods for identifying offending food allergens are lacking. Oral food challenge\
, the gold standard for food allergy assessment\, is often not performed a
s it places the patient at risk of anaphylaxis. As such\, food allergy is
often identified only after an adverse reaction that could be life-threate
ning. Our long-term goal is to develop a food allergy diagnostic test that
is accurate\, safe\, rapid\, and accessible\, so that food allergy can be
easily identified prior to the occurrence of an adverse reaction\, and th
at the efficacy of immunotherapy for food allergy can be tracked more effe
ctively. This talk will discuss our recent work on developing such a test.
Our approach is based on the Basophil Activation Test (BAT)\, which measu
res the activation of basophils in whole blood after stimulation with spec
ific food allergens ex vivo. The BAT has been shown to be highly predictiv
e of allergic reactions. However\, the need for flow cytometry has limited
its broader use. We are developing a miniaturized\, standalone version of
the BAT. We envision that the test can be used at the point of care\, suc
h as the doctor’s office or at a local pharmacy.
\n
\n
ABOUT \nProf. Sindy KY Tang is the Kenneth and Barbara Oshm
an Faculty Scholar and Associate Professor of Mechanical Engineering and b
y courtesy of Radiology (Precision Health and Integrated Diagnostics) at S
tanford University. She received her Ph.D. from Harvard University in Engi
neering Sciences under the supervision of Prof. George Whitesides. Her lab
at Stanford works on the fundamental understanding of fluid mechanics and
mass transport in micro-nano systems\, and the application of this knowle
dge towards problems in biology\, rapid diagnostics for health and environ
mental sustainability. The current areas of focus include the flow physics
of confined micro-droplets using experimental and machine learning method
s\, interfacial mass transport and self-assembly\, and ultrahigh throughpu
t opto-microfluidic systems for disease diagnostics\, water and energy sus
tainability\, and single-cell wound healing studies. She was a Stanford Bi
odesign Faculty Fellow in 2018. Dr. Tang’s work has been recognized by mul
tiple awards including the NSF CAREER Award\, 3M Nontenured Faculty Award\
, the ACS Petroleum Fund New Investigator Award\, and an invited lecture at t
he Nobel Symposium on Microfluidics in Sweden. Website: http://web.stanford.edu/group/tanglab/<
/p>\n
\n
Hosted by: Garry Gold\, M.D. \nSponsor
ed by the PHIND Center and the Department of Radiology
DTSTART;TZID=America/Los_Angeles:20210921T110000
DTEND;TZID=America/Los_Angeles:20210921T120000
LOCATION:Zoom - See Description for Zoom Link
SEQUENCE:0
SUMMARY:PHIND Seminar – Sindy KY Tang\, Ph.D.
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/phind-se
minar-sindy-ky-tang-ph-d/
X-COST-TYPE:external
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2019/10/sindy-tang_profilephoto-150x150.jpg\;1
50\;150\;1\,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalend
ar/wp-content/uploads/2019/10/sindy-tang_profilephoto-300x300.jpg\;300\;30
0\;1\,large\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-c
ontent/uploads/2019/10/sindy-tang_profilephoto.jpg\;350\;350\;
X-TICKETS-URL:https://stanford.zoom.us/webinar/register/1216286302579/WN_3i
FMsumAT9iKlV5G1Vr9zA
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2989@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; http://ibiis.stan
ford.edu/events/seminars/2021seminars.html
DESCRIPTION:
\n\n
Regina Barzilay\, PhD \nScho
ol of Engineering Distinguished Professor for AI and Health \nElectri
cal Engineering and Computer Science Department \nAI Faculty Lead at
Jameel Clinic for Machine Learning in Health \nComputer Science and A
rtificial Intelligence Lab \nMassachusetts Institute of Technology
\n
Abstract: \nIn this talk\, I will present methods for assessing future cancer risk from medical images. The discussion will explor
e alternative ways to formulate the risk assessment task and focus on algo
rithmic issues in developing such models. I will also discuss our experien
ce in translating these algorithms into clinical practice in hospitals aro
und the world.
DTSTART;TZID=America/Los_Angeles:20210922T110000
DTEND;TZID=America/Los_Angeles:20210922T120000
LOCATION:Zoom: https://stanford.zoom.us/j/99474772502?pwd=NEQrQUQ0MzdtRjFiY
U42TCs2bFZsUT09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: Seeing the Future from Images: ML-Based Model
s for Cancer Risk Assessment
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-seeing-the-future-from-images-ml-based-models-for-cancer-risk-a
ssessment/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2021/08/regina-300x300.jpeg\;300\;300\,medium\
;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploa
ds/2021/08/regina-300x300.jpeg\;300\;300\,large\;http://web.stanford.edu/g
roup/radweb/cgi-bin/radcalendar/wp-content/uploads/2021/08/regina-300x300.
jpeg\;300\;300\,full\;http://web.stanford.edu/group/radweb/cgi-bin/radcale
ndar/wp-content/uploads/2021/08/regina-300x300.jpeg\;300\;300
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2885@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:CME\,CME Radiology Grand Rounds\,Radiology
CONTACT:Tricia Hatcliff\; 650-498-7359\; thatcliff@stanford.edu
DESCRIPTION:
CME Grand Rounds D
iversity Lectureship – Topic: TBD
\n
\n
Jenni
fer L. Eberhardt\, PhD \nProfessor \nPsychology \n
Stanford University
DTSTART;TZID=America/Los_Angeles:20210924T120000
DTEND;TZID=America/Los_Angeles:20210924T130000
LOCATION:Zoom - See Description for Zoom Link
SEQUENCE:0
SUMMARY:CME Grand Rounds Diversity Lectureship – Jennifer L. Eberhardt\, Ph
D
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/cme-gran
d-rounds-diversity-lectureship-jennifer-l-eberhardt-phd/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2021/07/download-150x150.jpg\;150\;150\;1\,med
ium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/u
ploads/2021/07/download.jpg\;214\;236\;
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-2993@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/retreat/2021Hybrid.html
DESCRIPTION:
Keynote:
\n
Self-Supervision for Learning from the Bot
tom Up
\n
Why do self-supervised learning? A common answer is: “beca
use data labeling is expensive.” In this talk\, I will argue that there ar
e other\, perhaps more fundamental reasons for working on self-supervision
. First\, it should allow us to get away from the tyranny of top-down sema
ntic categorization and force meaningful associations to emerge naturally
from the raw sensor data in a bottom-up fashion. Second\, it should allow
us to ditch fixed datasets and enable continuous\, online learning\, which
is a much more natural setting for real-world agents. Third\, and most in
triguingly\, there is hope that it might be possible to force a self-super
vised task curriculum to emerge from first principles\, even in the absenc
e of a pre-defined downstream task or goal\, similar to evolution. In this
talk\, I will touch upon these themes to argue that\, far from running it
s course\, research in self-supervised learning is only just beginning.
DTSTART;TZID=America/Los_Angeles:20211112T120000
DTEND;TZID=America/Los_Angeles:20211112T130000
LOCATION:Zoom - See Description for Zoom Link
SEQUENCE:0
SUMMARY:CME Grand Rounds – Michael Gisondi\, MD
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/cme-gran
d-rounds-michael-gisondi-md/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2021/07/michael-gisondi_profilephoto-150x150.j
pg\;150\;150\;1\,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radc
alendar/wp-content/uploads/2021/07/michael-gisondi_profilephoto-300x300.jp
g\;300\;300\;1\,large\;http://web.stanford.edu/group/radweb/cgi-bin/radcal
endar/wp-content/uploads/2021/07/michael-gisondi_profilephoto.jpg\;350\;35
0\;
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-1703@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:PHIND\,PHIND Seminar Series
CONTACT:Ashley Williams\; ashleylw@stanford.edu
DESCRIPTION:
PHIND Seminar Series: Male Infertility and the Future Risk of Vascul
ar and CV Disease
\n
Michael Eisenberg\, M.D. \nAssociate P
rofessor of Urology and\, by courtesy\, of Obstetrics and Gynecology
\nStanford University Medical Center
\n
\n
Gary M. Shaw\, Ph.D.\nNICU Nurses Professor and Professor\, by courtesy\, of Health Resear
ch and Policy (Epidemiology) and of Obstetrics and Gynecology (Maternal Fe
tal Medicine) \nStanford University
Saeed Hassanpour\, PhD \nAssociate
Professor of Biomedical Data Science \nAssociate Professor of Epidem
iology \nAssociate Professor of Computer Science \nDartmouth Gei
sel School of Medicine
\n
Deep Learning for Histology Images
Analysis
\n
Abstract: \nWith the recen
t expansions of whole-slide digital scanning\, archiving\, and high-throug
hput tissue banks\, the field of digital pathology is primed to benefit si
gnificantly from deep learning technology. This talk will cover several ap
plications of deep learning for characterizing histopathological patterns
on high-resolution microscopy images for cancerous and precancerous lesion
s. Furthermore\, the current challenges for building deep learning models
for pathology image analysis will be discussed and new methodological adva
nces to address these bottlenecks will be presented.
\n
About
:
\n
Dr. Saeed Hassanpour is an Associate Professor in the D
epartments of Biomedical Data Science\, Computer Science\, and Epidemiolog
y at Dartmouth College. His research is focused on machine learning and mu
ltimodal data analysis for precision health. Dr. Hassanpour has led multip
le NIH-funded research projects\, which resulted in novel machine learning
and deep learning models for medical image analysis and clinical text min
ing to improve diagnosis\, prognosis\, and personalized therapies. Before
joining Dartmouth\, he worked as a Research Engineer at Microsoft. Dr. Has
sanpour received his Ph.D. in Electrical Engineering with a minor in Biome
dical Informatics from Stanford University and completed his postdoctoral
training at Stanford Center for Artificial Intelligence in Medicine & Imag
ing.
Indrani Bhattacharya\, PhD \nPostdoctoral Research Fellow \nDepartment of Radiology \nStanfo
rd University
\n
Title: Multimodal Data Fusion for S
elective Identification of Aggressive and Indolent Prostate Cancer on Magn
etic Resonance Imaging
\n
Abstract: Automated method
s for detecting prostate cancer and distinguishing indolent from aggressiv
e disease on Magnetic Resonance Imaging (MRI) could assist in early diagno
sis and treatment planning. Existing automated methods of prostate cancer
detection mostly rely on ground truth labels with limited accuracy\, ignor
e disease pathology characteristics observed on resected tissue\, and cann
ot selectively identify aggressive (Gleason Pattern≥4) and indolent (Gleas
on Pattern=3) cancers when they co-exist in mixed lesions. This talk will
cover multimodal and multi-scale fusion approaches to integrate radiology
images\, pathology images\, and clinical domain knowledge about prostate c
ancer distribution to selectively identify and localize aggressive and ind
olent cancers on prostate MRI.
\n\n
Rogier van der Sluijs\, PhD \nPostd
octoral Research Fellow \nDepartment of Radiology \nStanford Uni
versity
\n
Title: Pretraining Neural Networks for Me
dical AI
\n
Abstract: Transfer learning has quickly
become standard practice for deep learning on medical images. Typically\,
practitioners repurpose existing neural networks and their corresponding w
eights to bootstrap model development. This talk will cover several method
s to pretrain neural networks for medical tasks. The current challenges fo
r pretraining neural networks in Radiology will be discussed and recent ad
vancements that address these bottlenecks will be highlighted.
Nina Kottler\, MD
\, MS \nAssociate Chief Medical Officer\, Clinical AI \nVP C
linical Operations \nRadiology Partners
\n
Abstract: \nWe have a call to action in healthcare – we need to drive val
ue. Artificial intelligence (AI)\, if deployed correctly\, can help accom
plish this lofty mission. In this discussion we will review the following
lessons learned in deploying radiology AI at scale: 4 unexpected benefit
s of implementing AI emergent finding triage\; the importance of investing
in AI radiologist education\; how “most” AI needs to be incorporated into
the radiologist workflow\; why a platform is required to deploy AI at sca
le and what a modern platform looks like\; how to use AI to add value to y
our data\; and\, as Dr. Curt Langlotz famously said\, why rads (practices)
who use AI will replace those who don’t (a depiction of what the role of
the radiologist might look like in a tech enabled future).
\n
Bio: \nDr. Kottler has been a practicing radiologist specia
lizing in emergency imaging for over 16 years. Combining her clinical exp
erience with a graduate degree in applied mathematics\, she has been using
technological innovation to drive value in radiology. As the first radio
logist to join Radiology Partners\, Dr. Kottler has held multiple leadersh
ip positions within her practice and is currently the Associate Chief Medi
cal Officer for Clinical AI. Externally Dr. Kottler serves on multiple co
mmittees for the ACR\, RSNA\, and SIIM. Dr. Kottler is also passionate ab
out promoting diversity and creating a culture of belonging. As such she
is a member of the AAWR\, is a member of the diversity and inclusion commi
ttee at SIIM\, serves on the steering committee for RAD=\, and leads the e
ducation and development division of the Belonging Committee within Radiol
ogy Partners.
Spyridon (Spyros) Bakas\,
PhD \nAssistant Professor in the Department of Pathology\,
\nLaboratory Medicine\, and of Radiology \nCenter for Biomedical Imag
e Computing and Analytics (CBICA) \nPerelman School of Medicine
\nUniversity of Pennsylvania
\n
Title: Imaging Analytics for N
euro-Oncology: \nTowards Computational Diagnostics
\n
Ab
stract: Central nervous system (CNS) tumors come with vastly hete
rogeneous histologic\, molecular\, and radiographic landscapes\, rendering
their precise characterization challenging. The rapidly growing fields of
biophysical modeling and radiomics have shown promise in better character
izing the molecular\, spatial\, and temporal heterogeneity of tumors. Inte
grative analysis of CNS tumors\, including clinically acquired multi-param
etric magnetic resonance imaging (mpMRI)\, assists in identifying macrosco
pic quantifiable tumor patterns of invasion and proliferation\, potentiall
y leading to improved (a) detection/segmentation of tumor subregions and (
b) computer-aided diagnostic/prognostic/predictive modeling. This talk wil
l touch upon example studies in this space\, as well as an overview of the
largest to-date real-world federated learning study to detect brain tumor
boundaries.
Harini Veeraraghavan\, PhD \nAssociat
e Attending Computer Scientist \nDepartment of Medical Physics
\nMemorial Sloan-Kettering Cancer Center
\n
Using AI for Long
itudinal Tumor Response Monitoring and AI Guided Cancer Treatments: From L
ab to Clinic
\n
Abstract: \nCancer pat
ients are imaged with multiple imaging modalities as part of routine cance
r care. However\, the rich information available from the images is not fully exploited to better manage patient care through earlier intervention as well as more precise targeted treatments. In this talk\, I will present some of the new AI methodologies we have been developing to track tumor response from routinely acquired imaging\, as well as their application to image-guided radiation treatments using CT/cone-beam CT and MRI-guided precision treatments. I will also present demonstration studies of how AI-based automated segmentation and assessment of changes in tumors and healthy tissue can be used to detect treatment toxicities early\, enabling clinicians to better manage cancer care. Finally\, I will show how these methods have been put into routine clinical use for automating radiotherapy treatment planning at MSK.
DTSTART;TZID=America/Los_Angeles:20220316T120000
DTEND;TZID=America/Los_Angeles:20220316T130000
LOCATION:ZOOM: https://stanford.zoom.us/j/99319571697?pwd=c2lhRkN4cXEzTzFzM
UhKaTVJMHZLQT09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: Using AI for Longitudinal Tumor Response Moni
toring and AI Guided Cancer Treatments: From Lab to Clinic
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-using-ai-for-longitudinal-tumor-response-monitoring-and-ai-guid
ed-cancer-treatments-from-lab-to-clinic/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2022/03/harini-veeraraghavan_15_1200x800.jpg\;
200\;200\,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar
/wp-content/uploads/2022/03/harini-veeraraghavan_15_1200x800.jpg\;200\;200
\,large\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-conte
nt/uploads/2022/03/harini-veeraraghavan_15_1200x800.jpg\;200\;200\,full\;h
ttp://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploads
/2022/03/harini-veeraraghavan_15_1200x800.jpg\;200\;200
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3071@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; rtotah@stanford.edu\; https://ibiis.stanford.edu/even
ts/seminars/2022seminars.html
DESCRIPTION:\n
Spyridon (Spyros) Bakas\,
PhD \nAssistant Professor in the Department of Pathology\,
\nLaboratory Medicine\, and of Radiology \nCenter for Biomedical Imag
e Computing and Analytics (CBICA) \nPerelman School of Medicine
\nUniversity of Pennsylvania
\n
Title: Imaging Analytics for N
euro-Oncology: Towards Computational Diagnostics
\n
Central nervous s
ystem (CNS) tumors come with vastly heterogeneous histologic\, molecular\,
and radiographic landscapes\, rendering their precise characterization ch
allenging. The rapidly growing fields of biophysical modeling and radiomic
s have shown promise in better characterizing the molecular\, spatial\, an
d temporal heterogeneity of tumors. Integrative analysis of CNS tumors\, i
ncluding clinically acquired multi-parametric magnetic resonance imaging (
mpMRI)\, assists in identifying macroscopic quantifiable tumor patterns of
invasion and proliferation\, potentially leading to improved (a) detectio
n/segmentation of tumor subregions and (b) computer-aided diagnostic/progn
ostic/predictive modeling. This talk will touch upon example studies in th
is space\, as well as an overview of the largest to-date real-world federa
ted learning study to detect brain tumor boundaries.
Daniel Marcus\, PhD
\nProfessor of Radiology \nDirector of the Neuroinformatics Research
Group \nDirector of the Neuroimaging Informatics and Analysis Center<
br />\nWashington University
\n
Abstract: \nDeveloping and deplo
ying computational tools for neuro-oncology applications includes a sequen
ce of complex steps to identify appropriate images\, assess image quality\
, annotate\, process\, and otherwise prepare and manipulate data for analysis. W
e have implemented services and tools on the open source XNAT informatics
platform to automate much of this workflow to improve both its efficiency
and effectiveness. Dr. Marcus will discuss this automated workflow and its
implementation in a number of data sets and applications at Washington Un
iversity.
Lena Maier-Hein\, PhD \nHead of Department\, Computer As
sisted Medical Interventions \nManaging Director\, Data Science and D
igital Oncology \nManaging Director\, National Center for Tumor Disea
ses \nGerman Cancer Research Center
\n
Title: Missing the
(Bench)mark?
\n
Abstract: \n
Machine
learning has begun to revolutionize almost all areas of health research. S
uccess stories cover a wide variety of application fields ranging from rad
iology and gastroenterology all the way to mental health. Strikingly\, how
ever\, solutions that perform favorably in research generally do not trans
late well to clinical practice\, and little attention is being given to le
arning from failures. Focusing on biomedical image analysis as a key area
of health-related machine learning\, this talk will present pitfalls\, cav
eats and recommendations related to machine learning-based biomedical imag
e analysis. As a particular highlight\, it will cover yet unpublished work
on two key research questions related to biomedical image analysis compet
itions: 1) How can we best select performance metrics according to the cha
racteristics of the driving biomedical question? And 2) Why is the winner
the best? The results have been compiled based on the input of hundreds of
image analysis researchers worldwide.
\nLauren Oa
kden-Rayner\, PhD \nDirector of Research in Medical Imaging
\nRoyal Adelaide Hospital \nSenior Research Fellow \nAustralian
Institute for Machine Learning
\n
Title: Medical AI
Safety – A Clinical Perspective
\n
Abstract: \nMedical ar
tificial intelligence is rapidly moving into clinics\, particularly in ima
ging-based specialties such as radiology. This transition is producing man
y new challenges\, as the regulatory environment has struggled to keep up
and AI training for healthcare workers is virtually non-existent. Dr. Oakd
en-Rayner will provide a clinical safety perspective on medical AI\, discu
ss a range of identified risks and potential harms\, and discuss possible
solutions to mitigate these risks as this exciting field continues to deve
lop.
\n
Bio: \nDr. Lauren Oakden-Rayner (FRANZC
R\, PhD) is the Director of Research in Medical Imaging at the Royal Adela
ide Hospital and is a senior research fellow at the Australian Institute f
or Machine Learning. Her research explores the safe translation of artific
ial intelligence technologies into clinical practice\, from both a technical and a clinical perspective. \n
David Magnus\, PhD \nThomas A Raffin Professor of Medicine and B
iomedical Ethics and Professor of Pediatrics\, Medicine\, and by courtesy
of Bioengineering \nDirector\, Stanford Center for Biomedical Ethics<
br />\nAssociate Dean for Research \nStanford University
\n
T
itle: Ethical Challenges in the Application of AI to Healthcare
\n<
p>Abstract: \nThis presentation will focus on three issues. Fi
rst\, applying AI to healthcare requires access to large data sets. Data a
cquisition and data sharing raise a number of challenging ethical issues\, including challenges to traditional understandings of informed consent\, and the importance of diversity and inclusion in data sources. Second\, I wil
l briefly discuss the widely discussed issues around justice and equity ra
ised by AI in healthcare. Finally\, I will discuss challenges with ethical
oversight and governance\, particularly in relation to research developme
nt of AI. IRBs are prohibited from considering downstream social conseque
nces and harms to individuals other than research participants when evalua
ting the harms and risks of research. This gap needs to be filled\, partic
ularly as dual uses of AI models are now recognized as a problem.\n
Bio: \nDavid Magnus\, Ph
D is Thomas A. Raffin Professor of Medicine and Biomedical Ethics and Prof
essor of Pediatrics and Medicine and by Courtesy of Bioengineering at Stan
ford University\, where he is Director of the Stanford Center for Biomedic
al Ethics and an Associate Dean of Research. Magnus is a member of the Ethic
s Committee for the Stanford Hospital. He is currently the Vice-Chair of t
he IRB for the NIH Precision Medicine Initiative (“All of Us”). He is the
former President of the Association of Bioethics Program Directors\, and i
s the Editor in Chief of the American Journal of Bioethics. He has publish
ed articles on a wide range of topics in bioethics\, including research et
hics\, genetics\, stem cell research\, organ transplantation\, end of life
\, and patient communication. He was a member of the Secretary of Agricult
ure’s Advisory Committee on Biotechnology in the 21st Century and currentl
y serves on the California Human Stem Cell Research Advisory Committee. He
is the principal editor of a collection of essays entitled “Who Owns Life
?” (2002) and his publications have appeared in New England Journal of Med
icine\, Science\, Nature Biotechnology\, and the British Medical Journal.
He has appeared on many radio and television shows including 60 Minutes\,
Good Morning America\, The Today Show\, CBS This Morning\, FOX news Sunday
\, and ABC World News and NPR. In addition to his scholarly work\, he has
published Opinion pieces in the Philadelphia Inquirer\, the Chicago Tribun
e\, the San Jose Mercury News\, and the New Jersey Star Ledger.
DTSTART;TZID=America/Los_Angeles:20220921T133000
DTEND;TZID=America/Los_Angeles:20220921T143000
LOCATION:ZOOM: https://stanford.zoom.us/j/99191454207?pwd=N0ZYWnh1Mks0UEluO
VRUZjdWNHZPUT09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: Ethical Challenges in the Application of AI t
o Healthcare
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-ethical-challenges-in-the-application-of-ai-to-healthcare/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2022/09/david_magnus_ep_44_good.jpg\;200\;200\
,medium\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-conte
nt/uploads/2022/09/david_magnus_ep_44_good.jpg\;200\;200\,large\;http://we
b.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2022/09
/david_magnus_ep_44_good.jpg\;200\;200\,full\;http://web.stanford.edu/grou
p/radweb/cgi-bin/radcalendar/wp-content/uploads/2022/09/david_magnus_ep_44
_good.jpg\;200\;200
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3093@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/seminars/2022seminars.html
DESCRIPTION:\n
Polina Golland\, PhD\nProfessor of Electrical Engineering and Computer Science \nPI i
n the Computer Science and Artificial Intelligence Laboratory \nMassa
chusetts Institute of Technology
\n
Title: Learning to Read X-
Ray: Applications to Heart Failure Monitoring
\n
Abstract: We
propose and demonstrate a novel approach to training image classification
models based on large collections of images with limited labels. We take a
dvantage of the availability of radiology reports to construct a joint multimoda
l embedding that serves as a basis for classification. We demonstrate the
advantages of this approach in application to the assessment of pulmonary edema severity in congestive heart failure\, the problem that motivated the development of the method.
Baris Turkbey\, MD\, FSAR \nSenior Cli
nician \nSection Chief of MRI \nSection Chief of Artificial Inte
lligence \nMolecular Imaging Branch \nNational Cancer Institute\
, NIH
\n
Title: Advanced Prostate Cancer Imaging
\n
Talk Objectives:
\n
\n
To discuss current status and
limitations of localized prostate cancer diagnosis.
\n
To discuss
use of artificial intelligence in diagnosis of localized prostate cancer.<
/li>\n
To discuss use of molecular imaging in clinical prostate cancer
management.
\n
\n
Bio: \nDr. Turkbey obtai
ned his medical degree from Hacettepe University in Ankara\, Turkey in 200
3. He completed his residency in Diagnostic and Interventional Radiology a
t Hacettepe University. He joined the Molecular Imaging Branch (MIB)\, Nationa
l Cancer Institute\, NIH in 2007. His main research areas are imaging of p
rostate cancer (multiparametric MRI\, PET CT)\, image guided biopsy and tr
eatment techniques (focal therapy\, surgery and radiation therapy) for pro
state cancer and artificial intelligence. Dr. Turkbey is a member of the Prostate Imaging Reporting & Data System (PI-RADS) Steering Committee. He is the Director of the Magnetic Resonance Imaging section and of the Artificial Intelligence Resource in MIB.
In Person at the Clark Center S360 – Lunch will be p
rovided! \nZoom: https://stanford.zoom.us/j
/99496515255?pwd=MHlXbXM2WXJULzZwemk1WjJHNFZOdz09
\n\n
Anthony Gatti\, PhD \nPostdoctoral Research Fellow
\nDepartment of Radiology \nWu Tsai Human Performance Alliance<
br />\nStanford University
\n
Title: Towards Understanding Knee Health Using Automated MR
I-Based Statistical Shape Models
\n
Abstract: Knee injuries and pain are prevalent across all age
s\, with causes ranging from “anterior knee pain” in runners to osteoarthr
itis-related pain. Osteoarthritis pain is a particular problem because str
uctural outcomes assessed on medical images often disagree with symptoms.
Most studies trying to understand knee health and pain use simple biomarke
rs such as mean cartilage thickness. My talk will present an automated pip
eline for quantifying the whole knee using statistical shape modeling. I w
ill present a conventional statistical shape model as well as a novel appr
oach that uses generative neural implicit representations. Both modeling a
pproaches allow unsupervised identification of salient anatomic features.
I will demonstrate how these features can be used to predict existing radi
ographic outcomes\, patient demographics\, and knee pain.
\n<
p>Liangqiong Qu\, PhD \nPostdoctoral Research Fellow \nDe
partment of Biomedical Data Sciences \nStanford University\n
Title: Distributed Deep Learning in Medical Imaging
\n
Abstract: Distributed deep
learning is an emerging research paradigm for enabling collaboratively tra
ining deep learning models without sharing patient data. \nIn this ta
lk\, we will first investigate the use of distributed deep learning to build medical imaging classification models in a real-world collaborative setting. \nWe then present several strategies to tackle the challenges of data heterogeneity and the scarcity of quality labeled data in distributed deep learning.
Archana Venkataraman\, PhD \nAssociate Profes
sor of Electrical and Computer Engineering \nBoston University
\n
Title: Biologically Inspired Deep Learning as a New Window into Brain Dysfunction
\n
Abstract: Deep learning has disr
upted nearly every major field of study from computer vision to genomics.
The unparalleled success of these models has\, in many cases\, been fueled
by an explosion of data. Millions of labeled images\, thousands of annota
ted ICU admissions\, and hundreds of hours of transcribed speech are commo
n standards in the literature. Clinical neuroscience is a notable holdout
to this trend. It is a field of unavoidably small datasets\, massive patie
nt variability\, and complex (largely unknown) phenomena. My lab tackles t
hese challenges across a spectrum of projects\, from answering foundationa
l neuroscientific questions to translational applications of neuroimaging
data to exploratory directions for probing neural circuitry. One of our ke
y strategies is to integrate a priori information about the brain a
nd biology into the model design.
\n
This talk will highlight two ong
oing projects that epitomize this strategy. First\, I will showcase an end
-to-end deep learning framework that fuses neuroimaging\, genetic\, and ph
enotypic data\, while maintaining interpretability of the extracted biomar
kers. We use a learnable dropout layer to extract a sparse subset of predi
ctive imaging features and a biologically informed deep network architectu
re for whole-genome analysis. Specifically\, the network uses hierarchical
graph convolutions that mimic the organization of a well-established gene
ontology to track the convergence of genetic risk across biological pathwa
ys. Second\, I will present a deep-generative hybrid model for epileptic s
eizure detection from scalp EEG. The latent variables in this model captur
e the spatiotemporal spread of a seizure\; they are complemented by a nonp
arametric likelihood based on convolutional neural networks. I will also h
ighlight our current end-to-end extensions of this work focused on seizure
onset localization. Finally\, I will conclude with exciting future direct
ions for our work across the foundational\, translational\, and explorator
y axes.
DTSTART;TZID=America/Los_Angeles:20230118T120000
DTEND;TZID=America/Los_Angeles:20230118T130000
LOCATION:Zoom: https://stanford.zoom.us/j/96155849129?pwd=MTVtenF6RWdHMEwwd
EZoV3NhM0svUT09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Zoom Seminar: Biologically Inspired Deep Learning as a
New Window into Brain Dysfunction
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-biologically-inspired-deep-learning-as-a-new-window-into-brain-
dysfunction/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2023/01/Picture1-298x300.jpg\;298\;300\,medium
\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uplo
ads/2023/01/Picture1-298x300.jpg\;298\;300\,large\;http://web.stanford.edu
/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2023/01/Picture1-298x
300.jpg\;298\;300\,full\;http://web.stanford.edu/group/radweb/cgi-bin/radc
alendar/wp-content/uploads/2023/01/Picture1-298x300.jpg\;298\;300
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3120@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/seminars/2023seminars.html
DESCRIPTION:\n
Andrew Janowczyk\, P
hD \nAssistant Professor \nDepartment of Biomedical Engineer
ing \nEmory University
\n
Title: Computational Pathology:
Towards Precision Medicine
\n
Abstract: \nRoughly 40% of
the population will be diagnosed with some form of cancer in their lifeti
me. In a large majority of these cases\, a definitive cancer diagnosis is
only possible via histopathologic confirmation on a tissue slide. With the
increasing popularity of the digitization of pathology slides\, a wealth
of new untapped data is now regularly being created.
\n
Computational
analysis of these routinely captured H&E slides is facilitating the creat
ion of diagnostic tools for tasks such as disease identification and gradi
ng. Further\, by identifying patterns of disease presentation across large
cohorts of retrospectively analyzed patients\, new insights for predictin
g prognosis and therapy response are possible [1\,2]. Such biomarkers\, de
rived from inexpensive histology slides\, stand to improve the standard of
care for all patient populations\, especially where expensive genomic tes
ting may not be readily available. Moreover\, since numerous other disease
s and disorders\, such as oncoming clinical heart failure [3]\, are simila
rly diagnosed via pathology slides\, those patients also stand to benefit
from these same technological advances in the digital pathology space.
\n
This talk will discuss our research aimed towards reaching the goal o
f precision medicine\, wherein patients receive optimized treatment based
on historical evidence. The talk discusses how the applications of deep le
arning in this domain are significantly improving the efficiency and robus
tness of these models [4]. Numerous challenges remain\, though\, especiall
y in the context of quality control and annotation gathering. This talk fu
rther introduces the audience to open-source tools being developed and dep
loyed to meet these pressing needs\, including quality control (histoqc.co
m [5])\, annotation (quickannotator.com)\, labeling (patchsorter.com)\, and
validation (cohortfinder.com).
Meli
ssa McCradden\, PhD \nJohn and Melinda Thompson Director of Artif
icial Intelligence in Medicine \nIntegration Lead\, AI in Medicine In
itiative \nBioethicist\, The Hospital for Sick Children (SickKids) \nAssociate Scientist\, Genetics & Genome Biology \nAssistant Prof
essor\, Dalla Lana School of Public Health
\n
Title: What Make
s a ‘Good’ Decision? An Empirical Bioethics Study of Using AI at the Bedsi
de
\n
Abstract: This presentation will identify the gap between
AI accuracy and making good clinical decisions. I will present a study in
which we develop an ethical framework for clinical decision-making that can
help clinicians meet medicolegal and ethical standards when using AI\,
relying on neither explainability nor perfect model accuracy.
DTSTART;TZID=America/Los_Angeles:20230315T120000
DTEND;TZID=America/Los_Angeles:20230315T130000
LOCATION:https://stanford.zoom.us/j/96612401401?pwd=WFNJb2Q4dStoVDE5a25BYTB
kMjN4QT09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: What Makes a ‘Good’ Decision? An Empirical Bi
oethics Study of Using AI at the Bedside
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-what-makes-a-good-decision-an-empirical-bioethics-study-of-usin
g-ai-at-the-bedside/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2023/03/Screen-Shot-2023-03-06-at-10.12.28-AM-
247x300.png\;247\;300\,medium\;http://web.stanford.edu/group/radweb/cgi-bi
n/radcalendar/wp-content/uploads/2023/03/Screen-Shot-2023-03-06-at-10.12.2
8-AM-247x300.png\;247\;300\,large\;http://web.stanford.edu/group/radweb/cg
i-bin/radcalendar/wp-content/uploads/2023/03/Screen-Shot-2023-03-06-at-10.
12.28-AM-247x300.png\;247\;300\,full\;http://web.stanford.edu/group/radweb
/cgi-bin/radcalendar/wp-content/uploads/2023/03/Screen-Shot-2023-03-06-at-
10.12.28-AM-247x300.png\;247\;300
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3134@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/seminars/2023seminars.html
DESCRIPTION:\n
Marzyeh Ghassemi\, PhD \nAssistant Professor\, Department of Electrical Engineering and
Computer Science \nInstitute for Medical Engineering & Science
\nMassachusetts Institute of Technology (MIT) \nCanadian CIFAR AI Cha
ir at Vector Institute
\n
Title: Designing Machine Learning Pr
ocesses For Equitable Health Systems
\n
Abstract: \nDr. Marzyeh Ghassemi focuses on creating and applying machine learnin
g to understand and improve health in ways that are robust\, private and f
air. Dr. Ghassemi will talk about her work trying to train models that do
not learn biased rules or recommendations that harm minorities or minoriti
zed populations. The Healthy ML group tackles the many novel technical opp
ortunities for machine learning in health\, and works to make important pr
ogress with careful application to this domain.
Hoifung Poon\, PhD \nGeneral Manager at Health Futures o
f Microsoft Research \nAffiliated Professor at the University of Wash
ington Medical School.
\n
Title: Advancing Health at the Speed
of AI
\n
Abstract: The dream of precision health is to develop a data-driven\, continuous
learning system where new health information is instantly incorporated to
optimize care delivery and accelerate biomedical discovery. In reality\,
however\, the health ecosystem is plagued by overwhelming unstructured dat
a and unscalable manual processing. Self-supervised AI such as large langu
age models (LLMs) can supercharge structuring of biomedical data and accel
erate transformation towards precision health. In this talk\, I’ll present
our research progress on biomedical AI for precision health\, spanning bi
omedical LLMs\, multi-modal learning\, and causal discovery. This enables
us to extract knowledge from tens of millions of publications\, structure
real-world data for millions of cancer patients\, and apply the extracted
knowledge and real-world evidence to advance precision oncology in deep
partnerships with real-world stakeholders.
\n
DTSTART;TZID=America/Los_Angeles:20230426T143000
DTEND;TZID=America/Los_Angeles:20230426T153000
LOCATION:LKSC 120 and remote via Zoom @ https://stanford.zoom.us/j/92666973
395?pwd=SHpzVmVPMEFYRXQ5Skp5eG1vcXBrdz09
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: Advancing Health at the Speed of AI
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-advancing-health-at-the-speed-of-ai/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2023/04/Hoifung-Poon-PhD.jpg\;200\;198\,medium
\;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uplo
ads/2023/04/Hoifung-Poon-PhD.jpg\;200\;198\,large\;http://web.stanford.edu
/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2023/04/Hoifung-Poon-
PhD.jpg\;200\;198\,full\;http://web.stanford.edu/group/radweb/cgi-bin/radc
alendar/wp-content/uploads/2023/04/Hoifung-Poon-PhD.jpg\;200\;198
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3144@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/seminars/2023seminars.html
DESCRIPTION:\n
Despina Kontos\, PhD \nMatthew J. W
ilson Professor of Research Radiology II \nAssociate Vice-Chair for R
esearch\, Department of Radiology \nPerelman School of Medicine
\nUniversity of Pennsylvania
\n
Title: Radiomics and Radiogeno
mics: The Role of Imaging\, Machine Learning\, and AI\, as a Biomarker for
Cancer Prognostication and Therapy Response Evaluation
\n
Abstrac
t: Cancer is a heterogeneous disease\, with known inter-tumor and intr
a-tumor heterogeneity in solid tumors. Established histopathologic prognos
tic biomarkers generally acquired from a tumor biopsy may be limited by sa
mpling variation. Radiomics is an emerging field with the potential to lev
erage the whole tumor via non-invasive sampling afforded by medical imagin
g to extract high-throughput\, quantitative features for personalized tumo
r characterization. Identifying imaging phenotypes via radiomics analysis
and understanding their relationship with prognostic markers and patient o
utcomes can allow for a non-invasive assessment of tumor heterogeneity. Re
cent studies have shown that intrinsic radiomic phenotypes of tumor hetero
geneity for cancer may have independent prognostic value when predicting d
isease aggressiveness and recurrence. The independent prognostic value of
imaging heterogeneity phenotypes suggests that radiogenomic phenotypes can
provide a non-invasive characterization of tumor heterogeneity to augment
genomic assays in precision prognosis and treatment.
DTSTART;TZID=America/Los_Angeles:20230517T120000
DTEND;TZID=America/Los_Angeles:20230517T130000
LOCATION:Clark Center S360 - Zoom Details on IBIIS website @ 318 Campus Dri
ve
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar: Radiomics and Radiogenomics: The Role of Imag
ing\, Machine Learning\, and AI\, as a Biomarker for Cancer Prognosticatio
n and Therapy Response Evaluation
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-radiomics-and-radiogenomics-the-role-of-imaging-machine-learnin
g-and-ai-as-a-biomarker-for-cancer-prognostication-and-therapy-response-ev
aluation/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2023/05/kont4311.jpg\;200\;200\,medium\;http:/
/web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploads/2023
/05/kont4311.jpg\;200\;200\,large\;http://web.stanford.edu/group/radweb/cg
i-bin/radcalendar/wp-content/uploads/2023/05/kont4311.jpg\;200\;200\,full\
;http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/uploa
ds/2023/05/kont4311.jpg\;200\;200
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-3150@web.stanford.edu/group/radweb/cgi-bin/radcalendar
DTSTAMP:20240330T045922Z
CATEGORIES;LANGUAGE=en-US:AIMI
CONTACT:Ramzi Totah\; 16507214161\; rtotah@stanford.edu\; https://ibiis.sta
nford.edu/events/seminars/2023seminars.html
DESCRIPTION:\n
Daguang Xu\, PhD \nSenior Research Manager \nNVIDIA Healthcare
\n
Title: Industrial Applied Research i
n Healthcare and Federated Learning at NVIDIA
\n
Abstract: As
the market leader in deep learning and parallel computing\, NVIDIA is full
y committed to advancing applied research in medical imaging. Our goal is
to revolutionize the capabilities of medical doctors and radiologists by e
quipping them with powerful tools and applications based on deep learning.
We firmly believe that the integration of deep learning and accelerated A
I will have a profound impact on the life sciences\, medicine\, and the he
althcare industry as a whole. To drive this transformative process\, NVIDI
A is actively democratizing deep learning through the provision of a compr
ehensive AI computing platform specifically designed for the healthcare co
mmunity. These GPU-accelerated solutions not only promote collaboration bu
t also prioritize the security of each institution’s information. By doing
so\, we are fostering a collective effort in harnessing the potential of
deep learning to benefit healthcare.
\n
During this talk\, I will sho
wcase remarkable research achievements accomplished by NVIDIA’s deep learn
ing in medical imaging team. This includes breakthroughs in segmentation\,
self-supervised learning\, federated learning\, and other related areas.
Additionally\, I will provide insights into the exciting avenues of resear
ch that our team is currently exploring.
Negar Golestani\, PhD \nPostdoctoral Research Fellow \nDepartment of Radiology \nStanford University
\n
Title: AI in Radiology-Pathology Fusion Towards Precise Breast Cancer Detection
\n
Abstract: Breast cancer is a global public health concern with various treatmen
t options based on tumor characteristics. Pathological examination of exci
sed tissue after surgery provides important information for treatment deci
sions. This pathology processing\, which involves the manual selection of
representative sections for histological examination\, is time-consuming
and subjective and can lead to sampling errors. Accurately identifying
residual tumors is a challenging task\, which highlights the need for sys
tematic or assisted methods. Radiology-pathology registration is essential
for developing deep-learning algorithms to automate cancer detection on r
adiology images. However\, aligning faxitron and histopathology images is
difficult due to content and resolution differences\, tissue deformation\,
artifacts\, and imprecise correspondence. We propose a novel deep learnin
g-based pipeline for affine registration of faxitron images (x-ray represe
ntations of macrosections of ex-vivo breast tissue) with their correspondi
ng histopathology images. Our model combines convolutional neural networks
(CNN) and vision transformers (ViT)\, capturing local and global informat
ion from the entire tissue macrosection and its segments. This integrated
approach enables simultaneous registration and stitching of image segments
\, facilitating segment-to-macrosection registration through a puzzle-based
mechanism. To overcome the limitations of multi-modal ground truth dat
a\, we train the model using synthetic mono-modal data in a weakly supervi
sed manner. The trained model successfully performs multi-modal registrati
on\, outperforms existing baselines\, including deep learning-based and it
erative models\, and is approximately 200 times faster than the iterative
approach. The application of the proposed registration method allows for the p
recise mapping of pathology labels onto radiology images\, thereby establi
shing ground truth labels for training classification and detection models
on radiological data. This work bridges the gap in current research and c
linical workflow\, offering potential improvements in efficiency and accur
acy for breast cancer evaluation and streamlining pathology workflow.
\n
Jean Benoit Delbrouck\, PhD \nResearch Scientist \nDepartment of Radiology \nStanford Unive
rsity \n
\n
Title: Generating Accurate and Factually Correct Medical Text
\nAbstract: Generating factually correct medical
text is of utmost importance for several reasons. Firstly\, patient sa
fety is heavily dependent on accurate information as medical decisions are
often made based on the information provided. Secondly\, trust in AI as a
reliable tool in the medical field is essential\, and this trust can only
be established by generating accurate and reliable medical text. Lastly\,
medical research also relies heavily on accurate information for meaningf
ul results.
\n
Recent studies have explore
d new approaches for generating medical text from images or findings\, ran
ging from pretraining to Reinforcement Learning\, and leveraging expert an
notations. However\, a potential game changer in the field is the integrat
ion of GPT models in pipelines for generating factually correct medical te
xt for research or production purposes.
Bram van G
inneken\, PhD \nProfessor of Medical Image Analysis \nChair
of the Diagnostic Image Analysis Group \nRadboud University Medical C
enter
\n
Title: Why AI Should Replace Radiologists
\n
Abstract: \nIn this talk\, I will provide arguments for the thesi
s that nearly all diagnostic radiology could be performed by computers and
that the notion that AI will not replace radiologists is only temporarily
true. Some well-known and lesser-known examples of AI systems analyzing m
edical images with a stand-alone performance on par or beyond human expert
s will be presented. I will show that systems built by academia\, in colla
borative efforts\, may even outperform commercially available systems. Nex
t\, I will sketch a way forward to implement automated diagnostic radiolog
y and argue that this is needed to keep healthcare affordable in societie
s wrestling with aging populations. Some pitfalls\, like excessive demands
for trials\, will be discussed. The key to success is to convince radiolo
gists to take the lead in this process. They need to collaborate with AI d
evelopers\, but AI developers and the medical device industry should not l
ead this process. Radiologists should\, in fact\, stop training radiologis
ts\, and instead\, start training machines.
Andrey Fedorov\, PhD \nAssociate Professor\, Harvard Medical School \nLead Investigat
or\, Brigham and Women’s Hospital
\n
Title: NCI Imaging Data Commons: Towards Transparenc
y\, Reproducibility\, and Scalability in Imaging AI
\n
Abstract: \nThe re
markable advances of artificial intelligence (AI) technology are revolutio
nizing established approaches to the acquisition\, interpretation\, and an
alysis of biomedical imaging data. Development\, validation\, and continuo
us refinement of AI tools require easy access to large\, high-quality anno
tated datasets\, which are both representative and diverse. The National C
ancer Institute (NCI) Imaging Data Commons (IDC) hosts over 50 TB of diver
se publicly available cancer image data spanning radiology and microscopy
domains. By harmonizing all data based on industry standards and colocali
zing it with analysis and exploration resources\, IDC aims to facilitate t
he development\, validation\, and clinical translation of AI tools and add
ress the well-documented challenges of establishing reproducible and tran
sparent AI processing pipelines. Balanced use of established commercial pr
oducts with open-source solutions\, interconnected by standard interfaces
\, provides value and performance\, while preserving sufficient agility to
address the evolving needs of the research community. Emphasis on the dev
elopment of tools\, use cases to demonstrate the utility of uniform data r
epresentation\, and cloud-based analysis aims to ease adoption and help de
fine best practices. Integration with other data in the broader NCI Cancer
Research Data Commons infrastructure opens opportunities for multiomics s
tudies incorporating imaging data to further empower the research communit
y to accelerate breakthroughs in cancer detection\, diagnosis\, and treatm
ent. The presentation will discuss the recent developments in IDC\, highli
ghting resources\, demonstrations and examples that we hope can help you i
mprove your everyday imaging research practices – both those that use publ
ic and internal datasets.
\n
DTSTART;TZID=America/Los_Angeles:20240320T120000
DTEND;TZID=America/Los_Angeles:20240320T130000
LOCATION:Clark Center S360 - Zoom Details on IBIIS website @ 318 Campus Dri
ve
SEQUENCE:0
SUMMARY:IBIIS & AIMI Seminar – NCI Imaging Data Commons: Towards Transparen
cy\, Reproducibility\, and Scalability in Imaging AI
URL:http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/event/ibiis-ai
mi-seminar-nci-imaging-data-commons-towards-transparency-reproducibility-a
nd-scalability-in-imaging-ai/
X-COST-TYPE:free
X-WP-IMAGES-URL:thumbnail\;http://web.stanford.edu/group/radweb/cgi-bin/rad
calendar/wp-content/uploads/2024/03/Andrey-Fedorov.jpg\;200\;200\,medium\;
http://web.stanford.edu/group/radweb/cgi-bin/radcalendar/wp-content/upload
s/2024/03/Andrey-Fedorov.jpg\;200\;200\,large\;http://web.stanford.edu/gro
up/radweb/cgi-bin/radcalendar/wp-content/uploads/2024/03/Andrey-Fedorov.jp
g\;200\;200\,full\;http://web.stanford.edu/group/radweb/cgi-bin/radcalenda
r/wp-content/uploads/2024/03/Andrey-Fedorov.jpg\;200\;200
END:VEVENT
END:VCALENDAR