Location & Timing
August 5, 2020
8:30 AM-4:30 PM
Livestream: details to come
This event is free and open to all!
Registration and Event details
Overview
Advances of machine learning and artificial intelligence into all areas of medicine are now a reality, and they hold the potential to transform healthcare and open up a world of incredible promise for everyone. Sponsored by the Stanford Center for Artificial Intelligence in Medicine and Imaging, the 2020 AIMI Symposium is a virtual conference convening experts from Stanford and beyond to advance the field of AI in medicine and imaging. This conference will cover a survey of the latest machine learning approaches, in-depth use cases, metrics unique to healthcare, important challenges and pitfalls, and best practices for designing, building, and evaluating machine learning in healthcare applications.
Our goal is to make the best science accessible to a broad audience of academic, clinical, and industry attendees. Through the AIMI Symposium we hope to address gaps and barriers in the field and catalyze more evidence-based solutions to improve health for all.
Judy Gichoya, MD
Assistant Professor
Emory University School of Medicine
Measuring Learning Gains in Man-Machine Assemblage When Augmenting Radiology Work with Artificial Intelligence
Abstract
The work setting of the future presents an opportunity for human-technology partnerships, in which a harmonious connection between human and technology produces unprecedented productivity gains. A conundrum at this human-technology frontier remains: will humans be augmented by technology, or will technology be augmented by humans? We present our work on overcoming this conundrum by no longer treating human and machine as separate entities and instead treating them as an assemblage. As groundwork for a harmonious human-technology connection, this assemblage needs to learn to fit together synergistically. This learning, called assemblage learning, will be important for Artificial Intelligence (AI) applications in health care, where diagnostic and treatment decisions augmented by AI will have a direct and significant impact on patient care and outcomes. We describe how learning can be shared between assemblages, such that collective swarms of connected assemblages can be created. Our aim is to demonstrate a symbiotic learning assemblage, such that the envisioned productivity gains from AI can be achieved without the loss of human jobs.
Specifically, we are evaluating the following research questions. Q1: How can we develop assemblages such that human-technology partnerships produce a "good fit" for visually based, cognition-oriented tasks in radiology? Q2: What level of training should pre-exist in the individual human (radiologist) and in the independent machine learning model for human-technology partnerships to thrive? Q3: In which aspects, and to what extent, does an assemblage learning approach lead to reduced errors, improved accuracy, faster turn-around times, reduced fatigue, improved self-efficacy, and resilience?
Zoom: https://stanford.zoom.us/j/93580829522?pwd=ZVAxTCtEdkEzMWxjSEQwdlp0eThlUT09
Join us for the 3rd Annual Diversity and Inclusion Forum on Friday, October 9, 2020, on Zoom! This virtual event will highlight innovative workshops developed by our residents and fellows, together with their educational mentors, who have participated in the 2019-2020 cohort of the Leadership Education in Advancing Diversity Program.
The event will be an enriching opportunity for all faculty, residents, fellows, postdocs, students, staff, and community members to learn tools and strategies to enable them to become effective change agents for diversity, equity, and inclusion in medical education.
All are welcome to participate and we look forward to seeing you on Friday, October 9!
Register here:
https://mailchi.mp/046c21726371/diversityforum2020-1632872?e=4a913cab2d
In honor of the 30th anniversary of the Americans with Disabilities Act and October as National Disability Employment Awareness Month, join the Stanford Medicine Abilities Coalition (SMAC) for a first-of-its-kind StanfordMed LIVE event focused on disability. Now more than ever during the COVID-19 pandemic, disabilities, health conditions, and illness impact not only our patients but also all of us, both personally and as members of the Stanford Medicine community. Stanford Medicine leadership will share information, answer questions, and engage in a roundtable discussion about the state of disability at Stanford and how best to support faculty, staff, and students living with disability and chronic illness. We encourage our community to submit questions and comments here to be shared broadly with the Stanford Medicine community. The same link can be used to request any accommodations needed for the livestream. Additional information for the webcast itself will be sent out closer to the event.
Livestream link: https://livestream.com/accounts/1973198/events/9288854
Ge Wang, PhD
Clark & Crossan Endowed Chair Professor
Director of the Biomedical Imaging Center
Rensselaer Polytechnic Institute
Troy, New York
Abstract:
AI-based tomography is an important application and a new frontier of machine learning. AI, especially deep learning, has been widely used in computer vision and image analysis, which deal with existing images, improving them and extracting features. Since 2016, deep learning techniques have been actively researched for tomography in the context of medicine. Tomographic reconstruction produces images of multi-dimensional structures from externally measured "encoded" data in the form of various transforms (integrals, harmonics, and so on). In this presentation, we provide a general background, highlight representative results, and discuss key issues that need to be addressed in this emerging field.
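As a simplified illustration of the "encoded" data mentioned above (our notation, not the speaker's), 2D parallel-beam CT records line integrals of the object, i.e., its Radon transform, and reconstruction amounts to inverting that transform; deep-learning approaches learn part or all of this inverse mapping:

\[
p(\theta, s) = \int_{\mathbb{R}^2} f(x, y)\, \delta(x\cos\theta + y\sin\theta - s)\, dx\, dy,
\qquad
\hat{f} = \mathcal{R}_{\phi}(p),
\]

where \(f\) is the underlying image, \(p\) is its sinogram, and \(\mathcal{R}_{\phi}\) is a reconstruction mapping with trainable parameters \(\phi\).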
About:
The AI-based X-ray Imaging System (AXIS) lab is led by Dr. Ge Wang and is affiliated with the Department of Biomedical Engineering at Rensselaer Polytechnic Institute and with the Biomedical Imaging Center in the Center for Biotechnology and Interdisciplinary Studies. The AXIS lab focuses on innovation and translation of x-ray computed tomography, optical molecular tomography, multi-scale and multi-modality imaging, and AI/machine learning for image reconstruction and analysis, and it has been continuously well funded by federal agencies and leading companies. The AXIS group collaborates with Stanford, Harvard, Cornell, MSK, UTSW, Yale, GE, Hologic, and others to develop theories, methods, software, systems, applications, and workflows.
Date: April 10, 2021 (8 AM-6 PM)
- 8:00 AM-8:20 AM: Opening remarks (Zainub and Pete)
- 8:20 AM-9:20 AM: Talk 1, "I fought the law and no one won"
- 10-minute break
- 9:30 AM-10:30 AM: Talk 2, students and doctors with disabilities panel
- 20-minute break
- 10:50 AM-11:50 AM: Breakout 1
- One-hour lunch (TBD)
- 12:50 PM-1:50 PM: Talk 3, the frontiers of disability research
  - Lisa Meeks is moderating
  - Bonnie Swenor invited
- 10-minute break
- 2:00 PM-3:00 PM: Breakout 2
- 10-minute break
- 3:10 PM-4:10 PM: Talk 4, do-it-yourself disability advocacy (Poullos/Tolchin with students)
- 4:10 PM-4:30 PM: Closing remarks
- 4:30 PM-6:00 PM: Virtual happy hour
Radiology Department-Wide Research Meeting
• Research Announcements
• Mirabela Rusu, PhD – Learning MRI Signatures of Aggressive Prostate Cancer: Bridging the Gap between Digital Pathologists and Digital Radiologists
• Akshay Chaudhari, PhD – Data-Efficient Machine Learning for Medical Imaging
Location: Zoom – Details can be found here: https://radresearch.stanford.edu
Meetings will be held on the 3rd Friday of each month.
Hosted by: Kawin Setsompop, PhD
Sponsored by: the Department of Radiology
Stanford AIMI Director Curt Langlotz and Co-Directors Matt Lungren and Nigam Shah invite you to join us on August 3 for the 2021 Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI) Symposium. The virtual symposium will focus on the latest and best research on the role of AI in diagnostic excellence across medicine, current areas of impact, fairness and societal impact, and translation and clinical implementation. The program includes talks, interactive panel discussions, and breakout sessions. Registration is free and open to all.
Also, the 2nd Annual BiOethics, the Law, and Data-sharing: AI in Radiology (BOLD-AIR) Summit will be held on August 4, in conjunction with the AIMI Symposium. The summit will convene a broad range of speakers across bioethics, law, regulation, industry, and patient safety and data privacy to address the latest ethical, regulatory, and legal challenges regarding AI in radiology.
Regina Barzilay, PhD
School of Engineering Distinguished Professor for AI and Health
Electrical Engineering and Computer Science Department
AI Faculty Lead at Jameel Clinic for Machine Learning in Health
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology
Abstract:
In this talk, I will present methods for assessing future cancer risk from medical images. The discussion will explore alternative ways to formulate the risk assessment task and focus on algorithmic issues in developing such models. I will also discuss our experience in translating these algorithms into clinical practice in hospitals around the world.
Keynote:
Self-Supervision for Learning from the Bottom Up
Why do self-supervised learning? A common answer is: “because data labeling is expensive.” In this talk, I will argue that there are other, perhaps more fundamental reasons for working on self-supervision. First, it should allow us to get away from the tyranny of top-down semantic categorization and force meaningful associations to emerge naturally from the raw sensor data in a bottom-up fashion. Second, it should allow us to ditch fixed datasets and enable continuous, online learning, which is a much more natural setting for real-world agents. Third, and most intriguingly, there is hope that it might be possible to force a self-supervised task curriculum to emerge from first principles, even in the absence of a pre-defined downstream task or goal, similar to evolution. In this talk, I will touch upon these themes to argue that, far from running its course, research in self-supervised learning is only just beginning.