<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Michael Gensheimer's research page - Teaching</title><link href="https://web.stanford.edu/~mgens/" rel="alternate"/><link href="https://web.stanford.edu/~mgens/feeds/teaching.atom.xml" rel="self"/><id>https://web.stanford.edu/~mgens/</id><updated>2022-09-10T00:00:00-07:00</updated><entry><title>Trial results change over time</title><link href="https://web.stanford.edu/~mgens/trial-results-change-over-time.html" rel="alternate"/><published>2022-09-10T00:00:00-07:00</published><updated>2022-09-10T00:00:00-07:00</updated><author><name>Michael Gensheimer</name></author><id>tag:web.stanford.edu,2022-09-10:/~mgens/trial-results-change-over-time.html</id><summary type="html">&lt;p&gt;I recently read the long-term follow-up results from the NLST lung cancer screening trial, which got me thinking about how trial results can change over time as the initial benefits of an intervention wear off. To explore this idea, I created a simple simulation to show how the efficacy signal …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently read the long-term follow-up results from the NLST lung cancer screening trial, which got me thinking about how trial results can change over time as the initial benefits of an intervention wear off. To explore this idea, I created a simple simulation to show how the efficacy signal can get diluted as more years of data are added. In the original NLST study, screening with chest CTs showed clear benefits for lung cancer survival after 3 years, but after 12 years of follow-up, the difference in survival between the screened and unscreened groups was no longer statistically significant. My simulation used a toy example with a blood-sugar-lowering medication to demonstrate how this can happen.&lt;/p&gt;
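&lt;p&gt;The notebook has the full code, but a minimal sketch of the idea looks something like the Python below. Every number in it is an assumption chosen for illustration, not a value from the notebook: 100 patients per arm, a stable per-patient baseline blood sugar near 150 mg/dL, and a 10 mg/dL treatment effect that lasts only 3 of the 12 years. As more no-effect years get pooled into the comparison, the p-value tends to drift upward.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100           # patients per arm (assumed)
years = 12        # total years of follow-up
effect_years = 3  # assumed: the drug only lowers blood sugar for 3 years

def simulate(effect):
    # Stable per-patient baseline plus yearly measurement noise (mg/dL)
    baseline = rng.normal(150, 15, size=(n, 1))
    yearly = baseline + rng.normal(0, 10, size=(n, years))
    yearly[:, :effect_years] -= effect  # early benefit that later wears off
    return yearly

control = simulate(effect=0)
treated = simulate(effect=10)

# Test the between-arm difference using data up to each follow-up point
for t in (1, 3, 6, 12):
    p = stats.ttest_ind(treated[:, :t].mean(axis=1),
                        control[:, :t].mean(axis=1)).pvalue
    print(f"{t:2d} years of follow-up: p = {p:.3f}")
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this setup the 1-year comparison is usually highly significant while the 12-year comparison is not, mirroring the NLST pattern above.&lt;/p&gt;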
&lt;p&gt;See the full details on Google Colab: &lt;a href="https://colab.research.google.com/drive/1sXvXmcJtGp_T2JgSY1ckiq7xk_xGFdgt?usp=sharing"&gt;link&lt;/a&gt;&lt;/p&gt;</content><category term="Teaching"/><category term="trials"/></entry><entry><title>Change in patient population can affect AUC</title><link href="https://web.stanford.edu/~mgens/change-in-patient-population-can-affect-auc.html" rel="alternate"/><published>2020-06-30T00:00:00-07:00</published><updated>2020-06-30T00:00:00-07:00</updated><author><name>Michael Gensheimer</name></author><id>tag:web.stanford.edu,2020-06-30:/~mgens/change-in-patient-population-can-affect-auc.html</id><summary type="html">&lt;p&gt;I made a demo showing how changes in a patient population can affect the performance of a biomarker or classifier, specifically measured by AUC (area under the receiver operating characteristic curve). Using a simulated dataset, I first showed that hemoglobin A1c had an AUC of 0.77 for predicting foot …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I made a demo showing how changes in a patient population can affect the performance of a biomarker or classifier, specifically measured by AUC (area under the receiver operating characteristic curve). Using a simulated dataset, I first showed that hemoglobin A1c had an AUC of 0.77 for predicting foot ulcers in a population of diabetic patients. But when I added 1,000 non-diabetic patients who didn’t develop ulcers, the AUC jumped to 0.93. This illustrates how broadening a patient population to include lower-risk patients can artificially inflate performance metrics, and why AUC values should be compared cautiously across datasets drawn from different populations.&lt;/p&gt;
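&lt;p&gt;As a rough sketch of the same phenomenon (the notebook has the real code), the Python below simulates it end to end. The A1c distributions and the logistic link between A1c and ulcer risk are assumptions picked to land near the AUCs quoted above, not values from the notebook.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated diabetic cohort: A1c values and foot ulcer outcomes
n_diab = 1000
a1c_diab = rng.normal(8.0, 1.5, n_diab)
# Higher A1c raises ulcer probability (assumed logistic model)
p_ulcer = 1 / (1 + np.exp(-0.8 * (a1c_diab - 8.0)))
ulcer_diab = rng.binomial(1, p_ulcer)

print("Diabetic patients only: AUC =",
      round(roc_auc_score(ulcer_diab, a1c_diab), 2))

# Add 1,000 non-diabetic patients: lower A1c, no ulcers
a1c_all = np.concatenate([a1c_diab, rng.normal(5.3, 0.4, 1000)])
ulcer_all = np.concatenate([ulcer_diab, np.zeros(1000, dtype=int)])

print("With non-diabetic patients added: AUC =",
      round(roc_auc_score(ulcer_all, a1c_all), 2))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Nothing about the biomarker or the outcome model changes between the two calls; only the case mix does, and the added patients are easy negatives that the marker separates almost perfectly.&lt;/p&gt;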
&lt;p&gt;See the full details on Google Colab: &lt;a href="https://colab.research.google.com/drive/1akAk76Fi3wyOoXptHQks8owb7rQMvBvQ?usp=sharing"&gt;link&lt;/a&gt;&lt;/p&gt;</content><category term="Teaching"/><category term="machine learning"/></entry></feed>