Welcome!
This course is an interdisciplinary practicum designed to get students with computational backgrounds and students with social science backgrounds to collaborate on building practical systems that address societal issues related to race and inequity. Readings will be drawn broadly from across the social sciences and computer science. We will discuss how to work with large, complex datasets and highlight the benefits of working directly with practitioners to tackle some of society’s most vexing social problems. Students interested in participating should complete the online application for permission at https://web.stanford.edu/class/cs329r/. Limited enrollment.
Admission to the course is by permission only. To apply, fill out the form here before September 9.
Course meeting time and place:
Tuesdays 1:30 - 4:00 pm, GSB Patterson 101
Instructors:
Jennifer L. Eberhardt
Email: jleberhardt@stanford.edu
Office Hours: Wednesday, 10 to 11 am at the GSB, Room E326, or by appointment
Dan Jurafsky
Email: jurafsky@stanford.edu
Class Office Hours: Tuesdays 4:00 – 5:00 pm, or by appointment
Teaching Partner:
Mirac Suzgun
Email: msuzgun@stanford.edu
Office Hours:
Week 1: 3 and 4 pm on Thursday, in the Crocker Garden at Stanford Law School. Remaining weeks TBD, or by appointment
Course Description:
This course is an interdisciplinary practicum designed to get students with computational backgrounds and students with social science backgrounds to collaborate on building practical language-related systems that address societal issues related to race and inequity. Readings will be drawn broadly from across the social sciences and computer science. Students will work with large, complex datasets and participate in research involving community partnerships relevant to race and natural language processing. Prerequisite: Graduate standing and instructor permission required. Limited enrollment.
Course Requirements
| Reaction Papers: | 20% of final grade |
| Class Participation: | 10% of final grade |
| Discussion Leadership: | 10% of final grade |
| Final Project: | 60% of final grade |
Due Dates:
| Discussion Question/Response: | Sundays/Mondays before 5:00 pm (each week) |
| Reaction Papers: | Mondays before 5:00 pm (two times during quarter) |
| Project Proposal: | Oct 7 midnight |
| Rough Draft of Project: | Nov 11 midnight |
| Project Presentation: | Dec 2 in class |
| Final Project Writeup: | Dec 9 midnight |
Course Topics:
| Sep 23 | Introduction (including topics, requirements, and ice breakers) |
| Sep 30 | The Transmission of Bias and the Mechanics of Inequality |
| Oct 7 | How We Police |
| Oct 14 | How We Work |
| Oct 21 | How We Teach |
| Oct 28 | How We Treat |
| Nov 4 | NO CLASS – Democracy Day |
| Nov 11 | How We Connect |
| Nov 18 | How We Advance |
| Nov 25 | NO CLASS – Thanksgiving Recess |
| Dec 2 | Presentations of Final Projects |
Preparation, Attendance, and Participation:
It is important that you attend each session and complete the readings prior to class. The discussion and interaction during class time will be an integral part of the course.
Throughout the course, we will engage with challenging questions about human behavior and society that intersect with our own lives. It is especially important that we all remain open to the expression of views that differ from our own and that we are willing to reconsider our own views. Maintaining humility about the rightness of one’s own views and appreciating the insights of others are both essential to our being able to learn together. Of course, even as we challenge each other’s ideas and arguments, we want to maintain an atmosphere of respect and collegiality.
Reaction Papers:
You will be required to write two short reaction papers during the quarter. These papers should be approximately 2 pages (double spaced). The papers can be written for any week of your choosing, as long as they are posted to Canvas the day before the class meets to discuss the topic/readings about which you have written. These papers should not be descriptive summaries of the readings. Instead, they should offer a critical analysis. For example, you may choose to discuss a problem or limitation with one of the readings (or with several of the readings) and offer a better approach or method. You may propose a specific study to conduct. You may discuss a theme that seems to cut across all of the readings. You may propose a new theoretical framework for understanding a phenomenon discussed in the readings. There are many options. Regardless of the option you choose, you should strive to lay out a coherent, well-defended argument.
Discussion Questions:
Each week you will be asked to generate a thoughtful discussion question based on the readings, and a second thoughtful discussion response to other students' questions. The question/response will be factored into your participation grade. The questions will help us to understand common points of interest. And we will use them to help guide the discussion in class. These questions should not be descriptive. Instead, they should be probing, analytical, and thought-provoking. The quality of our discussions will critically depend on your contributions in this regard. Your questions should be posted on Canvas no later than 5 PM each Sunday, and the responses on Canvas no later than 5 PM each Monday.
Discussion Leadership:
Once during the quarter, you will be asked to help lead the discussion for the week. You are expected to meet with your group partner(s) beforehand to agree on the questions and issues you will use to frame and guide the discussion. You should be prepared to point out big themes. We will rely on you to place the research in context. What is the value of the work? Why does this work matter to the fields of linguistics, computer science, psychology, organizational behavior, comparative studies, or to the public at large? Does the research have policy implications worth exploring? Bring in your own expertise on the issues to help inform the class and to push us beyond the readings.
Final Project:
The final project is a chance to apply NLP to a societal issue related to race. During the quarter, we will discuss projects that developed from partnerships with leaders in industry, schools, and police departments (with whom we have built ties over the years), as well as from repositories of social media data. The projects you propose could involve using text data to conduct analytic studies of race and inequality in partner domains, testing NLP-powered interventions such as content moderation or training, or building other practical tools. For areas where the data require IRB approval or long-term contracting, the project can be a proof-of-concept proposal, with an implementation applied to pilot or sample data. The projects can be done individually or with a partner; if with a partner, we recommend a cross-disciplinary mix of computer scientists and social scientists. A small illustrative sketch of what an early analytic step might look like appears after the milestones below. Project milestones include:
- Proposal: Based on an initial investigation of community-partner or other data, propose a high-level plan for an experimental study involving an NLP system.
- Rough Draft: A first draft of the project, including progress on a first study.
- In-Class Presentation of Projects
- Project Writeup
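To make the analytic option above concrete, here is a minimal Python sketch (purely illustrative, not a template or a requirement) of an early exploratory step: tallying per-group word rates in a collection of transcripts. The file name utterances.csv and the column names group and text are hypothetical placeholders; a real project would need validated measures, appropriate controls, and proper statistics.

```python
# Minimal, illustrative sketch of an early exploratory analysis:
# compare per-1000-word frequencies of a few words across groups.
# File and column names below are hypothetical placeholders.
from collections import Counter
import csv

def word_rates(texts):
    """Return per-1000-word frequencies for all words in a list of texts."""
    counts = Counter()
    total = 0
    for text in texts:
        words = text.lower().split()
        counts.update(words)
        total += len(words)
    if total == 0:
        return {}
    return {w: 1000 * c / total for w, c in counts.items()}

# Hypothetical input: a CSV of transcribed utterances with a group label.
groups = {}
with open("utterances.csv", newline="") as f:  # hypothetical file
    for row in csv.DictReader(f):
        groups.setdefault(row["group"], []).append(row["text"])

rates = {g: word_rates(texts) for g, texts in groups.items()}

# Print rates for a few illustrative words of interest across groups.
for word in ["sir", "thank", "please"]:
    print(word, {g: round(r.get(word, 0.0), 2) for g, r in rates.items()})
```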
Assigned Readings
Week 1: September 23
Course Introduction, Student Introductions, Ice Breakers and a Taste of What’s Ahead
Week 2: September 30
The Transmission of Bias and the Mechanics of Inequality
- Wilkerson, I. (2020). America’s enduring caste system. New York Times Magazine, July 1.
- Meltzoff, Andrew N., and Walter S. Gilliam. 2024. Young children and implicit racial biases. Daedalus 153, no. 1: 65-83.
- Apfelbaum, Evan P., Michael I. Norton, and Samuel R. Sommers. 2012. Racial color blindness: Emergence, practice, and implications. Current Directions in Psychological Science 21, no. 3: 205-209.
- Xiang, Alice. 2024. Mirror, Mirror, on the Wall, Who's the Fairest of Them All? Daedalus 153, no. 1: 250-267.
- Angelina Wang, Michelle Phan, Daniel E. Ho, and Sanmi Koyejo. 2025. Fairness Through Difference Awareness: Measuring Desired Group Discrimination in LLMs. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Long Papers). Read: pp. 6867–6875.
Week 3: October 7
How We Police
- Voigt, R., Camp, N. P., Prabhakaran, V., Hamilton, W. L., Hetey, R. C., Griffiths, C. M., Jurgens, D., Jurafsky, D., & Eberhardt, J. L. (2017). Language from police body camera footage shows racial disparities in officer respect. Proceedings of the National Academy of Sciences, 114, 6521-6526.
- Camp, N. P., Voigt, R., Hamedani, M. G., Jurafsky, D., & Eberhardt, J. L. (2024). Leveraging body-worn camera footage to assess the effects of training on officer communication during traffic stops. PNAS Nexus 3 (9).
- Eugenia H. Rho, Maggie Harrington, Yuyang Zhong, Reid Pryzant, Nicholas P. Camp, Dan Jurafsky, and Jennifer L. Eberhardt. 2023. Escalated police stops of Black men are linguistically and psychologically distinct in their earliest moments. Proceedings of the National Academy of Sciences 120 (23).
- Also check out this database: https://clean.calmatters.org/
Week 4: October 14
How We Work
- Haozhe An, Christabel Acquaye, Colin Wang, Zongxia Li, and Rachel Rudinger. 2024. Do Large Language Models Discriminate in Hiring Decisions on the Basis of Race, Ethnicity, and Gender? Proceedings of ACL 2024, 386–397.
- Rishi Bommasani, Sarah H. Bana, Kathleen A. Creel, Connor Toups, Dan Jurafsky, and Percy Liang. 2025. Hiring Algorithms in Practice: Bias and Homogeneity. Manuscript under review.
- Amit Haim, Alejandro Salinas, and Julian Nyarko. 2024. What's in a Name? Auditing Large Language Models for Race and Gender Bias. Manuscript.
- Lyons-Padilla, S., Markus, H. R., Monk, A., Radhakrishna, S., Shah, R., Dodson, N. A., & Eberhardt, J. L. (2019). Race influences professional investors’ financial judgments. Proceedings of the National Academy of Sciences, 116, 17225-17230.
Week 5: October 21
How We Teach
- Walton, G., Okonofua, J.A., Cunningham, K. R., Hurst, D., Pinedo, A., Weitz, E., Ospina, J. P., Tate, H., Eberhardt, J. L. 2021. Lifting the bar: A relationship-orienting intervention reduces recidivism among children reentering school from juvenile detention. Psychological Science, 32 (11), 1747-1767.
- Darling-Hammond, S., Ruiz, M., Eberhardt, J.L., & Okonofua, J. A. 2023. The dynamic nature of student discipline and discipline disparities. Proceedings of the National Academy of Sciences.
- Markowitz, D. M., Kittelman, A., Girvan, E. J., Santiago-Rosario, M. R., & McIntosh, K. 2023. Taking Note of Our Biases: How Language Patterns Reveal Bias Underlying the Use of Office Discipline Referrals in Exclusionary Discipline. Educational Researcher, 52(9), 525–534. https://doi.org/10.3102/0013189X231189444
- Mei Tan and Dorottya Demszky. 2025. Sit Down Now: How Teachers’ Language Reveals the Dynamics of Classroom Management Practices. Working Paper.
Week 6: October 28
How We Treat
- Yang, Yifan, Xiaoyu Liu, Qiao Jin, Furong Huang, and Zhiyong Lu. 2024. Unmasking and quantifying racial bias of large language models in medical report generation. Communications Medicine 4: 176.
- Travis Zack, Eric Lehman, Mirac Suzgun, Jorge A Rodriguez, Leo Anthony Celi, Judy Gichoya, Dan Jurafsky, Peter Szolovits, David W Bates, Raja-Elie E Abdulnour, Atul Butte, and Emily Alsentzer. 2024. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study. The Lancet Digital Health, 6:1, e12-e22.
- Omar, M., Soffer, S., Agbareia, R., Bragazzi, N.L., Apakama, D.U., Horowitz, C.R., Charney, A.W., Freeman, R., Kummer, B., Glicksberg, B.S., Nadkarni, G.N., and Klang, E. 2025. Sociodemographic biases in medical decision making by large language models. Nature Medicine, 1-9.
- Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, and Keyon Vafa. 2025. Using large language models to promote health equity. NEJM AI.
Week 7: November 4 (NO CLASS – Democracy Day)
Week 8: November 11
How We Connect
- Lee, C., Gligorić, K., Kalluri, P. R., Harrington, M., Durmus, E., Sanchez, K. L., San, Y., Tes, D., Zhao, X., Hamedani, M. G., Markus, H. R., Jurafsky, D., and Eberhardt, J. L. (2024). People who share encounters with racism are silenced online by humans and machines, but a guideline-reframing intervention holds promise. Proceedings of the National Academy of Sciences.
- Shaikh, Omar, Valentino Emil Chai, Michele Gelfand, Diyi Yang, and Michael S. Bernstein. 2024. Rehearsal: Simulating conflict to teach conflict resolution. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1-20.
- Rachel Wetts and Robb Willer. 2025. Antiracism and its Discontents: Opposition to Antiracism is a Widespread and Politically Influential Racial Ideology among White Americans. SocArXiv.
- Ashwini Ashokkumar, Luke Hewitt, Isaias Ghezae, and Robb Willer. 2025. Predicting Results of Social Science Experiments Using Large Language Models. Manuscript.
Week 9: November 18
How We Advance
- Reddan, M.C., Garcia, S., Golarai, G., Eberhardt, J. L., Zaki, J. (2024). A film intervention increases understanding of formerly incarcerated people and support for criminal justice reform. Proceedings of the National Academy of Sciences.
- Demszky, Dorottya, C. Lee Williams, Shannon T. Brady, Shashanka Subrahmanya, Eric Gaudiello, Gregory M. Walton, and Johannes C. Eichstaedt. (2024). Computational Language Analysis Reveals that Process-Oriented Thinking About Belonging Aids the College Transition. (EdWorkingPaper: 24-1033). Retrieved from Annenberg Institute at Brown University. https://doi.org/10.26300/5mm8-7m81.
- Argyle, Lisa P., Christopher A. Bail, Ethan C. Busby, Joshua R. Gubler, Thomas Howe, Christopher Rytting, Taylor Sorensen, and David Wingate. 2023. Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale. Proceedings of the National Academy of Sciences 120, no. 41: e2311627120.
- Faiz Surani∗, Mirac Suzgun∗, Vyoma Raman, Christopher D. Manning, Peter Henderson, and Daniel E. Ho. 2024. AI for Scaling Legal Reform: Mapping and Redacting Racial Covenants in Santa Clara County. Draft.

