As we increasingly integrate AI into our lives, addressing the challenges that arise requires both technical expertise and governance strategies. This new course empowers students to navigate the complex intersection of technology and policy, equipping them with the tools to understand and shape the future of AI governance. Designed for students from all backgrounds, the course explores AI governance at the organizational, national, and international levels. Through in-depth analysis of current frameworks and mechanisms, students will assess how governance relies on technical measures and examine their feasibility within today's technological landscape.
By the end of the class, students will have the knowledge to critically engage with AI governance issues and to contribute meaningfully to the development of governance strategies in their future careers, whether in policy, corporate management, or technical development.
Anka Reuel serves as one of the vice chairs for the EU's first General-Purpose AI Code of Practice which implements the EU AI Act and specifies concrete obligations for foundation model providers to show compliance with the AI Act. She is also a Computer Science Ph.D. Candidate at Stanford University and a Technology and Geopolitics Fellow at the Belfer Center at Harvard Kennedy School. Anka conducts technical AI governance research at the Stanford Trustworthy AI Research Lab and the Stanford Intelligent Systems Laboratory. She is also the lead writer for the AI Chapter of the 2024 Stanford Emerging Technology Review and the Lead Researcher for the Responsible AI Chapter of Stanford's AI Index. She holds masters degrees from the University of Pennsylvania and the London School of Economics.
Max Lamparth is a postdoctoral fellow with the Stanford Center for AI Safety, the Center for International Security and Cooperation, and conducts research on interpretability and robustness of AI systems in Clark Barrett's group at the CS Department. With his research, he wants to make AI systems inherently more secure and safe, provide critical insights to inform and guide effective AI policies, and shape public discourse. Besides scientific publications in technical and socio-technical conferences, Max also authored op-eds for, e.g., Foreign Affairs, and is the creator of and instructor for CS 120: Introduction to AI Safety at Stanford. Max holds a Ph.D. from the Technical University of Munich and a B.Sc. and M.Sc. from the Ruprecht Karl University of Heidelberg.
Sanmi Koyejo is an Assistant Professor in the Department of Computer Science at Stanford University. Sanmi was previously an Associate Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. His research interests are in developing the principles and practice of trustworthy machine learning and how to apply these findings to inform better policymaking. Sanmi's work won multiple awards, including a NeurIPS 2023 best paper awards for his team's work on whether emergent abilities are a mirage. This work has subsequently strongly influenced the political discourse on emergent abilities.
Paul Edwards is the director of the Program in Science, Technology & Science (STS) and Senior Research Scholar at CISAC, as well as Professor of Information and History at the University of Michigan. At Stanford, his teaching includes courses in the Ford Dorsey Program in International Policy Studies and the Program in Science, Technology & Society. His research focuses on the history, politics, and culture of knowledge and information infrastructures.
| Week | Date | Lecturer | Topic |
|---|---|---|---|
| Week 0 | 01/10/25 | Max | A (Brief) Introduction to AI/ML |
| Week 1 | 01/10/25 | Anka/Max | What Is AI Governance And Why Do We Need It? |
| Week 2 | 01/17/25 | Anka | Balancing the Need for Data with Transparency and Copyright Considerations |
| Week 3 | 01/24/25 | Anka | What Makes Good Evaluations and Why Are They Important? |
| Week 4 | 01/31/25 | Anka | DeepSeek: An AI Governance Case Study |
| Week 5 | 02/07/25 | Max | Jailbreaks, Adversarial Attacks, and Red Teaming |
| Week 6 | 02/14/25 | Max | Open- vs. Closed-Source Models |
| Week 7 | 02/21/25 | Anka | The US, the EU's, and China's Way of Governing AI |
| Week 8 | 02/28/25 | Anka | Existing and Proposed International AI Governance Frameworks |
| Week 9 | 03/07/25 | Max | Implicit Values in AI Systems: Global Perspectives on Ethics, Safety, and Governance |
| Week 10 | 03/14/25 | Anka/Max | The Future of AI and AI Governance |
This form is completely anonymous and a way for you to share your thoughts, concerns and ideas with the STS 14/CS 134 teaching team.
You are welcome to audit the class! Please reach out to Anka or Max if you want to audit the class to ensure we do not reach the capacity of the classroom.
Please note that auditing is only allowed for matriculated undergraduates, matriculated graduate/professional students, postdoctoral scholars, visiting scholars, Stanford faculty, and Stanford staff. After checking with us, please fill out this form and submit it. Non-Stanford students cannot audit the course. The current Stanford auditing policy is stated here.
Also, if you are auditing the class, please note that audited courses are not recorded on an academic transcript and no official records are maintained for auditors. There will not be any record that they audited the course.
Violating the Honor Code is a serious offense, even when the violation is unintentional. The Honor Code is available here. Students are responsible for understanding the University rules regarding academic integrity. In brief, conduct prohibited by the Honor Code includes all forms of academic dishonesty including and representing as one's own work the work of another. If students have any questions about these matters, they should contact Anka or Max.
This class provides a setting where individuals of all visible and nonvisible differences- including but not limited to race, ethnicity, national origin, cultural identity, gender, gender identity, gender expression, sexual orientation, physical ability, body type, socioeconomic status, veteran status, age, and religious, philosophical, and political perspectives-are welcome. Each member of this learning community is expected to contribute to creating and maintaining a respectful, inclusive environment for all the other members. If students have any concerns please reach out to Professor Koyejo.
Students who need an academic accommodation based on the impact of a disability must initiate the request with the Office of Accessible Education (OAE). Professional staff will evaluate the request with required documentation, recommend reasonable accommodations, and prepare an Accommodation Letter for faculty dated in the current quarter in which the request is being made. Students should contact the OAE as soon as possible since timely notice is needed to coordinate accommodations. The OAE is located at 563 Salvatierra Walk (phone: 723-1066, URL: http://oae.stanford.edu).
Each week, students are expected to do the required readings and submit a short paper (if they opted for the short-paper option). Towards the end, students can submit a final long paper if they chose the final paper option. Final long papers can range from running technical experiments to writing research papers on AI-governance-related topics to accommodate for different backgrounds. The grading breakdown is:
[Updated grading table after grading final long papers in favor of all students.]
| Letter Grade | Percentage |
|---|---|
| A | 86-100% |
| A- | 83-85% |
| B+ | 80-82% |
| B | 76-79% |
| B- | 73-75% |
| C+ | 69-72% |
| Letter Grade | Percentage |
|---|---|
| C | 66-68% |
| C- | 63-65% |
| D+ | 59-62% |
| D | 56-58% |
| D- | 52-55% |
| F | 0-51% |
This course offered for either a letter or credit/no credit grade. If taken for credit/no credit, credit will be given to students who score a C- or higher (at least 70% in the course). We will use the standard breakdowns in the table above. We will round fractional percentages in your favor. Every year, a few students are awarded an A+ after careful consideration for demonstrating mastery beyond what is expected in this class at the discretion of the course staff; it is not determined solely based on percentage.
You'll have a choice in this class to either write a weekly two-page short paper or a final long paper. Your final assignment grade will be determined either by the average score of your short papers OR the grade on your long final paper, whichever is higher. This means that you can choose to only write a final long paper at the end of the course and no short papers, or only short papers and no final long paper. However, we recommend that you submit short papers throughout the course to receive feedback over time.
For Lectures 2 to 9, you will submit a 2-page short paper for each lecture.
Each paper must:Note: The 2-page limit will be strictly enforced. Points will be deducted if you exceed the limit. Being concise is a key skill in governance contexts, and this exercise is designed to help you develop that skill.
Instead of writing a series of short paper, you can also opt to write one final long paper. You can change your decision throughout the quarter without informing us, i.e., even if you have already submitted two short papers, you can still opt to write the final long paper. We will take the higher of both grades as your final assignment grade.
The first two pages should include:
It is not sufficient to only cite papers from the curriculum. You are expected to explore further related work. A good starting point could be to examine the references in a lecture paper or look up which works cite that paper online. Your final 2 to 3 pages should include:
The middle section (pages 3 onward) will depend on the nature of your project, in particular, whether you do a technical or non-technical paper. We encourage you to study different papers from the reading list to get a better feel for how they approach their topics.
For project ideas, you can also study recent publications from different conferences and workshops:
We do not expect you to write a final long paper on par with any of these publications but they should be your North Star. If you are unsure about the appropriate project scope, but have a topic in mind, we can discuss details after class or in office hours. We will further give some more details about final projects in week 5 or 6.
All students get 7 late days at the start of the course. [We increased the number of late days from 6 to 7 during the quarter.]
All classes have mandatory attendance.
The rest of this document contains the schedule of assigned and optional readings for each week. Course slides and lecture recordings (if available) will be linked below, though we do not guarantee that all lectures will be recorded, especially with guest speakers.
Readings can be subject to change throughout the course, but will not change more than 14 days in advance. Please check the curriculum here rather than a printed or duplicated copy for the most up-to-date content.
| Week | Date | Lecturer | Topic |
|---|---|---|---|
| Week 0 | Pre-recorded | Max | [Optional] A (Brief) Introduction to AI/ML |
|
Summary: We'll provide one optional pre-recorded lecture on key technical terms that all students should understand, including but not limited to: stochastic gradient descent, regression, classification, fine-tuning, neural networks, deep learning, foundation models, chain-of-thought reasoning, in-context-learning, zero-shot learning, and fine-tuning. This lecture will be strongly recommended for students without an AI/ML background. Lecture (from CS120) Slides + Recording Readings (Required):
|
|||
| Week 1 | 01/10/25 | Anka/Max | What Is AI Governance And Why Do We Need It? |
|
Summary: The first week will be about introducing AI governance, what it is, and why we need it. We will cover the risks and potential benefits of AI and why we (also) need AI governance to manage them. We’ll further introduce the structure of this course, based on three AI governance levels (organizational, national, and international) and introduce the difference between technical and non-technical AI governance. Lecture Slides Readings (Required):
|
|||
| Week 2 | 01/17/25 | Max | Balancing the Need for Data with Transparency and Copyright Considerations |
|
Summary: This week, we will explore disputes like the one between OpenAI and The New York Times, examining nuanced issues such as output similarity as copyright violation versus the use of copyrighted data during training, and potential implications of the outcome of such processes for model developers. We will also cover scaling laws that seemingly dictate the need for large volumes of data to enhance model performance under the current paradigm, creating a friction between technological advancement and ethical/legal obligations. This session will also touch on the broader issue of transparency—a key concern for organizations—not just in data usage but extending to all facets of AI development and deployment. Lecture Slides Readings (Required):
|
Week 3 | 01/24/25 | Anka | What Makes Good Evaluations? And Why Do We Need Them? |
|
Summary: In this lecture, we will address the question of what makes AI evaluations effective and reliable. We will begin by examining the issues in current evaluation practices, where reproducibility, lack of validity, and missing statistical significance hinder meaningful comparisons and practical utility. We will further explore the role evaluations play in identifying risks, their importance in guiding governance efforts, and the downstream consequences of the challenges posed by inadequate evaluations. Drawing on recent research, we will discuss frameworks like the "dimensions of evaluation design," which highlight key considerations such as task type, metrics, and duration, and their relevance across diverse evaluation contexts. We will critique existing benchmarking practices, emphasizing the urgent need for statistically valid, interpretable, and reproducible evaluations. Finally, we will address structural and policy challenges, including the necessity of "safe harbor" provisions to enable independent, good-faith assessments free from corporate or legal constraints. Lecture Slides Readings (Required):
|
Week 4 | 01/31/25 | Anka | DeepSeek: An AI Governance Case Study |
|
Lecture Slides Readings (Required):
|
|||
| Week 5 | 02/07/25 | Max | Jailbreaks, Adversarial Attacks, and Red Teaming |
|
Summary: This lecture addresses the vulnerabilities of LLMs to jailbreaks and adversarial attacks, highlighting two primary failure modes: competing objectives between safety and functionality, and mismatched generalization to unseen inputs. We will explore strategies like multi-layered audits (governance, model, and application levels) to identify risks and reinforce accountability, alongside robust red-teaming practices to simulate adversarial scenarios. Emphasis will be placed on aligning safety mechanisms with model capabilities to foster transparency through standardized disclosures, and to understand what practical frameworks can help organizations to proactively mitigate risks at both system and organizational levels. Lecture Slides
|
|||
| Week 6 | 02/14/25 | Max | Open- vs. Closed-Source Models |
|
Summary: The debate between open- and closed-source AI models presents significant implications for governance, innovation, and societal impact. This week, we explore the risks and benefits associated with open-sourcing models and the challenges it poses for effective national AI governance. While open-source models promote transparency, collaboration, and accelerated technological advancement, they also raise concerns about misuse, security vulnerabilities, and difficulty in enforcing regulations. Conversely, closed-source models limit access and are often more opaque and difficult to scrutinize, which can equally hinder national governance interventions and oversight. We will specifically look into how access restrictions impact regulatory frameworks and examine the debate surrounding California's SB1047 bill and its potential effects on the open-source community. Lecture Slides Readings (Required):
|
|||
| Week 7 | 02/21/25 | Anka | The US, the EU's, and China's Way of Governing AI |
|
Summary: We will explore the vertical versus horizontal approaches to national AI regulation, comparing how each region balances innovation with ethical considerations and risk management. Key legislative efforts such as the EU AI Act, the US Executive Order on AI, and China's Administrative Measures for Generative AI Services will be analyzed to understand their objectives, their obligations for developers, and resulting implications. In addition, one discussion will focus on the role of technical standards in operationalizing high-level regulatory requirements, where we'll also explore the Code of Practice process of the EU AI Act and how multi-stakeholder input and in particular the challenge of leveraging industry expertise in drafting national AI regulations while avoiding regulatory capture is navigated. Lecture Slides Readings (Required):
|
|||
| Week 8 | 02/28/25 | Anka | Existing and Proposed International AI Governance Frameworks |
|
Summary: This lecture examines existing and proposed international frameworks for governing AI. It covers current initiatives such as UNESCO's Ethical AI Recommendation, the Global Partnership on AI, the UN High-level AI Advisory Body, and the Hiroshima AI Process. The lecture also analyzes the similarities and differences between AI and past emerging technologies to assess whether governance structures for previous new technologies like aviation or nuclear energy can serve as models for AI governance. We will discuss in detail jurisdictional certification as one of the first concretely suggested approaches to international AI governance, as well as proposals for an international 'CERN for AI.' Finally, we will analyze whether these frameworks are concrete enough for developers and identify areas that are underspecified from a technical perspective. Lecture Slides Readings (Required):
|
|||
| Week 9 | 03/07/25 | Max | Implicit Values in AI Systems: Global Perspectives on Ethics, Safety, and Governance |
|
Summary: Drawing on recent research, we will examine how AI systems often overfit to Western norms and values, marginalizing global and local perspectives in safety and ethical considerations. Key topics include the use of cross-national surveys and multilingual datasets to capture diverse opinions, technical approaches to mitigate such biases, and strategies for ensuring equitable representation in AI governance and policymaking. This lecture will further explore which areas of AI governance require global coordination and critically assess to what extent global frameworks could –or should– address concerns related to overly Western-centric language models. Lecture Slides Readings (Required):
|
|||
| Week 10 | 03/14/25 | Anka/Max | The Future of AI and AI Governance |
|
Summary: This lecture explores the future of AI and its governance, focusing on technical developments that could influence both AI capabilities and regulatory frameworks. We will examine arguments for and against the idea that AI poses an existential risk to humanity and consider what technological advancements might be required to achieve AGI. The session will cover anticipated technical progress in areas like autonomous agents, spatial intelligence, reasoning, and methods to counteract hallucinations and make models more robust, assessing how these developments may challenge or benefit existing governance approaches. Finally, we will discuss the implications of these advancements for AI governance frameworks and structures, considering how they might need to be adapted to address future technological scenarios. Lecture Slides Readings (Required):
|
|||