Week | Monday | Wednesday | Friday (or Lab Th/Fr) |
---|---|---|---|
1 | April 1: 1. Welcome to CS110 / Intro to Filesystems. Slides, [PDF], Screencast. Handout: CS110 Course Information. Handout: CS110 Course Syllabus. Reading: Skim S & K Chapter 2. Introductions, course administration, and expectations. We will introduce Unix file systems today as well. |
April 3: 2. File Systems, APIs, and System Calls. Slides, [PDF], Screencast. Code (lambdas). Code (file systems). Assignment 1 released. We need to invest some time learning about the C and UNIX/Linux libraries that offer us programmatic access to the file system. A few data structures and a collection of Linux library functions allow us to crawl over the tree of files and directories, and even more functions grant us raw, low-level access to file contents. We'll invest some time discussing these functions and data structures—enough that we can implement a collection of obvious and not-so-obvious programs that emulate some of the terminal/shell builtins you've been using for your entire UNIX lives. |
April 5: 3. More on Filesystem APIs. Slides, [PDF], Screencast. Reading: S & K Chapter 2. We'll work through a few more UNIX filesystem API examples before advancing to a discussion of how the raw hardware of a disk is made to look like the filesystem you've taken for granted for the entirety of your UNIX and programming careers. My hope is that by the end of the day you'll have gained a familiarity with the filesystem APIs and begun to understand how filesystems work under the hood. |
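As a quick illustration of the raw filesystem calls these lectures cover, here is a minimal sketch (our own, not from the course handouts; the helper name `countFileBytes` is made up) that tallies a file's size using only `open`, `read`, and `close`:

```cpp
#include <fcntl.h>    // open
#include <unistd.h>   // read, close

// Returns the number of bytes in the named file, or -1 on error.
// Note that read may legally return fewer bytes than requested,
// so we loop until it reports end-of-file.
long countFileBytes(const char *path) {
  int fd = open(path, O_RDONLY);
  if (fd == -1) return -1;
  long total = 0;
  char buffer[1024];
  while (true) {
    ssize_t bytes = read(fd, buffer, sizeof(buffer));
    if (bytes == 0) break;                    // end of file
    if (bytes == -1) { close(fd); return -1; }
    total += bytes;
  }
  close(fd);
  return total;
}
```

The loop-until-zero pattern is the key habit here: never assume a single `read` drains the file.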
|
2 | April 8: 4. Filesystem Data Structures and System Calls. Slides, [PDF], Screencast. Reading: Begin Bryant & O'Hallaron, Chapters 2 and 1 (in that order); Chapters 2 and 1 correspond to Chapters 10 and 8 of the full textbook. Code. We need to work through the rest of our discussion of how a filesystem is layered over raw hardware. We started to explain how basic files—be they text files, audio files, executable images, etc.—are segmented into blocks and distributed across free sectors of the hardware. We need to advance that discussion to make clear how the filesystem supports the notion of a directory, and how an absolute path name like /usr/class/cs110/WWW/index.html can be parsed and translated to an inode number, which is the location within the inode table that stores metainformation about that file and where all of its payload blocks live. We'll then discuss a few things as a way of transitioning to multiprocessing: how open file sessions are maintained by the OS on a per-process basis, and how system calls differ from traditional function calls. And time permitting, we'll introduce a new system call that's used to spawn new processes. |
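The path-to-inode translation described above can be observed directly from user space: `stat` reports the inode number the kernel resolved a path to. This tiny helper is our own illustration, not course code:

```cpp
#include <sys/stat.h>

// Returns the inode number the kernel associates with the given path,
// or -1 if the path can't be resolved. Two paths naming the same file
// (e.g. "/" and "/.") report the same inode number.
long inodeOf(const char *path) {
  struct stat st;
  if (stat(path, &st) == -1) return -1;
  return (long)st.st_ino;
}
```

Running this over each prefix of an absolute path (`/usr`, `/usr/class`, ...) traces exactly the directory-by-directory lookup the lecture describes.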
April 10: 5. Understanding execvp. Slides, [PDF], Screencast. Reading: Continue Bryant & O'Hallaron, Chapters 2 and 1 (in that order); Chapters 2 and 1 correspond to Chapters 10 and 8 of the full textbook. Code. Assignment 2 released. This past Monday we introduced fork as the system call that spawns new processes; today we'll focus on execvp, the system call that reboots a process to execute the main function of a new executable. |
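The fork-then-execvp pairing at the heart of this lecture can be sketched in a handful of lines. The helper below is a hypothetical illustration (the name `runCommand` is ours, not the course's): the child replaces its image via `execvp`, and the parent blocks in `waitpid` until the child finishes:

```cpp
#include <unistd.h>    // fork, execvp, _exit
#include <sys/wait.h>  // waitpid, WIFEXITED, WEXITSTATUS

// Spawns argv[0] as a child process and returns its exit status,
// or -1 if fork/waitpid fails or the child died abnormally.
int runCommand(char *argv[]) {
  pid_t pid = fork();
  if (pid == -1) return -1;
  if (pid == 0) {          // child: replace this image with a new executable
    execvp(argv[0], argv);
    _exit(127);            // only reached if execvp itself fails
  }
  int status;              // parent: wait for the child to finish
  if (waitpid(pid, &status, 0) == -1) return -1;
  return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Note the `_exit(127)` after `execvp`: on success `execvp` never returns, so any code after it runs only on failure.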
April 11/12: Lab / Discussion 01. File System Experimentation. Lab 01: File Systems and System Calls (solution). Your first lab will have you discuss the various ways you can exploit a file system design to implement other functions. You'll also get a chance to play with the relevant system calls. |
|
3 | April 15: 6. execvp, Pipes, Interprocess Communication. Slides, [PDF], Screencast. Reading: Continue Bryant & O'Hallaron, Chapters 2 and 1 (in that order); Chapters 2 and 1 correspond to Chapters 10 and 8 of the full textbook. Code. Last time, we introduced execvp as the one system call that reboots a process to execute the main function of a new executable. We need to advance that discussion today by working through the implementation of a tiny shell, much like the one you're using whenever you're using Unix. We'll then introduce the notion of a pipe as a way to set up communication channels between multiple processes.
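The pipe-as-communication-channel idea can be demonstrated in miniature. The sketch below is our own (the helper name `messageFromChild` is made up): the child writes a message into one end of a pipe, and the parent reads it out of the other:

```cpp
#include <unistd.h>    // pipe, fork, read, write, close
#include <sys/wait.h>  // waitpid
#include <string>

// The child sends a short message (under the pipe buffer size) to the
// parent over a pipe; returns whatever text the parent read, or "" on error.
std::string messageFromChild(const std::string& message) {
  int fds[2];
  if (pipe(fds) == -1) return "";
  pid_t pid = fork();
  if (pid == 0) {                  // child: keep only the write end
    close(fds[0]);
    write(fds[1], message.c_str(), message.size());
    close(fds[1]);
    _exit(0);
  }
  close(fds[1]);                   // parent: keep only the read end
  char buffer[256];
  ssize_t bytes = read(fds[0], buffer, sizeof(buffer));
  close(fds[0]);
  waitpid(pid, nullptr, 0);
  return bytes > 0 ? std::string(buffer, bytes) : "";
}
```

Closing the unused end in each process is essential: a reader only sees end-of-file once every write end is closed.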
|
April 17: 7. Signals, Signal Handlers, and Signal Blocks. Slides, [PDF], Screencast. Reading: Continue Bryant & O'Hallaron, Chapters 2 and 1 (in that order); Chapters 2 and 1 correspond to Chapters 10 and 8 of the full textbook. Code. We need to discuss signal masks, signal blocking, and how they can be used properly to avoid concurrency issues. We'll start with a contrived example where code is technically capable of executing in an order we don't intend, and we'll use signal masks and blocking to fix the problem. We'll then work through a series of improvements to the simplesh program that we wrote last time.
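The core mechanic here is that a blocked signal is not lost, merely held pending until the mask is lifted. This self-contained demo (ours, not course code) blocks SIGUSR1, raises it inside a pretend critical region, and confirms the handler only runs after unblocking:

```cpp
#include <csignal>  // signal, raise, sigprocmask, sigset_t

static volatile sig_atomic_t deliveries = 0;
static void onSignal(int) { deliveries++; }

// Returns true iff the signal stayed pending inside the "critical region"
// and was delivered exactly once when we restored the old mask.
bool blockedSignalDemo() {
  signal(SIGUSR1, onSignal);
  sigset_t mask, old;
  sigemptyset(&mask);
  sigaddset(&mask, SIGUSR1);
  sigprocmask(SIG_BLOCK, &mask, &old);      // enter critical region
  raise(SIGUSR1);                           // handler must NOT run yet
  int duringRegion = deliveries;
  sigprocmask(SIG_SETMASK, &old, nullptr);  // pending signal delivered here
  return duringRegion == 0 && deliveries == 1;
}
```

This is exactly the discipline a shell needs around its job list: block SIGCHLD while mutating shared state, then unblock and let the pending handler fire.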
|
April 18/19: (Thursday) Assignment 3 released (Overview Video). Lab / Discussion 02: Multiprocessing Experimentation. Lab 02 Solution. This week we'll investigate the use of multiprocessing primitives. The lab handout includes a collection of laptop exercises to elevate your understanding of them. |
|
4 | April 22: 8. Race Conditions, Deadlock, and Data Integrity. Slides, [PDF], Screencast. Reading: Continue Bryant & O'Hallaron, Chapters 2 and 1 (in that order); Chapters 2 and 1 correspond to Chapters 10 and 8 of the full textbook. Code. We will discuss some subtle yet interesting race conditions that can lead to a state called deadlock, where a process seemingly freezes. We will also talk about how to properly use signal handlers to clean up all children. |
April 24: 9. Introduction to Threads. Slides, [PDF], Screencast. Reading: Begin Bryant & O'Hallaron, Chapter 4, skipping section 4.2; Chapter 4 corresponds to Chapter 12 of the full textbook. Code. We will spend some time explaining the various OS components that allow multiple processes to be (seemingly) running at the same time, even when there's just one CPU. I also want to discuss how each process can operate as if it owns all of memory, even though it clearly doesn't. We'll be discussing virtual memory and the process scheduler to advance our understanding of how these things might work. We'll then move on to our discussion of multithreading and begin to work through the lecture examples I've provided today. |
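As a first taste of the threading material, here is a minimal sketch (ours, not a lecture example) in which several threads hammer a shared counter; `std::atomic` sidesteps the race a plain `int` would suffer:

```cpp
#include <thread>
#include <vector>
#include <atomic>

// Launches numThreads workers, each incrementing a shared atomic counter
// incrementsPerThread times, and returns the final total after joining all.
int parallelCount(int numThreads, int incrementsPerThread) {
  std::atomic<int> counter(0);
  std::vector<std::thread> threads;
  for (int i = 0; i < numThreads; i++)
    threads.emplace_back([&] {
      for (int j = 0; j < incrementsPerThread; j++) counter++;
    });
  for (std::thread& t : threads) t.join();  // wait for all workers
  return counter;
}
```

Swapping the atomic for a plain `int` and rerunning is a classic way to see a data race lose increments in practice.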
April 25/26: Lab / Discussion 03: Parallel Programming. Lab 03: Parallel Programming. Lab 03 Solution. We'd like you to spend the 80 minutes studying a parallel implementation of mergesort so you understand its architecture and ultimately believe that it works. Time permitting, I'd like you to work through a collection of short-answer questions to solidify your understanding of virtual memory and process scheduling. |
|
5 | April 29: 10. From C Threads to C++ Threads. Slides, [PDF], Screencast. Reading: Continue Bryant & O'Hallaron, Chapter 4, skipping section 4.2; Chapter 4 corresponds to Chapter 12 of the full textbook. Code. (Sunday, Apr. 28) Assignment 4 released. We started in on threading last time, and we need to continue by working through a collection of introductory examples that illustrate how threads work and the types of concurrency issues that sometimes present themselves because multiple threads are running within a process at the same time. We'll do precisely this by working through a collection of examples in C, and then transition over to C++, which in my opinion provides much better support for threading and concurrency than pure C does. |
May 1: 11. Threads, Condition Variables, Semaphores. Slides, [PDF], Screencast. Reading: Continue Bryant & O'Hallaron, Chapter 4, skipping section 4.2; Chapter 4 corresponds to Chapter 12 of the full textbook. Code. We need to finish up the ticket agents example from this past Monday and understand how the mutex can be used to mark the boundaries of critical regions so that at most one thread is within one at any moment. We'll then move on to a new example—the dining philosophers simulation—to illustrate how a system can arrive at deadlock and forever starve all threads of CPU time. That'll be our opportunity to introduce the condition variable and the semaphore as a means of preventing deadlock from ever happening. |
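A mutex marking a critical region can be sketched in the spirit of the ticket-agents example; the code below is our own simplified version (names and structure are assumptions, not the lecture's): agents pull from a shared pool, and the lock guarantees at most one thread manipulates the pool at a time:

```cpp
#include <thread>
#include <vector>
#include <mutex>

// numAgents threads cooperatively sell numTickets tickets; the mutex
// protects the shared pool so no ticket is sold twice or lost.
int sellAllTickets(int numAgents, int numTickets) {
  int remaining = numTickets;
  int sold = 0;
  std::mutex poolLock;
  std::vector<std::thread> agents;
  for (int i = 0; i < numAgents; i++)
    agents.emplace_back([&] {
      while (true) {
        std::lock_guard<std::mutex> lg(poolLock);  // critical region
        if (remaining == 0) return;                // pool exhausted
        remaining--;
        sold++;
      }
    });
  for (std::thread& a : agents) a.join();
  return sold;
}
```

Without the lock, two agents could both observe `remaining == 1` and decrement it twice, exactly the race this lecture dissects.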
May 2: Midterm Exam. Thur. May 2nd, 6:00pm-8:00pm. Location: TBA. (No labs this week) |
|
6 | May 6: 12. More on Multithreading, CVs, and Semaphores (and some Networking). Slides, [PDF], Screencast. Reading: Finish Bryant & O'Hallaron, Chapter 4, skipping section 4.2; Chapter 4 corresponds to Chapter 12 of the full textbook. Code. Multithreading and Synchronization Redux. We'll continue our discussion of condition variables and semaphores and, time permitting, begin our discussion of networking. |
May 8: 13. Ice Cream Store Simulation. Slides, [PDF], Screencast. Reading: Finish Bryant & O'Hallaron, Chapter 4, skipping section 4.2; Chapter 4 corresponds to Chapter 12 of the full textbook. Code. Assignment 5 released. We need to finish up the final implementation of the myth-buster example from this past Monday before moving on to one huge multithreaded simulation that brings in every single concurrency pattern and relies on virtually every concurrency directive we've learned. This final example is so large that it comes with its very own handout. |
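Simulations like this lean heavily on a counting semaphore, which standard C++ (pre-C++20) lacks. Below is a minimal semaphore built from a mutex and a condition variable; the class and method names are our assumptions, echoing the spirit (not the text) of the one used in the course:

```cpp
#include <mutex>
#include <condition_variable>

// A counting semaphore: wait() blocks until the count is positive, then
// decrements it; signal() increments the count and wakes one waiter.
class Semaphore {
 public:
  explicit Semaphore(int count = 0) : count(count) {}
  void signal() {
    std::lock_guard<std::mutex> lg(m);
    count++;
    cv.notify_one();                          // wake one waiter, if any
  }
  void wait() {
    std::unique_lock<std::mutex> ul(m);
    cv.wait(ul, [this] { return count > 0; });  // guards against spurious wakeups
    count--;
  }
 private:
  int count;
  std::mutex m;
  std::condition_variable cv;
};
```

The predicate lambda passed to `cv.wait` is the crucial detail: it re-checks the condition after every wakeup, so spurious or stolen wakeups can't corrupt the count.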
May 9/10: Lab / Discussion 04: assign3 Redux, Threads. Lab 04: assign3 Redux, Threads. Lab 04 Solution. We'd like to spend this week's lab reviewing everything about Assignment 3 that made it challenging. By now, you're well aware of how difficult multiprocessing and concurrency can be, but it's not unusual for students to arrive at working Assignment 3 solutions even though they didn't quite understand everything they wrote. Problem 1 has you revisit those programs and work through why they needed to be written the way they were. The remaining three questions all deal with multithreading. We think all three are instructive, but it's probably more important to work through the short-answer questions of Problems 3 and 4. Problem 2 is a coding question, and that can be managed independently, outside of lab time. |
|
7 | May 13: 14. Introduction to Networking. Slides, [PDF], Screencast. Reading: Begin Bryant & O'Hallaron, Chapter 3 and Section 4.2; Chapter 3 of the reader corresponds to Chapter 11 of the full textbook. Code. After a bit of multithreading wrap-up, we'll move on to formally introduce networking, the client-server idiom, and request-response, and argue that a network connection is little more than a bidirectional pipe between processes on different machines. |
May 15: 15. Networking, Clients. Slides, [PDF], Screencast. Reading: Continue Bryant & O'Hallaron, Chapter 3 and Section 4.2; Chapter 3 of the reader corresponds to Chapter 11 of the full textbook. Code. We are going to move from implementing our first client to an emulation of the wget utility that downloads web pages from the Internet. We are then going to move on to building an API server using threads and processes.
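Clients like the wget emulation all begin the same way: resolve a hostname, create a socket, and connect. Here is a sketch of what a `createClientSocket` helper might look like; the function name mirrors the one used in lecture, but this body is our assumption, not the official implementation:

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>    // getaddrinfo, freeaddrinfo
#include <unistd.h>   // close
#include <cstring>    // memset

// Resolves host:port and returns a connected TCP socket descriptor,
// or -1 if resolution fails or every candidate address refuses us.
int createClientSocket(const char *host, const char *port) {
  struct addrinfo hints, *results;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;       // IPv4 or IPv6
  hints.ai_socktype = SOCK_STREAM;   // TCP
  if (getaddrinfo(host, port, &hints, &results) != 0) return -1;
  int fd = -1;
  for (struct addrinfo *p = results; p != nullptr; p = p->ai_next) {
    fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
    if (fd == -1) continue;
    if (connect(fd, p->ai_addr, p->ai_addrlen) == 0) break;  // connected
    close(fd);
    fd = -1;                         // try the next candidate address
  }
  freeaddrinfo(results);
  return fd;
}
```

Looping over every address `getaddrinfo` returns, rather than trying only the first, is what makes the helper robust across IPv4/IPv6 and multi-homed hosts.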
|
May 16/17: Assignment 6 released (Thursday). Lab / Discussion 05. Lab 05 Solution. We'll work through the implementation of a few networking clients; once we work through those, we'll tinker with a simple server and exploit the fact that server sockets and client sockets are internally managed as descriptors. |
|
8 | May 20: 16. Network System Calls. Slides, [PDF], Screencast. Reading: Finish Bryant & O'Hallaron, Chapter 3 and Section 4.2; Chapter 3 of the reader corresponds to Chapter 11 of the full textbook. Code. We will provide an overview of networking and explain why it's really just a distributed form of function call and return. In spite of the fact that function calls across the Internet are inherently slow, there are many software engineering advantages to hosting the implementation of some core service on a remote server. We'll then speak a bit about HTTP, to the extent we need it for Assignment 7, which goes out today. Once we've done that, we'll advance to discuss the various data structures, system calls, and Linux library functions needed to implement createClientSocket and createServerSocket. |
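The server-side counterpart follows the socket/bind/listen recipe. As with the client sketch, the name `createServerSocket` mirrors the lecture's helper, but this body is our own assumption of how it might look:

```cpp
#include <sys/socket.h>
#include <netinet/in.h>  // sockaddr_in, htons, htonl, INADDR_ANY
#include <unistd.h>      // close
#include <cstring>       // memset

// Returns a TCP socket bound to the given port on all local interfaces
// and marked as listening, or -1 on failure. Port 0 asks the kernel to
// pick any free port.
int createServerSocket(unsigned short port, int backlog = 128) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd == -1) return -1;
  int yes = 1;  // allow quick restarts without "address already in use"
  setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));
  struct sockaddr_in addr;
  memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);  // any local interface
  addr.sin_port = htons(port);
  if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1 ||
      listen(fd, backlog) == -1) {
    close(fd);
    return -1;
  }
  return fd;  // ready for accept()
}
```

Each `accept` on the returned descriptor then hands back a fresh descriptor for one client connection, which is the "sockets are just descriptors" point the lectures keep returning to.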
May 22: 17. HTTP Web Proxy Discussion, and Overview of MapReduce. Slides, [PDF], Screencast. Reading: Finish Bryant & O'Hallaron, Chapter 3 and Section 4.2; Chapter 3 of the reader corresponds to Chapter 11 of the full textbook. Code. Assignment 7 released (officially Thursday). We will start class by talking about your next assignment, the HTTP Web Proxy and Cache. The assignment has many moving parts, and we will delve into some of those parts before you get to coding it up. Once we do that, we'll pivot to a short overview of how MapReduce works. We do so because it's an easily described distributed system that relies on every major topic we've covered this quarter, and it's a huge victory to fully understand how something you've almost certainly heard of before can be implemented using everything you've learned in CS110. MapReduce is your final assignment, and we want you to be prepared to tackle it as well. |
May 23/24: Lab / Discussion 06: ThreadPool and Networking. Lab 06: ThreadPool and Networking. Lab 06 Solution. This week's lab will invest some time understanding some nuances of any operational ThreadPool implementation. |
|
9 | May 27: No Class (Memorial Day) |
May 29: 18. MapReduce. Slides, [PDF], Screencast. Code. Assignment 8 released. We will resume our discussion of the MapReduce algorithm and then discuss your final assignment, which is to implement MapReduce. |
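The map/group/reduce pipeline is easy to miniaturize in a single process. This toy word-count (our own illustration; real MapReduce distributes each phase across machines) makes the three phases explicit:

```cpp
#include <map>
#include <string>
#include <vector>
#include <sstream>
#include <utility>

// Counts word occurrences across documents using the three MapReduce
// phases: map emits (word, 1) pairs, the framework groups values by key,
// and reduce folds each group into a single count.
std::map<std::string, int> wordCount(const std::vector<std::string>& docs) {
  std::vector<std::pair<std::string, int>> emitted;   // map phase
  for (const std::string& doc : docs) {
    std::istringstream in(doc);
    std::string word;
    while (in >> word) emitted.push_back({word, 1});
  }
  std::map<std::string, std::vector<int>> grouped;    // group (shuffle) phase
  for (auto& kv : emitted) grouped[kv.first].push_back(kv.second);
  std::map<std::string, int> result;                  // reduce phase
  for (auto& g : grouped) {
    int sum = 0;
    for (int v : g.second) sum += v;
    result[g.first] = sum;
  }
  return result;
}
```

In the real system the emitted pairs are partitioned by key across workers; the per-key reduce logic, though, is exactly this fold.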
May 30/31: Lab / Discussion 07: proxy Redux. Lab 07: proxy Redux. Lab 07 Solution. This should be a short discussion section. We want to use the time to have you fill out a survey that will help an educational research project move forward. After we do that, we'll spend about 30 minutes talking about aspects of the HTTP proxy you built for us this past week, and how things could have been implemented differently. This is the last week of section! |
|
10 | June 3: 19. Principles of System Design. Slides, [PDF], Screencast. Today we'll embark on a recap of all of the general systems design principles we've been exposed to over the course of the quarter and try to make it clear that a system's design is often more intellectually captivating than its implementation. |
June 5: 20. Non-blocking I/O. Slides, [PDF], Screencast. Code. CS110 Practice Final Exams. Today, we'll introduce the idea of I/O event-driven programming and how it can be used to complement nonblocking I/O techniques. Linux provides I/O event detection facilities via its epoll (short for event polling) family of calls. We'll spend today's lecture discussing three key epoll functions and the various ways they can be combined with nonblocking descriptors to build single-threaded web servers that can handle tens of thousands of connections. We will discuss the pros and cons of non-blocking I/O while comparing it to blocking I/O approaches relying on multithreading and multiprocessing. We will finish up non-blocking I/O with two examples: expensive-server.cc and efficient-server.cc. Both use a single thread to support a server able to quickly respond to multiple client requests, though the efficient server does so using the epoll system calls to leverage the OS's ability to wait for changes to file descriptors. |
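The epoll workflow the lecture describes (create an interest set, register descriptors, wait for readiness) can be exercised in a few lines. In this self-contained sketch (ours, Linux-only), a pipe stands in for a network connection:

```cpp
#include <sys/epoll.h>  // epoll_create1, epoll_ctl, epoll_wait
#include <unistd.h>     // pipe, write, close

// Registers a pipe's read end with an epoll instance, makes it readable,
// and returns how many descriptors epoll_wait reports ready (expect 1),
// or -1 on setup failure.
int epollDemo() {
  int fds[2];
  if (pipe(fds) == -1) return -1;
  int ep = epoll_create1(0);                  // 1. create the epoll instance
  struct epoll_event ev = {};
  ev.events = EPOLLIN;                        // interested in "readable"
  ev.data.fd = fds[0];
  epoll_ctl(ep, EPOLL_CTL_ADD, fds[0], &ev);  // 2. register the descriptor
  write(fds[1], "x", 1);                      // make the read end readable
  struct epoll_event ready;
  int n = epoll_wait(ep, &ready, 1, 1000);    // 3. wait for events (1s cap)
  close(fds[0]); close(fds[1]); close(ep);
  return n;
}
```

A real server registers its listening socket plus every client socket this way, so one thread can sleep until any of tens of thousands of descriptors has work.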
|
- | June 10: Final Exam. Mon. June 10th, 3:30pm-6:30pm. Location: Cubberley Auditorium
|