Assignment 4: stsh -- Stanford Shell


Due: Mon Feb 22 11:59 pm
Late submissions accepted until Wed Feb 24 11:59 pm

Kudos to Randal Bryant and Dave O'Hallaron of Carnegie Mellon for assignment inspiration and for parts of this handout. Huge thanks to Truman Cranor for writing the command line parser using tools you'll learn in CS143, which you should all take someday because it's amazing.

Learning Goals

This assignment gives you a chance to implement a core tool that you've relied on all quarter and throughout CS107: a fully functional shell! This assignment leverages the simpler shells from lecture and extends them to support process control, job lists, signals, pipelines, and I/O redirection--all while managing the interprocess concurrency problems that make a shell's implementation a genuinely advanced and fascinating systems programming project. There's lots of neat code to write, and we're confident you can pull it off. This assignment will help you:

  • appreciate the complexity and functionality of a shell
  • further develop your understanding of signals, processes and file descriptors
  • hone your skills identifying and eliminating race conditions

Starter Code

All coding should be done on a myth cluster machine, as that's where we'll be testing all assign4 submissions. Clone the starter project using the command

    git clone /afs/ir/class/cs110/repos/assign4/$USER assign4

Note that this repo includes a soft link to a fully functional solution. When in doubt about how something should work, just run the solution to see what it does.

The starter project contains the following files. There are a lot of them, but the main ones to focus on are the headers; for the most part, you don't need to worry about how they are implemented, just how to use them:

  • stsh.cc and Makefile: the implementation file for your shell, and the Makefile to compile it. stsh.cc is the only file you need to modify for this assignment.
  • conduit.cc, fpe.cc, int.cc, spin.cc, split.cc, tstp.cc: short test programs with various behaviors you can use while testing your shell.
  • stsh-exception.h: defines STSHException, the exception type you should throw for various possible errors, such as those mentioned in the spec
  • stsh-job-list.h/cc: defines STSHJobList, a class that represents a list of jobs and exposes functionality to manage them
  • stsh-job.h/cc: defines STSHJob, a class that represents a single job, which could consist of multiple processes
  • stsh-process.h/cc: defines STSHProcess, a class that represents a single process
  • stsh-signal.h/cc: defines the installSignalHandler function, a more robust version of the signal system call to add a signal handler
  • stsh-parse-utils.h/cc: defines the parseNumber function, a helper function to convert a C string to a size_t
  • stsh-parser: a folder containing various files implementing parsing logic you can use for your shell. It contains:
    • parser.y, scanner.h, scanner.l: define the syntax and semantics for shell commands. You can ignore these files (though feel free to look at them for interesting parsing details)
    • stsh-parse-exception.h: defines STSHParseException, the exception type thrown for any issues parsing a string command
    • stsh-parse.h/cc: defines the pipeline and command types to represent raw command line inputs
    • stsh-readline.h/cc: defines the readline and rlinit functions to read user input
    • stsh-parse-test.cc: a provided test program that prompts you for commands and prints out information showing how the parser utilities can parse the command.
  • samples: a symbolic link to the shared directory for this assignment. It contains:
    • various .txt script files, in 3 folders: these files are provided test cases for testing your shells with the provided stsh-driver test program. See later in the spec for more details.
    • stsh-driver: the test program you can use to feed scripts to your shell to execute as if someone was typing them directly into your shell.
    • stsh_soln: a provided sample stsh solution executable
    • SANITY.ini and sanity.py: files to configure and run sanity check. You can ignore these files.
  • tools: contains symbolic links to the sanitycheck and submit programs for testing and submitting your work.

Getting Started

Inspect the stsh.cc file we give you. This is the only file you should need to modify. The core main function you're provided looks like this:

int main(int argc, char *argv[]) {
  pid_t stshpid = getpid();
  installSignalHandlers();
  rlinit(argc, argv); // configures stsh-readline library so readline works properly
  while (true) {
    string line;
    if (!readline(line)) break;
    if (line.empty()) continue;
    try {
      pipeline p(line);
      bool builtin = handleBuiltin(p);
      if (!builtin) createJob(p); // createJob is initially defined as a wrapper around cout << p;
    } catch (const STSHException& e) {
      cerr << e.what() << endl;
      if (getpid() != stshpid) exit(0); // if exception is thrown from child process, kill it
    }
  }

  return 0;
}

For shells it's important to define the notion of a job. A job is a collection of one or more processes that execute as a result of a single shell command. For instance, ls | sort is a single shell command, and executing it creates a single job with two processes - one for ls, and another for sort. At any time, there can be at most one foreground job, and zero or more background jobs.

The general idiom here is the same one used in the shell examples from lecture, except that this version is in C++, and it'll support many more features. The readline function prompts the user to enter a command, and a pipeline record is constructed around it. readline and pipeline (which is different from the pipeline function you implemented for Assignment 3) are implemented via a suite of files in the stsh-parser subdirectory, and for the most part you can ignore those implementations. You should, however, be familiar with the type definitions of the command and pipeline types:

const size_t kMaxCommandLength = 32;
const size_t kMaxArguments = 32;
struct command {
  char command[kMaxCommandLength + 1]; // '\0' terminated
  char *tokens[kMaxArguments + 1]; // NULL-terminated array, C strings are all '\0' terminated
};

struct pipeline {
  std::string input;  // the empty string if no input redirection file to first command
  std::string output; // the empty string if no output redirection file from last command
  std::vector<command> commands;
  bool background;

  pipeline(const std::string& str); // constructor that parses an input string
  ~pipeline();
};
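
As a concrete illustration of these fields (run stsh-parse-test, discussed below, to confirm the details), here's how one command line from later in this handout would be parsed:

pipeline p("cat < /usr/include/stdio.h | wc > output.txt");
// p.input  == "/usr/include/stdio.h"  (the empty string if there's no < redirection)
// p.output == "output.txt"            (the empty string if there's no > redirection)
// p.background == false               (no trailing &)
// p.commands.size() == 2
// p.commands[0].command is "cat"; p.commands[0].tokens[0] == NULL (no extra arguments)
// p.commands[1].command is "wc";  p.commands[1].tokens[0] == NULL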

Check out what the initial version of stsh is capable of before you add any new code.

Builtin stsh Commands

Your Assignment 4 shell needs to support a collection of builtin commands, many of which relate to manipulating jobs, that should execute without creating any new processes. These builtins are:

  • quit, which exits the shell and abandons any jobs that were still running. If there are any extra arguments after the quit, you can just ignore them.

  • exit, which does the same thing as quit. Extraneous arguments? Just ignore them.

  • fg, which prompts a stopped job to continue in the foreground or brings a running background job into the foreground. fg takes a single job number (e.g. fg 3). If the provided argument isn't a number, there's no such job with that number, or the number of arguments isn't correct, then throw an STSHException around an actionable error message and allow your shell to carry on as if you never typed anything.

  • bg, which prompts a stopped job to continue in the background. bg takes a single job number (e.g. bg 3). If the provided argument isn't a number, there's no such job with that number, or the number of arguments isn't correct, then throw an STSHException around an actionable error message and allow your shell to carry on as if you never typed anything.

  • slay, which is used to terminate a single process (which may have many sibling processes as part of a larger pipeline). slay takes either one or two numeric arguments. If only one number is provided, it's assumed to be the pid of the process to be killed. If two numbers are provided, the first number is assumed to be the job number, and the second is assumed to be a process index within the job. So, slay 12345 would terminate (in the SIGKILL sense of terminate) the process with pid 12345. slay 2 0 would terminate the first process in the pipeline making up job 2. slay 13 8 would terminate the 9th process in the pipeline of processes making up job 13. If there are any issues (e.g., the arguments aren't numbers, or they are numbers but identify some nonexistent process, or the argument count is incorrect), then throw an STSHException around an actionable error message and allow your shell to continue like the error never happened.

  • halt, which has nearly the same story as slay, except that its one or two arguments identify a single process that should be halted (but not terminated) if it isn't already stopped. If it's already stopped, then don't do anything and just return.

  • cont, which has the same story as slay and halt, except that its one or two arguments identify a single process that should continue if it isn't already running. If it's already running, then don't do anything and just return. When you prompt a single process to continue, you're asking that it do so in the background.

  • jobs, which prints the job list to the console. If there are any additional arguments, then just ignore them.

quit, exit, and jobs are already implemented for you. You're responsible for implementing the others and ensuring the global job list is appropriately updated.
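
As one concrete (and hedged) example of the argument-handling pattern shared by slay, halt, and cont, here's a sketch of slay. The one-argument call to parseNumber is our assumption about its signature, and the job-list validation is left as comments, so check stsh-parse-utils.h and the job-list headers rather than trusting this verbatim:

static void slay(const pipeline& p) {
  char *const *tokens = p.commands[0].tokens;
  size_t numArgs = 0;
  while (tokens[numArgs] != NULL) numArgs++;
  if (numArgs == 0 || numArgs > 2) throw STSHException("Usage: slay <pid> | slay <job> <index>.");
  if (numArgs == 1) {
    pid_t pid = parseNumber(tokens[0]); // assumed signature; see stsh-parse-utils.h
    // confirm the pid belongs to some job in joblist, throwing an STSHException if not, then:
    kill(pid, SIGKILL);
  } else {
    size_t jobNum = parseNumber(tokens[0]);
    size_t index = parseNumber(tokens[1]);
    // map jobNum to an STSHJob, index into its processes, validate both,
    // and then kill(pid, SIGKILL) the identified process
  }
}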

Setting Milestones

The best approach to implementing anything this complex is to invent a collection of milestones that advance you toward your final goal. Never introduce more than a few lines of code before compiling and confirming that the lines you added do what you expect. To repeat: Never introduce more than a few lines of code before compiling, testing, and confirming that the additional lines do what you expect. View everything you add as a slight perturbation to a working system that slowly evolves into the final product. Understand every single line you add, why it's needed, and why it belongs where you put it.

Here is a sequence of milestones we'd like you to work through in order to get started:

  1. Descend into the stsh-parser directory, read through the stsh-readline.h and stsh-parse.h header files for data type definitions and function/method prototypes, type make, and play with stsh-parse-test to gain a sense of what readline and the pipeline constructor do for you. In general, the readline function is like getline, except that you can use your up and down arrow keys to scroll through your input history (neat!). The pipeline record defines a bunch of fields that store all of the various commands that chain together to form a pipeline. For example, the text cat < /usr/include/stdio.h | wc > output.txt would be split into two commands--one for the cat and a second for the wc--and populate the vector<command> in the pipeline with information about each of them. The input and output fields would each be nonempty, and the background field would be false.
  2. Get a pipeline of just one command (e.g. stsh> sleep 5) to run in the foreground until it's finished. Rely on a call to waitpid to stall stsh until the foreground job finishes. Ignore the job list, ignore the SIGCHLD handler, and don't worry about background jobs, pipelining, or redirection. Don't worry about programs like emacs just yet; focus instead on straightforward executables like ls, date, and sleep, whose behavior is well understood and predictable. (The first sketch following this list shows one way the pieces of milestones 2 through 5 fit together.)
  3. Establish the process group ID of the job to be the PID of the process by investigating the setpgid system call. When you run stsh from the standard Unix shell, note that stsh is running in the foreground. If your shell then creates a child process, by default that child will also be a member of the foreground process group. Since typing ctrl-c sends a SIGINT to every process in the foreground group, typing ctrl-c will send a SIGINT to your shell, as well as to every process that your shell created, and you don't want that. The notion of a group ID isn't all that important yet, because at the moment, a process group consists of only one process. But we'll eventually be dealing with jobs composed of many processes, and we'll want a single process group id to represent all of them.
  4. Read through stsh-job-list.h, stsh-job.h, and stsh-process.h to learn how to add a new foreground job to the job list, and how to add a process to that job. Add code that does exactly that to the stsh.cc file, right after you successfully fork off a new process. After your waitpid call returns, remove the job from the job list. If it helps, temporarily inline cout << joblist; lines in strategically chosen locations to confirm your new job is being added after fork and being removed after waitpid.
  5. Implement the SIGCHLD handler to reap the resources of a foreground process after it exits, and suspend stsh's main thread of execution using a sigset_t, sigprocmask, sigsuspend, and related functions until an examination of the job list proves that the foreground job is no longer the foreground job. Your call to waitpid should be moved into the SIGCHLD handler, and that should be the only place in your entire solution--even the final one you submit--with a waitpid call.
  6. Install handlers so that ctrl-z and ctrl-c stop and kill the foreground job instead of the shell itself. If, for instance, a sleep 500 is running in the foreground, you may want to kill the process by pressing ctrl-c. When you press ctrl-c, the OS sends a SIGINT to your shell, which unless handled will terminate the shell itself. If, however, you install a function to handle SIGINT, then that handler can (and should) forward the SIGINT on to the foreground job, should one exist. The story for ctrl-z is similar, except ctrl-z prompts the OS to send your shell a SIGTSTP. If you install a custom handler to intercept a SIGTSTP, you can forward the SIGTSTP on to the foreground job, should one exist. In these cases, the killpg function is your friend and will contribute to your implementation of both of these handlers. Right now, process groups consist of just one process, but once you start to support pipelines (e.g. cat words.txt | sort | uniq | wc -l), you'll want these signal handlers to send a signal to all processes in a job, and if all relevant processes are part of the same process group, you can achieve this with a single killpg call. (The second sketch following this list shows handlers of this shape.)
  7. Implement the fg builtin, which takes a stopped process--stopped presumably because it was running in the foreground at the time you pressed ctrl-z--and prompts it to continue, or it takes a process running in the background and brings it into the foreground. The fg builtin takes a job number, translates that job number to a process group ID, and forwards a SIGCONT on to the process group via a call to killpg(groupID, SIGCONT). Again, right now, process groups consist of just one process, but once you start to support pipelines (e.g. cat words.txt | sort | uniq | wc -l), you'll want fg to bring the entire job into the foreground, which killpg can help with. Of course, if the argument passed to fg isn't a number, or it is but it doesn't identify a real job, then you should throw an STSHException that's wrapped around a clear error message saying so.
  8. Update the SIGCHLD handler to detect state changes in all processes under stsh's jurisdiction. Processes can exit gracefully, exit abnormally because of some signal, or be terminated by ctrl-c. Processes can halt because of a signal or a ctrl-z. And processes can continue because of a signal or an fg builtin. The job list needs to remain in sync with the state of your shell's world, and waitpid is the perfect function to tell you about changes. You're already familiar with WNOHANG, WUNTRACED, WIFEXITED, WEXITSTATUS, etc. Read through waitpid's man page (man 2 waitpid) to get word on some Linux-specific flags and macros that tell you about processes that have started executing again. Buzzwords include WCONTINUED and WIFCONTINUED, so read up on those.
  9. Add support for background jobs. The pipeline constructor already searches for a trailing & and records whether or not the pipeline should be run in the background. If it should be, then you still add information about the job to the job list, but you immediately return to the shell prompt without waiting.
  10. Add support for slay, halt, cont (the process-oriented commands) and bg (which prompts all processes in a single job to continue in the background), and use some of the sample user programs we include in your repo (int, fpe, tstp, spin, split) to test all of these.
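
To ground milestones 2 through 5, here's a minimal sketch of creating and waiting on a one-process foreground job. It is a sketch under assumptions, not the implementation: the job-list calls shown (addJob, addProcess, hasForegroundJob) and the kForeground constant reflect our reading of the header descriptions above, so verify the exact prototypes in stsh-job-list.h and stsh-job.h before leaning on them.

static void createJob(const pipeline& p) {
  sigset_t mask;
  sigemptyset(&mask);
  sigaddset(&mask, SIGCHLD);
  sigprocmask(SIG_BLOCK, &mask, NULL); // block before fork so the handler can't win a race
  STSHJob& job = joblist.addJob(kForeground); // assumed API; see stsh-job-list.h
  pid_t pid = fork();
  if (pid == 0) { // child
    sigprocmask(SIG_UNBLOCK, &mask, NULL); // signal masks survive execvp, so undo the block
    setpgid(getpid(), getpid()); // join a fresh group whose id is our own pid
    char *argv[kMaxArguments + 2] = {NULL};
    argv[0] = const_cast<char *>(p.commands[0].command);
    for (size_t i = 0; p.commands[0].tokens[i] != NULL; i++) argv[i + 1] = p.commands[0].tokens[i];
    execvp(argv[0], argv);
    throw STSHException(string(argv[0]) + ": Command not found."); // main catches; child exits
  }
  setpgid(pid, pid); // parent calls too, closing the race described in the tips below
  job.addProcess(STSHProcess(pid, p.commands[0])); // again, verify against stsh-job.h
  sigset_t empty;
  sigemptyset(&empty);
  while (joblist.hasForegroundJob()) sigsuspend(&empty); // SIGCHLD handler updates joblist
  sigprocmask(SIG_UNBLOCK, &mask, NULL);
}

Note the order of operations: SIGCHLD is blocked before fork so the handler can't fire before the job list is up to date, and the child unblocks it because masks persist across execvp.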
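
And here, in the same hedged spirit, is the shape of the three handlers from milestones 5, 6, and 8. The lookup and state-mutation calls (getJobWithProcess, getProcess, setState, getForegroundJob, getGroupID) and the state constants (kRunning, kStopped, kTerminated) are our best guesses at the interfaces in stsh-job.h and stsh-process.h; the parts you can take to the bank are the waitpid flags and the killpg forwarding.

static void handleSIGCHLD(int sig) {
  pid_t pid;
  int status;
  while ((pid = waitpid(-1, &status, WNOHANG | WUNTRACED | WCONTINUED)) > 0) {
    STSHJob& job = joblist.getJobWithProcess(pid); // real code should first confirm the pid is known
    STSHProcess& process = job.getProcess(pid);
    if (WIFSTOPPED(status)) process.setState(kStopped);
    else if (WIFCONTINUED(status)) process.setState(kRunning);
    else process.setState(kTerminated); // WIFEXITED or WIFSIGNALED
    joblist.synchronize(job); // prunes dead jobs, demotes stopped foreground jobs
  }
}

static void handleSIGINT(int sig) {
  if (joblist.hasForegroundJob())
    killpg(joblist.getForegroundJob().getGroupID(), SIGINT); // one call reaches the whole group
}

static void handleSIGTSTP(int sig) {
  if (joblist.hasForegroundJob())
    killpg(joblist.getForegroundJob().getGroupID(), SIGTSTP);
}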

The following are additional milestones you need to hit on your way to a fully functional stsh. Each of these bullet points represents a larger chunk of work than any single milestone above.

  • Add support for foreground jobs whose leading process requires control of the terminal (e.g. cat, more, emacs, vi, and other executables requiring elaborate control of the console). Investigate the tcsetpgrp function as a way of transferring terminal control to a process group, and update your solution to always call it in both the parent and the foreground child, even if it fails. (Both need to call it, as opposed to just the parent, to avoid race conditions where the child execvps before the call is made and terminal control is actually transferred; getpgid may also be helpful. Note: you must block, handle, or ignore SIGTTOU in the child process before calling tcsetpgrp. The good news is that we've already provided this in the starter code, so you don't have to worry about it.) If tcsetpgrp(STDIN_FILENO, pgid) succeeds, it returns 0; if it fails, it returns -1 and you should throw an STSHException. If stsh succeeds in transferring control to the foreground process group, then stsh should take control back when that group falls out of the foreground (perhaps because it exits, stops, or something else).
  • Add support for pipelines consisting of two processes (i.e. binary pipelines, e.g. cat /usr/include/stdio.h | wc). Make sure that the standard output of the first is piped to the standard input of the second, and that each of the two processes is part of the same process group, using a process group ID that is identical to the pid of the leading process. Make sure tcsetpgrp is called in all of a foreground job's processes. The pipeline function from Assignment 3 (which, again, is different from the pipeline type here) was a nod to the basic pipeline functionality needed right here. You needn't do much error checking: you can assume that all system calls succeed, with the exception of execvp, which may fail because of user error (misspelled executable name, file isn't an executable, lack of permissions, etc.). You might want to include more error checking if it helps you triage bugs, assert the truth of certain expectations during execution, and arrive at a working product more quickly, but do all that because it's good for you and not because you're trying to make us happy. (Hint: the conduit user program we dropped in your repo starts to become useful as soon as you deal with nontrivial pipelines. Try typing echo 12345 | ./conduit --delay 1 in the standard shell to see what happens, and try to replicate the behavior in stsh.)
  • Once you get your head around pipelines of one and two processes, work on getting arbitrarily long pipeline chains to do the right thing. So, if the user types in echo 12345 | ./conduit --delay 1 | ./conduit | ./conduit, four processes are created, each with their own pid, and all in a process group whose process group id is that of the leading process (in this case, the one running echo). echo's standard out feeds the standard in of the first conduit, whose standard out feeds into the standard in of the second conduit, which pipes its output to the standard input of the last conduit, which at last prints to the console. We're calling this out separately from the binary pipeline discussed in the previous bullet point, because the code needed to realize pipelines of three or more doesn't have as much in common with the binary pipeline code as you might think. Yes, there are pipe, setpgid, dup2, close, and execvp calls, but figuring out how to get one of the inner processes to redefine what STDIN_FILENO and STDOUT_FILENO are connected to is very tricky, and this trickiness doesn't present itself in the binary pipeline. (The first sketch following these bullets illustrates the descriptor choreography.)
  • Finally, add support for input and output redirection via < and > (e.g. cat < /usr/include/stdio.h | wc > output.txt). The names of input and output redirection files are surfaced by the pipeline constructor, and a nonempty string in the input and/or output fields of the pipeline record is your signal that input redirection, output redirection, or both are needed. If the file you're writing to doesn't exist, create it, and go with 0644 (with the leading zero) as the octal constant to establish the rw-r--r-- permission. If the output file you're redirecting to already exists, then truncate it using the O_TRUNC flag. Note that input redirection always impacts where the leading process draws its input from and that output redirection influences where the last process publishes its output. Sometimes those two processes are the same, and sometimes they are different. Type man 2 open for the full story on the open system call and a reminder of what flags can be bitwise-OR'ed together for the second argument. (The second sketch following these bullets shows the relevant open/dup2 calls.)
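
To make the descriptor choreography concrete, here's a hedged sketch of the wiring for an N-process pipeline. It deliberately omits process groups, redirection, argv construction, and job-list bookkeeping (the ... comments mark those gaps), the function name spawnPipelineProcesses is ours, and it assumes <vector>, <array>, and <unistd.h>. Treat it as the shape to aim for, not code to paste.

static void spawnPipelineProcesses(const pipeline& p) {
  size_t n = p.commands.size(); // always at least 1
  std::vector<std::array<int, 2>> fds(n - 1); // one pipe between each adjacent pair
  for (size_t i = 0; i + 1 < n; i++) pipe(fds[i].data());
  for (size_t i = 0; i < n; i++) {
    pid_t pid = fork();
    if (pid == 0) { // child i
      if (i > 0) dup2(fds[i - 1][0], STDIN_FILENO);  // read from the pipe on our left
      if (i + 1 < n) dup2(fds[i][1], STDOUT_FILENO); // write to the pipe on our right
      for (size_t j = 0; j + 1 < n; j++) { // every child closes every original descriptor
        close(fds[j][0]);
        close(fds[j][1]);
      }
      // ... setpgid, redirection (next sketch), build argv, execvp ...
    }
    // ... parent: setpgid(pid, <pid of leading process>), record the process in the job ...
  }
  for (size_t j = 0; j + 1 < n; j++) { close(fds[j][0]); close(fds[j][1]); } // parent closes too
}

The key invariant: every process, the parent included, closes both ends of every pipe it isn't actively using; otherwise downstream readers never see EOF.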
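
And here is a similarly hedged fragment for the redirection bullet, meant to run inside child i of an n-process pipeline just before execvp (requires <fcntl.h>; the error-message text is our invention):

if (i == 0 && !p.input.empty()) {
  int infd = open(p.input.c_str(), O_RDONLY);
  if (infd == -1) throw STSHException("Could not open \"" + p.input + "\".");
  dup2(infd, STDIN_FILENO); // leading process draws input from the file
  close(infd);
}
if (i == n - 1 && !p.output.empty()) {
  int outfd = open(p.output.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0644); // rw-r--r--
  if (outfd == -1) throw STSHException("Could not open \"" + p.output + "\".");
  dup2(outfd, STDOUT_FILENO); // last process publishes output to the file
  close(outfd);
}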

Shell Driver

The stsh-driver program in samples executes stsh as a child process, sends it commands and signals as directed by a trace file, and allows the shell to print to standard output and error as it normally would. The stsh process is driven by the stsh-driver, which is why we call stsh-driver a driver.

Go ahead and type ./stsh-driver -h to learn how to use it:

Usage: ./stsh-driver [-hv] -t <trace> -s <shell> [-a <args>]
Options:
  -h          Print this message
  -v          Output more information
  -t <trace>  Trace file
  -s <shell>  Version of stsh to test
  -a <args>   Arguments to pass through to stsh implementation
myth62$

We've also provided several trace files that you can feed to the driver to test your stsh. If you drill into your repo's samples symlink, you'll arrive at /usr/class/cs110/samples/assign4, which includes not only a copy of our own stsh solution, but also a directory of shared trace files called scripts. Within scripts, you'll see simple, intermediate, and advanced subdirectories, each of which contains one or more trace files you can use for testing.

Run the shell driver on your own shell using trace file bg-spin.txt by typing this:

myth62$ ./stsh-driver -t ./samples/scripts/simple/bg-spin.txt -s ./stsh -a "--suppress-prompt --no-history"

(the -a "--suppress-prompt --no-history" argument tells stsh not to emit a prompt or use the fancy readline history machinery, since those confuse sanitycheck and the autograder).

Similarly, to compare your results with those generated by our solution, you can run the driver on the stsh_soln shell by typing:

myth62$ ./stsh-driver -t ./samples/scripts/simple/bg-spin.txt -s ./samples/stsh_soln -a "--suppress-prompt --no-history"

The neat thing about the trace files is that they generate the same output you would have gotten had you run your shell interactively (except for an initial comment identifying the output as something generated via stsh-driver). For example:

myth62$ ./stsh-driver -t ./samples/scripts/advanced/simple-pipeline-1.txt -s ./samples/stsh_soln -a "--suppress-prompt --no-history"
# Trace: simple-pipeline-1
# ------------------------
# Exercises support for pipes via a foreground pipeline with
# just two processes.
stsh> /bin/echo abc | ./conduit --count 3
aaabbbcccdddeeefffggghhhiiijjj

The process IDs listed as part of a trace's output will be different from run to run, but otherwise your output should be exactly the same as that generated by our solution.

The trace files can contain regular shell commands as well as a few special extra commands (check out the provided trace files for some examples):

  • # - anything starting with # is considered a comment and ignored
  • TSTP - sends a SIGTSTP to the shell
  • INT - sends a SIGINT to the shell
  • QUIT - sends a SIGQUIT to the shell
  • KILL - sends a SIGKILL to the shell
  • SLEEP X - sleeps for X seconds
  • WAIT - waits for shell to exit
  • CLOSE - closes the shell's STDIN

Tips and Tidbits

  • Read every word of Chapters 1 and 2 in your B&O reader (or Chapters 8 and 10 in the full textbook).
  • Your implementation should be in C++; fall back to C idioms only when there's no way to avoid them. By that, we mean you should use C++ strings unless you interface with a system call that requires C strings, use cout instead of printf, and so forth. Understand that when you redefine where STDOUT_FILENO directs its text, that impacts where printf (which you're not using) and cout (which you are) place that text.
  • We have reasonably low expectations on error checking for this assignment, but we do have some. We want you to focus on how to leverage system directives to get a fully functional shell working, but we don't want you to examine the return value of every system call when it's more or less impossible for them to fail. In general, your error checking should guard against user error--attempts to invoke a nonexistent executable, providing out-of-range arguments to command-line built-ins, and so forth. In some cases--and we point out those cases in this handout--you do need to check the return value of a system call or two, because sometimes system call “failure” (the air quotes are intentional) isn't a true failure. You've seen situations where waitpid returns -1 even though everything was fine, and that happens with a few other system calls as well.
  • All unused file descriptors should be closed.
  • When creating a pipeline--whether it consists of a single process, two processes, or many processes--you need to ensure that all of the pipeline processes are in the same process group, but in a different process group than the stsh instance. To support this, you should investigate setpgid to see how all of the sibling processes in a pipeline can be added to a new process group whose id is the same as the pid of the leading process. So, if a pipeline consists of four processes with pids 4004, 4005, 4007, and 4008, they would all be placed in a process group with an ID of 4004. By doing this, you can send a signal to every process in a group using the killpg function, where the first argument is the process group id (e.g. killpg(4004, SIGTSTP)). In order to avoid some race conditions, you should call setpgid in the parent and in each of the children. Why does the parent need to call it? Because it needs to ensure the process group exists before it advances on to add other processes in the pipeline to the same group. Why do child processes need to call it? Because if the child relies on the parent to do it, the child may execvp (and invalidate its own pid as a valid setpgid argument) before the parent gets around to it. Race conditions: deep stuff.
  • You do not need to support pipelines or redirection for any of the builtins. In principle, you should be able to, say, redirect the output of jobs to a file, or pipe the output to another process, but you don't need to worry about this for Assignment 4. Our tests will never mix builtins with pipes and redirection.
  • Every time your SIGCHLD handler learns of a stopped or terminated process, it's possible the surrounding job is impacted. Investigate the STSHJobList::synchronize(STSHJob& job) method, which scrutinizes the supplied job to see if it should be removed from the job list or if it should demote itself to the background.
  • We've defined one global variable for you (the STSHJobList called joblist). You may not declare any others (though you may declare constants).
  • Make sure pipelines of multiple commands work even when the processes spawned on their behalf ignore standard in and standard out, e.g. sleep 10 | sleep 10 | sleep 10 | sleep 10 > empty-output.txt should run just fine--the entire pipeline lasts for about 10 seconds, and empty-output.txt is created or truncated and ends up being 0 bytes.
  • When an entire pipeline is run as a background job, make sure you print out a job summary that's consistent with the following output (a sketch of producing this line appears after this list):
stsh> sleep 10 | sleep 10 | sleep 10 &
[1] 27684 27685 27686
  • We will be testing your submission against many more traces than the ones we provide, so be sure to define lots and lots of additional traces to verify your shell is unbreakable!
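
For the background-summary bullet above, a loop of roughly this shape produces the expected format. The accessors shown (getNum, processes, getID) are our guesses at the STSHJob/STSHProcess interfaces, so check the headers before relying on them:

STSHJob& job = joblist.addJob(kBackground);
// ... fork off and record each process in the pipeline as usual, then:
cout << "[" << job.getNum() << "]"; // the job number, e.g. [1]
for (const STSHProcess& process: job.processes()) cout << " " << process.getID();
cout << endl; // yields, e.g., "[1] 27684 27685 27686"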

Submitting And Grading

Once you're done, you should test all of your work as you normally would and then run the submit script by typing ./tools/submit.

Your assignments will be rigorously tested using the tests we expose via sanitycheck plus a whole set of others. We reserve the right to add tests and change point values if during grading we find some features aren't being exercised, but we're fairly certain the breakdown presented below will be a very good approximation regardless.

  • Basic Tests (47 points)

    • Clean Build: 2 points
    • Ensure that most basic of builtins (quit, exit, jobs) work properly: 15 points
    • Ensure that basic foreground and background processes work: 10 points
    • Ensure that SIGINT and SIGTSTP handlers work on single process pipelines: 20 points
  • Intermediate Tests (50 points)

    • Ensure that bg and fg builtins are properly supported: 15 points
    • Ensure that SIGINT and SIGTSTP signals play well with stopped processes, restarted processes, and fg and bg builtins: 30 points
    • Confirm error checking to guard against user error (e.g. mistyped commands, mistyped builtins, etc): 5 points (the one test is exposed via sanity)
  • Advanced Tests (70 points)

    • Ensure that pipelines with two processes work well: 15 points
    • Ensure that pipelines with three or more processes work well: 25 points
    • Ensure redirection works for pipelines of just one process: 15 points
    • Ensure redirection works for pipelines with two or more processes: 15 points


