Assignment 4: stsh -- Stanford Shell
Kudos to Randal Bryant and Dave O'Hallaron of Carnegie Mellon for assignment inspiration and for parts of this handout. Huge thanks to Truman Cranor for writing the command line parser using tools you'll learn in CS143, which you should all take someday because it's amazing.
Due Date: Wednesday, February 12th at 11:59PM
Learning Goals
This assignment gives you a chance to implement a core tool that you've relied on all quarter, and in CS107 - a fully-functional shell! This assignment leverages the simpler shells from lecture and extends them to support process control, job lists, signals, pipelines, and I/O redirection--all while managing the interprocess concurrency problems that make a shell's implementation a genuinely advanced and fascinating systems programming project. There's lots of neat code to write, and we're confident you can pull it off. This assignment will help you:
- appreciate the complexity and functionality of a shell
- further develop your understanding of signals, processes and file descriptors
- hone your skills identifying and eliminating race conditions
Starter Code
All coding should be done on a myth cluster machine, as that's where we'll be testing all assign4 submissions. Clone the starter project using this command:

```
git clone /afs/ir/class/cs110/repos/assign4/$USER assign4
```
Note that this repo includes a soft link to a fully functional solution. When in doubt about how something should work, just run the solution to see what it does.
The starter project contains the following. There are a lot of files, but the main ones you should focus on are the header files. For the most part, you do not need to worry about how they are implemented, just how to use them:

- `stsh.cc` and `Makefile`: the implementation file for your shell, and the Makefile to compile it. `stsh.cc` is the only file you need to modify for this assignment.
- `conduit.cc`, `fpe.cc`, `int.cc`, `spin.cc`, `split.cc`, `tstp.cc`: short test programs with various behaviors you can use while testing your shell.
- `stsh-exception.h`: defines `STSHException`, the exception type you should throw for various possible errors, such as those mentioned in the spec.
- `stsh-job-list.h/cc`: defines `STSHJobList`, a class that represents a list of jobs and exposes functionality to manage them.
- `stsh-job.h/cc`: defines `STSHJob`, a class that represents a single job, which could consist of multiple processes.
- `stsh-process.h/cc`: defines `STSHProcess`, a class that represents a single process.
- `stsh-signal.h/cc`: defines the `installSignalHandler` function, a more robust version of the `signal` system call for installing a signal handler.
- `stsh-parse-utils.h/cc`: defines the `parseNumber` function, a helper that converts a C string to a `size_t`.
- `stsh-parser`: a folder containing various files implementing parsing logic you can use for your shell. It contains:
  - `parser.y`, `scanner.h`, `scanner.l`: define the syntax and semantics for shell commands. You can ignore these files.
  - `stsh-parse-exception.h`: defines `STSHParseException`, the exception type thrown for any issues parsing a command string.
  - `stsh-parse.h/cc`: defines the `pipeline` and `command` types used to represent raw command line inputs.
  - `stsh-readline.h/cc`: defines the `readline` and `rlinit` functions used to read user input.
  - `stsh-parse-test.cc`: a provided test program that prompts you for commands and prints information showing how the parser utilities parse each one.
- `samples`: a symbolic link to the shared directory for this assignment. It contains:
  - various `.txt` script files, in 3 folders: provided test cases for testing your shell with the provided `stsh-driver` test program (see later in the spec for more details).
  - `stsh-driver`: the test program you can use to feed scripts to your shell, executing them as if someone were typing them directly into your shell.
  - `stsh-soln`: a provided sample `stsh` solution executable.
  - `SANITY.ini` and `sanity.py`: files that configure and run sanity check. You can ignore these files.
- `tools`: contains symbolic links to the `sanitycheck` and `submit` programs for testing and submitting your work.
Getting Started
Inspect the `stsh.cc` file we give you. This is the only file you should need to modify. The core `main` function you're provided looks like this:
```cpp
int main(int argc, char *argv[]) {
  pid_t stshpid = getpid();
  installSignalHandlers();
  rlinit(argc, argv); // configures stsh-readline library so readline works properly
  while (true) {
    string line;
    if (!readline(line)) break;
    if (line.empty()) continue;
    try {
      pipeline p(line);
      bool builtin = handleBuiltin(p);
      if (!builtin) createJob(p); // createJob is initially defined as a wrapper around cout << p;
    } catch (const STSHException& e) {
      cerr << e.what() << endl;
      if (getpid() != stshpid) exit(0); // if exception is thrown from a child process, kill it
    }
  }
  return 0;
}
```
For shells it's important to define the notion of a job. A job is a collection of one or more processes that execute as a result of a single shell command. For instance, `ls | sort` is a single shell command, and executing it creates a single job with two processes: one for `ls`, and another for `sort`. At any time, there can be at most one foreground job and zero or more background jobs.

The general idiom here is the same one used in the shell examples from lecture (`first-shell.c`, `second-shell.c`, and `third-shell.c`), except that this version is in C++, and it'll support many more features. The `readline` function prompts the user to enter a command, and a `pipeline` record is constructed around it. `readline` and `pipeline` (which is different from the `pipeline` function you implemented for Assignment 3) are implemented via a suite of files in the `stsh-parser` subdirectory, and for the most part you can ignore those implementations. You should, however, be familiar with the definitions of the `command` and `pipeline` types:
```cpp
const size_t kMaxCommandLength = 32;
const size_t kMaxArguments = 32;

struct command {
  char command[kMaxCommandLength + 1]; // '\0' terminated
  char *tokens[kMaxArguments + 1];     // NULL-terminated array; C strings are all '\0' terminated
};

struct pipeline {
  std::string input;   // empty if no input redirection file to first command
  std::string output;  // empty if no output redirection file from last command
  std::vector<command> commands;
  bool background;

  pipeline(const std::string& str);
  ~pipeline();
};
```
Check out what the initial version of `stsh` is capable of before you add any new code.

Builtin stsh Commands
Your Assignment 4 shell needs to support a collection of builtin commands, many of which relate to manipulating jobs, and all of which should execute without creating any new processes. These builtins are:

- `quit`, which exits the shell and abandons any jobs that were still running. If there are any extra arguments after the `quit`, just ignore them.
- `exit`, which does the same thing as `quit`. Extraneous arguments? Just ignore them.
- `fg`, which prompts a stopped job to continue in the foreground or brings a running background job into the foreground. `fg` takes a single job number (e.g. `fg 3`). If the provided argument isn't a number, there's no job with that number, or the number of arguments is incorrect, then throw an `STSHException` around an actionable error message and allow your shell to carry on as if you'd never typed anything.
- `bg`, which prompts a stopped job to continue in the background. `bg` takes a single job number (e.g. `bg 3`). If the provided argument isn't a number, there's no job with that number, or the number of arguments is incorrect, then throw an `STSHException` around an actionable error message and allow your shell to carry on as if you'd never typed anything.
- `slay`, which is used to terminate a single process (which may have many sibling processes as part of a larger pipeline). `slay` takes either one or two numeric arguments. If only one number is provided, it's assumed to be the pid of the process to be killed. If two numbers are provided, the first is assumed to be the job number, and the second a process index within the job. So, `slay 12345` would terminate (in the `SIGKILL` sense of terminate) the process with pid 12345. `slay 2 0` would terminate the first process in the pipeline making up job 2. `slay 13 8` would terminate the 9th process in the pipeline of processes making up job 13. If there are any issues (e.g. the arguments aren't numbers, or they are numbers but identify some nonexistent process, or the argument count is incorrect), then throw an `STSHException` around an actionable error message and allow your shell to carry on as if you'd never typed anything.
- `halt`, which has nearly the same story as `slay`, except that its one or two arguments identify a single process that should be halted (but not terminated) if it isn't already stopped. If it's already stopped, then don't do anything and just return.
- `cont`, which has the same story as `slay` and `halt`, except that its one or two arguments identify a single process that should continue if it isn't already running. If it's already running, then don't do anything and just return. When you prompt a single process to continue, you're asking that it do so in the background.
- `jobs`, which prints the job list to the console. If there are any additional arguments, just ignore them.

`quit`, `exit`, and `jobs` are already implemented for you. You're responsible for implementing the others and for ensuring the global job list is appropriately updated.
Setting Milestones
The best approach to implementing anything this complex is to invent a collection of milestones that advance you toward your final goal. Never introduce more than a few lines of code before compiling and confirming that the lines you added do what you expect. To repeat: Never introduce more than a few lines of code before compiling, testing, and confirming that the additional lines do what you expect. View everything you add as a slight perturbation to a working system that slowly evolves into the final product. Understand every single line you add, why it's needed, and why it belongs where you put it.
Here is a sequence of milestones we'd like you to work through in order to get started:

- Descend into the `stsh-parser` directory, read through the `stsh-readline.h` and `stsh-parse.h` header files for data type definitions and function/method prototypes, type `make`, and play with `stsh-parse-test` to gain a sense of what `readline` and the `pipeline` constructor do for you. In general, the `readline` function is like `getline`, except that you can use your up and down arrows to scroll through your history of inputs (neat!). The `pipeline` record defines a bunch of fields that store all of the various commands that chain together to form a pipeline. For example, the text `cat < /usr/include/stdio.h | wc > output.txt` would be split into two commands--one for the `cat` and a second for the `wc`--and the `vector<command>` in the pipeline would be populated with information about each of them. The `input` and `output` fields would each be nonempty, and the `background` field would be false.
- Get a pipeline of just one command (e.g. `stsh> sleep 5`) to run in the foreground until it's finished. Rely on a call to `waitpid` to stall `stsh` until the foreground job finishes. Ignore the job list, ignore the `SIGCHLD` handler, and don't worry about background jobs, pipelining, or redirection. Don't worry about programs like `emacs` just yet. Focus on executables like `ls`, `date`, and `sleep` instead, as their execution is well understood and predictable.
- Establish the process group ID of the job to be the pid of the process by investigating the `setpgid` system call. When you run `stsh` from the standard Unix shell, note that `stsh` is running in the foreground. If your shell then creates a child process, by default that child will also be a member of the foreground process group, and you don't want that. Since typing ctrl-c sends a `SIGINT` to every process in the foreground group, typing ctrl-c would send a `SIGINT` to your shell as well as to every process your shell created, and you don't want that either. The notion of a group ID isn't all that important yet, because at the moment a process group consists of only one process. But we'll eventually be dealing with jobs comprising many processes, and we'll want a single process group ID to represent all of them.
- Read through `stsh-job-list.h`, `stsh-job.h`, and `stsh-process.h` to learn how to add a new foreground job to the job list, and how to add a process to that job. Add code that does exactly that to the `stsh.cc` file, right after you successfully fork off a new process. After your `waitpid` call returns, remove the job from the job list. If it helps, inline `cout << joblist;` lines in strategically chosen locations to confirm your new job is being added after `fork` and removed after `waitpid`.
- Implement the `SIGCHLD` handler to reap the resources of a foreground process after it exits, and suspend `stsh`'s main thread of execution using a `sigset_t`, `sigprocmask`, `sigsuspend`, and related functions until an examination of the job list proves that the foreground job is no longer the foreground job. Your call to `waitpid` should be moved into the `SIGCHLD` handler, and that should be the only place in your entire solution--even the final one you submit--with a `waitpid` call.
- Install handlers so that ctrl-z and ctrl-c stop and kill the foreground job instead of the shell itself. If, for instance, a `sleep 500` is running in the foreground, you may want to `kill` the process by pressing ctrl-c. When you press ctrl-c, the OS sends a `SIGINT` to your shell, which unless handled will terminate the shell itself. If, however, you install a function to handle `SIGINT`, then that handler can (and should) forward the `SIGINT` on to the foreground job, should one exist. The story for ctrl-z is similar, except that ctrl-z prompts the OS to send your shell a `SIGTSTP`. If you install a custom handler to intercept the `SIGTSTP`, you can forward it on to the foreground job, should one exist. In both cases, the `kill` function is your friend and will contribute to your implementation of both handlers.
- Implement the `fg` builtin, which takes a stopped job--stopped presumably because it was running in the foreground at the time you pressed ctrl-z--and prompts it to continue, or takes a job running in the background and brings it into the foreground. The `fg` builtin takes a job number, translates that job number to a process group ID, and, if necessary, forwards a `SIGCONT` on to the process group via a call to `kill(-groupID, SIGCONT)`. Right now, process groups consist of just one process, but once you start to support pipelines (e.g. `cat words.txt | sort | uniq | wc -l`), you'll want `fg` to bring the entire job into the foreground, and if all relevant processes are part of the same process group, you can achieve this with a single `kill` call. Of course, if the argument passed to `fg` isn't a number, or it is but doesn't identify a real job, then you should throw an `STSHException` wrapped around a clear error message saying so.
- Update the `SIGCHLD` handler to detect state changes in all processes under `stsh`'s jurisdiction. Processes can exit gracefully, exit abnormally because of some signal, or be terminated by ctrl-c. Processes can halt because of a signal or a ctrl-z. And processes can continue because of a signal or an `fg` builtin. The job list needs to remain in sync with the state of your shell's world, and `waitpid` is the perfect function to tell you about changes. You're already familiar with `WNOHANG`, `WUNTRACED`, `WIFEXITED`, `WEXITSTATUS`, etc. Read through `waitpid`'s man page (`man 2 waitpid`) to get word on some Linux-specific flags and macros that tell you about processes that have started executing again. Buzzwords include `WCONTINUED` and `WIFCONTINUED`, so read up on those.
- Add support for background jobs. The `pipeline` constructor already searches for a trailing `&` and records whether the pipeline should be run in the background. If it should, then you still add information about the job to the job list, but you immediately come back to the shell prompt without waiting.
- Add support for `slay`, `halt`, `cont` (the process-oriented builtins) and `bg` (which prompts all processes in a single job to continue in the background), and use some of the sample user programs we include in your repo (`int`, `fpe`, `tstp`, `spin`, `split`) to test all of these.
The following are additional milestones you need to hit on your way to a fully functional `stsh`. Each of these bullet points represents something larger:
- Add support for foreground jobs whose leading process (e.g. `cat`, `more`, `emacs`, `vi`, and other executables requiring elaborate control of the console) requires control of the terminal. You should investigate the `tcsetpgrp` function as a way of transferring terminal control to a process group, and update your solution to always call it, even if it fails. If `tcsetpgrp(STDIN_FILENO, pgid)` succeeds, it returns `0`. If it fails with a return value of `-1` but sets `errno` to `ENOTTY`, that just means your `stsh` instance didn't have control of the terminal or the authority to donate it to another process group. (This will happen if your shell is being driven by `stsh-driver`.) If it fails with a different `errno` value, that's a more serious problem that should be surfaced via an `STSHException`. If `stsh` succeeds in transferring control to the foreground process group, then `stsh` should take control back when that group falls out of the foreground (perhaps because it exits, or stops, or something else).
- Add support for pipelines consisting of two processes (i.e. binary pipelines, e.g. `cat /usr/include/stdio.h | wc`). Make sure that the standard output of the first is piped to the standard input of the second, and that the two processes are part of the same process group, using a process group ID that is identical to the pid of the leading process. The `pipeline` function from Assignment 3 (which, again, is different from the `pipeline` type here) was a paying-it-forward nod to the basic pipeline functionality needed right here. You needn't go nuts on the error checking: you can assume all system calls succeed, with the exception of `execvp`, which may fail because of user error (misspelled executable name, file isn't an executable, lack of permissions, etc.). You might want to include more error checking if it helps you triage bugs, assert the truth of certain expectations during execution, and arrive at a working product more quickly, but do all that because it's good for you and not because you're trying to make us happy. (Hint: the `conduit` user program we dropped in your repo starts to become useful as soon as you deal with nontrivial pipelines. Try typing `echo 12345 | ./conduit --delay 1` in the standard shell to see what happens, and try to replicate the behavior in `stsh`.)
- Once you get your head around pipelines of one and two processes, work on getting arbitrarily long pipeline chains to do the right thing. So, if the user types in `echo 12345 | ./conduit --delay 1 | ./conduit | ./conduit`, four processes are created, each with its own pid, and all in a process group whose process group ID is that of the leading process (in this case, the one running `echo`). `echo`'s standard out feeds the standard in of the first `conduit`, whose standard out feeds into the standard in of the second `conduit`, which pipes its output to the standard input of the last `conduit`, which at last prints to the console. We're calling this out separately from the binary pipeline discussed in the previous bullet point, because the code needed to realize pipelines of three or more processes doesn't have as much in common with the binary pipeline code as you might think. Yes, there are `pipe`, `setpgid`, `dup2`, `close`, and `execvp` calls, but figuring out how to get one of the inner processes to redefine what `STDIN_FILENO` and `STDOUT_FILENO` are connected to is very tricky, and this trickiness doesn't present itself in the binary pipeline.
- Finally, add support for input and output redirection via `<` and `>` (e.g. `cat < /usr/include/stdio.h | wc > output.txt`). The names of input and output redirection files are surfaced by the `pipeline` constructor, and a nonempty string in the `input` and/or `output` fields of the `pipeline` record is your signal that input redirection, output redirection, or both are needed. If the file you're writing to doesn't exist, create it, and go with `0644` (with the leading zero) as the octal constant to establish the `rw-r--r--` permission. If the output file you're redirecting to already exists, truncate it using the `O_TRUNC` flag. Note that input redirection always affects where the leading process draws its input from, and output redirection affects where the caboose process publishes its output. Sometimes those two processes are the same, and sometimes they're different. Type `man 2 open` for the full skinny on the `open` system call and a reminder of what flags can be bitwise-OR'ed together for the second argument.
Shell Driver

The `stsh-driver` program in `samples` executes `stsh` as a child process, sends it commands and signals as directed by a trace file, and allows the shell to print to standard output and error as it normally would. The `stsh` process is driven by the `stsh-driver`, which is why we call `stsh-driver` a driver.

Go ahead and type `./stsh-driver -h` to learn how to use it:
```
Usage: ./stsh-driver [-hv] -t <trace> -s <shell> [-a <args>]
Options:
  -h          Print this message
  -v          Output more information
  -t <trace>  Trace file
  -s <shell>  Version of stsh to test
  -a <args>   Arguments to pass through to stsh implementation
myth62$
```
We've also provided several trace files that you can feed to the driver to test your `stsh`. If you drill into your repo's `samples` symlink, you'll arrive at `/usr/class/cs110/samples/assign4`, which includes not only a copy of our own `stsh` solution, but also a directory of shared trace files called `scripts`. Within `scripts`, you'll see `simple`, `intermediate`, and `advanced` subdirectories, each of which contains one or more trace files you can use for testing.

Run the shell driver on your own shell using the trace file `bg-spin.txt` by typing this:

```
myth62$ ./stsh-driver -t ./samples/scripts/simple/bg-spin.txt -s ./stsh -a "--suppress-prompt --no-history"
```

(The `-a "--suppress-prompt --no-history"` argument tells `stsh` not to emit a prompt or use the fancy `readline` history machinery, since those confuse sanitycheck and the autograder.)

Similarly, to compare your results with those generated by our own solution, you can run the driver on the `./stsh_soln` shell by typing:

```
myth62$ ./stsh-driver -t ./samples/scripts/simple/bg-spin.txt -s ./samples/stsh_soln -a "--suppress-prompt --no-history"
```

The neat thing about the trace files is that they generate the same output you would have gotten had you run your shell interactively (except for an initial comment identifying the output as something generated via `stsh-driver`). For example:
```
myth62$ ./stsh-driver -t ./samples/scripts/advanced/simple-pipeline-1.txt -s ./samples/stsh_soln -a "--suppress-prompt --no-history"
# Trace: simple-pipeline-1
# ------------------------
# Exercises support for pipes via a foreground pipeline with
# just two processes.
stsh> /bin/echo abc | ./conduit --count 3
aaabbbcccdddeeefffggghhhiiijjj
```
The process IDs listed as part of a trace's output will differ from run to run, but otherwise your output should be exactly the same as that generated by our solution.
Tips and Tidbits

- Read every word of Chapters 1 and 2 in your B&O reader (or Chapters 8 and 10 in the full textbook).
- Your implementation should be in C++ unless there's no way to avoid it. By that, we mean you should use C++ strings unless you interface with a system call that requires C strings, use `cout` instead of `printf`, and so forth. Understand that when you redefine where `STDOUT_FILENO` directs its text, that impacts where `printf` (which you're not using) and `cout` (which you are) place that text.
- We have reasonably low expectations on error checking for this assignment, but we do have some. We want you to focus on how to leverage system directives to get a fully functional shell working, and we don't want you to examine the return value of every system call when it's more or less impossible for them to fail. In general, your error checking should guard against user error--attempts to invoke a nonexistent executable, providing out-of-range arguments to command-line builtins, and so forth. In some cases--and we point out those cases in this handout--you do need to check the return value of a system call or two, because sometimes system call "failure" (the air quotes are intentional) isn't a true failure. You've seen situations where `waitpid` returns `-1` even though everything was fine, and that happens with a few other system calls as well.
- All unused file descriptors should be closed.
- When creating a pipeline--whether it consists of a single process, two processes, or many processes--you need to ensure that all of the pipeline's processes are in the same process group, but in a different process group than the `stsh` instance. To support this, you should investigate `setpgid` to see how all of the sibling processes in a pipeline can be added to a new process group whose ID is the same as the pid of the leading process. So, if a pipeline consists of four processes with pids 4004, 4005, 4007, and 4008, they would all be placed in a process group with an ID of 4004. By doing this, you can send a signal to every process in a group using the `kill` function, where the first argument is the negative of the process group ID (e.g. `kill(-4004, SIGTSTP)`). In order to avoid some race conditions, you should call `setpgid` in the parent and in each of the children. Why does the parent need to call it? Because it needs to ensure the process group exists before it advances on to add other processes in the pipeline to the same group. Why do the child processes need to call it? Because if a child relies on the parent to do it, the child may `execvp` (and invalidate its own pid as a valid `setpgid` argument) before the parent gets around to it. Race conditions: deep stuff.
- You do not need to support pipelines or redirection for any of the builtins. In principle, you should be able to, say, redirect the output of `jobs` to a file, or pipe its output to another process, but you don't need to worry about this for Assignment 4. Our tests will never mix builtins with pipes and redirection.
- Every time your `SIGCHLD` handler learns of a stopped or terminated process, it's possible the surrounding job is impacted. Investigate the `STSHJobList::synchronize(STSHJob& job)` method, which scrutinizes the supplied job to see if it should be removed from the job list or demoted to the background.
- We've defined one global variable for you (the `STSHJobList` called `joblist`). You may not declare any others (though you may declare constants).
- Make sure pipelines of multiple commands work even when the processes spawned on their behalf ignore standard in and standard out; e.g. `sleep 10 | sleep 10 | sleep 10 | sleep 10 > empty-output.txt` should run just fine--the entire pipeline lasts about 10 seconds, and `empty-output.txt` is created or truncated and ends up being 0 bytes.
- When an entire pipeline is run as a background job, make sure you print a job summary consistent with the following output:

  ```
  stsh> sleep 10 | sleep 10 | sleep 10 &
  [1] 27684 27685 27686
  ```

- We will be testing your submission against many more traces than the ones we provide, so be sure to define lots and lots of additional traces to verify your shell is unbreakable!
Submitting And Grading

Once you're done, test all of your work as you normally would, and then run the submit script by typing `./tools/submit`.

Your assignment will be rigorously tested using the tests we expose via sanitycheck plus a whole set of others. We reserve the right to add tests and change point values if during grading we find some features aren't being exercised, but we're fairly certain the breakdown presented below will be a very good approximation regardless.

- Basic Tests (47 points)
  - Clean build: 2 points
  - Ensure that the most basic of builtins (`quit`, `exit`, `jobs`) work properly: 15 points
  - Ensure that basic foreground and background processes work: 10 points
  - Ensure that the `SIGINT` and `SIGTSTP` handlers work on single-process pipelines: 20 points
- Intermediate Tests (50 points)
  - Ensure that the `bg` and `fg` builtins are properly supported: 15 points
  - Ensure that `SIGINT` and `SIGTSTP` signals play well with stopped processes, restarted processes, and the `fg` and `bg` builtins: 30 points
  - Confirm error checking guards against user error (e.g. mistyped commands, mistyped builtins, etc.): 5 points (the one test is exposed via sanity)
- Advanced Tests (70 points)
  - Ensure that pipelines with two processes work well: 15 points
  - Ensure that pipelines with three or more processes work well: 25 points
  - Ensure redirection works for pipelines of just one process: 15 points
  - Ensure redirection works for pipelines with two or more processes: 15 points
Icons by Piotr Kwiatkowski