Job script template
Latest revision as of 12:19, 5 November 2014
This is meant to be a script you can copy and adapt to get up and running quickly. A similar one appears at the bottom of the man page for 'qsub'.
#!/bin/bash
# the above line is called a 'hashbang' and sets which shell you run under; you probably want bash
# it's not a real hashbang but is parsed by qsub for the '-S' option
#
# set the name of the job; this will appear in the job listing
#$ -N example_job
#
# set the maximum memory usage (per slot)
#$ -l mem_free=2G
#
# on other clusters this memory resource may have a different name
#
# set the number of slots; replace '1' with a larger number if needed
#$ -pe shm 1
#
# on other clusters this pe may have a different name
#
# set the maximum run time, hh:mm:ss; the default is 48 hours on FarmShare
#$ -l h_rt=12:00:00
#
# send mail when the job ends or aborts
#$ -m ea
#
# specify an email address
#$ -M $USER@stanford.edu
#
# check for errors in the job submission options
#$ -w e
#
## We strongly discourage users from exporting their environment onto the compute node.
## Doing this makes the job essentially non-reproducible, because the required
## settings are not captured in the job script.
##
## pass the current environment variables
##$ -V
##
# join the stdout and stderr streams into one file
#$ -j y
#
/full/path/to/command arg1 arg2 arg3 ...
# make sure that the command above doesn't actually use more memory or CPU than you specified
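Note that the '#$' lines are not ordinary comments: qsub scans the script for them and treats the rest of each such line as command-line options, while '##$' lines (like the disabled -V above) remain plain comments and are ignored. A minimal sketch of this, using a hypothetical file name, shows which lines qsub would pick up:

```shell
# write a stripped-down copy of the template (hypothetical file name)
cat > example_job.sh <<'EOF'
#!/bin/bash
#$ -N example_job
#$ -l mem_free=2G
#$ -pe shm 1
#$ -j y
##$ -V
echo "job body runs here"
EOF

# lines starting with '#$' are read by qsub as embedded options;
# the '##$ -V' line does not match and stays disabled
grep '^#\$' example_job.sh
```

Running the grep prints the four active option lines, exactly the set qsub would apply at submission time.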
Of course, you can just use the equivalent command line:
qsub -S /bin/bash -N example_job -l mem_free=2G -pe shm 1 -m ea -M $USER@stanford.edu -w e -j y /full/path/to/command arg1 arg2 arg3
You may also need to add '-b y' if the command is a binary rather than a script.
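If you are unsure whether a command is a script or a compiled binary, checking for a '#!' at the start of the file is a quick heuristic. The helper below is a sketch with hypothetical file names, not part of the FarmShare tooling:

```shell
# hypothetical helper: scripts begin with '#!' and can be submitted directly;
# anything else is treated as a binary and needs 'qsub -b y'
is_script() {
  if [ "$(head -c 2 "$1" 2>/dev/null)" = "#!" ]; then
    echo "script"
  else
    echo "binary"
  fi
}

# stand-in files for the two cases
printf '#!/bin/bash\necho hi\n' > myjob.sh
printf '\177ELF' > mybinary      # fake ELF header, stands in for a compiled executable

is_script myjob.sh    # prints: script   -> qsub myjob.sh
is_script mybinary    # prints: binary   -> qsub -b y ./mybinary
```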