Main Page

From FarmShare


Revision as of 12:52, 27 October 2011

This wiki is intended for users of the Stanford shared research computing resources, e.g. the "cardinal", "corn", and "barley" machines.


How to connect

The machines are available for anyone with a SUNetID. Simply "ssh corn.stanford.edu" with your SUNetID credentials. The DNS name "corn.stanford.edu" actually goes to a load balancer and it will connect you to a particular corn machine that has relatively low load.

The "barley" machines are only accessible via a resource manager (currently Open GridEngine). You'll need to ssh to corn-image-new and a directory will be created for you on local (shared among barley) storage. E-mail the barley-alpha mailing list for more info.

corn info

  • VNC help: https://itservices.stanford.edu/service/unixcomputing/unix/vnc
  • Questions? File a HelpSU ticket: http://helpsu.stanford.edu/
  • Future vision as of summer 2010: http://itservices.stanford.edu/strategy/sysadmin/timeshare

barley info

To start using the new machines, see the man pages for 'sge_intro' and for the 'qhost', 'qstat', 'qsub', and 'qdel' commands on machine 'corn-image-new'.

So the procedure would look something like this:

  1. log into corn-image-new: "ssh sunetid@corn-image-new.stanford.edu"
  2. cd to /mnt/glusterfs/<your username> (if it doesn't exist yet, wait a few minutes for it to be created)
  3. write a job script: "$EDITOR test_job.script"
    1. see 'man qsub' for more info
    2. use env var $TMPDIR for local scratch
    3. use /mnt/glusterfs/<your username> for shared data directory
  4. submit the job for processing: "qsub -cwd test_job.script"
  5. monitor the jobs with "qstat -f -j JOBID"
    1. see 'man qstat' for more info
  6. check the output files that you specified in your job script (the input and output files must be in /mnt/glusterfs/)
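As a concrete sketch of steps 3–6, here is what a minimal job script might look like. The file name `test_job.script` and the `-cwd` flag come from the steps above; the output file names and the script body are illustrative placeholders, not a prescribed layout.

```shell
# Write a minimal Grid Engine job script (step 3). Lines starting with
# '#$' are qsub directives; see 'man qsub' for the full list.
cat > test_job.script <<'EOF'
#!/bin/bash
#$ -cwd                 # run the job from the directory it was submitted in
#$ -o test_job.out      # stdout file (placeholder name)
#$ -e test_job.err      # stderr file (placeholder name)

# $TMPDIR is node-local scratch; use it for temporary files.
echo "local scratch: $TMPDIR"

# Shared input/output data belongs under /mnt/glusterfs/<your username>.
hostname
EOF

chmod +x test_job.script

# On corn-image-new you would then submit and monitor the job (steps 4-5):
#   qsub -cwd test_job.script
#   qstat -f -j JOBID
```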

Technical details:

  • 19 new machines, 24 cores each, 96GB RAM
  • 1 new machine, 24 cores, 192GB RAM
  • ~450GB local scratch on each
  • ~3TB in /mnt/glusterfs
  • Grid Engine v6.2u5 (via standard Debian package)
  • 10GbE interconnect (Juniper QFX3500 switch)

Initial issues:

  • Kerberos and AFS don't work on the execution hosts
  • You are limited in space to your AFS homedir ($HOME) and the local scratch disk on each node ($TMPDIR)
  • The execution hosts accept only batch jobs for now, not interactive jobs

If you have any questions, please email 'barley-alpha@lists.stanford.edu'.

We plan to have "alpha" testing for a month or so, then rebuild the storage nodes using what we learn, and also rebuild the execution hosts with Ubuntu 11.10. Then we'll have "beta" testing with more users in November and December, and roll out to the full Stanford community on January 1.

barley software

The barley machines are running Ubuntu 11.04, and their software comes from the Ubuntu repositories.

Licensed software:

  • /usr/sweet/bin - MATLAB, SAS, stata, etc

Monitoring / Status

  • Current status of FarmShare machines: http://barley-monitor.stanford.edu/ganglia/
  • More detailed graphs: http://barley-monitor.stanford.edu/munin/

Mailing Lists

We have several mailing lists, all @lists.stanford.edu; most are not currently used.

  • barley-alpha - temporary list until the end of Oct/Nov 2011, for discussion around testing the barleys
  • farmshare-announce - announce list (new service name)
  • farmshare-discuss - users discussion (new service name)
  • stanford-timeshare-users - users discussion list for the corn users, list to be retired



