Main Page


This wiki is intended for users of the Stanford shared research computing resources, e.g. the "cardinal", "corn", and "barley" machines.


How to connect

The machines are available to anyone with a SUNetID: simply "ssh corn.stanford.edu" with your SUNetID credentials. The DNS name "corn.stanford.edu" actually points to a load balancer, which will connect you to a particular corn machine with relatively low load.
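For example (replace sunetid with your own SUNetID):

 ssh sunetid@corn.stanford.edu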

The "barley" machines are only accessible via a resource manager (currently Open GridEngine). You'll need to ssh to corn-image-new and a directory will be created for you on local (shared among barley) storage. E-mail the barley-alpha mailing list for more info.

corn info


barley info

To start using the new machines, check out the man pages for 'sge_intro' and the 'qhost', 'qstat', 'qsub', and 'qdel' commands on the machine 'corn-image-new'.
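For example, a few standard Grid Engine commands to get oriented, run from corn-image-new:

 man sge_intro        # overview of the Grid Engine system
 qhost                # list execution hosts and their current load
 qstat -f             # full status of all queues and jobs
 qstat -u $USER       # show only your own jobs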

So the procedure looks something like this (a sample job script follows the list):

  1. log into corn-image-new: "ssh sunetid@corn-image-new.stanford.edu"
  2. cd to /mnt/glusterfs/<your username> (or wait 5 minutes if it doesn't exist yet)
  3. write a job script: "$EDITOR test_job.script"
    1. see 'man qsub' for more info
    2. use the environment variable $TMPDIR for local scratch
    3. use /mnt/glusterfs/<your username> as the shared data directory
  4. submit the job for processing: "qsub -cwd test_job.script"
  5. monitor the jobs with "qstat -f -j JOBID"
    1. see 'man qstat' for more info
  6. check the output files that you specified in your job script (the input and output files must be in /mnt/glusterfs/)
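Here is a minimal sketch of a job script for step 3, assuming $USER matches your directory name under /mnt/glusterfs and that an input file named input.dat already sits there (the file names and the md5sum step are just placeholders for your own computation):

 #!/bin/bash
 #$ -cwd                          # run from the directory you qsub from
 #$ -o test_job.out               # stdout goes here
 #$ -e test_job.err               # stderr goes here
 
 # Stage input from shared storage to node-local scratch ($TMPDIR)
 cp /mnt/glusterfs/$USER/input.dat $TMPDIR/
 cd $TMPDIR
 
 # Do the actual work (placeholder)
 md5sum input.dat > result.txt
 
 # Copy results back to shared storage; $TMPDIR goes away with the job
 cp result.txt /mnt/glusterfs/$USER/

Submit it from your /mnt/glusterfs directory with "qsub -cwd test_job.script" (the -cwd flag is redundant here, since the script also sets #$ -cwd), then watch it with "qstat".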

Technical details:

  • 19 new machines, 24 cores each, 96GB RAM
  • 1 new machine, 24 cores, 192GB RAM
  • ~450GB local scratch on each
  • ~3TB in /mnt/glusterfs
  • Grid Engine v6.2u5 (via standard Debian package)
  • 10GbE interconnect (Juniper QFX3500 switch)

Initial issues:

  • Kerberos and AFS don't work on the execution hosts
  • You are limited in space to your AFS homedir ($HOME) and the local scratch disk on each node ($TMPDIR)
  • The execution hosts don't accept interactive jobs, only batch jobs for now

For any questions, please email 'barley-alpha@lists.stanford.edu'

We plan to have "alpha" testing for a month or so, then rebuild the storage nodes using what we learned, and also rebuild the execution hosts with Ubuntu 11.10. Then we'll have "beta" testing with more users in November and December, and roll out to the full Stanford community on January 1.

barley software

stock software

The barley machines are running Ubuntu 11.04, and the software is from the Ubuntu repositories.
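Since the stock software comes straight from the Ubuntu repositories, the usual Debian/Ubuntu package tools tell you what is available and installed; for example (the package names here are just illustrations):

 dpkg -l | grep -i openmpi    # is the package installed on this machine?
 apt-cache search fftw        # search the repositories by keyword
 apt-cache policy gfortran    # see which version is installed/available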

licensed software

  • /usr/sweet/bin - MATLAB, SAS, Stata, etc.
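A sketch of making those tools visible by name, assuming a bash shell (add the export line to ~/.bashrc to make it permanent):

 export PATH=/usr/sweet/bin:$PATH    # for the current shell session
 matlab -nodisplay                   # licensed tools now resolve by name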

local licensed software

We have a trial version of MATLAB Distributed Computing Server until 2011-12-25. It is installed in /mnt/glusterfs/apps/MATLAB/R2011b.

You will need to write your MATLAB code to use the Parallel Computing Toolbox before using the Distributed Computing Server.

more info: http://www.mathworks.com/support/product/DM/installation/ver_current/
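A sketch of a batch job script that runs MATLAB from the trial install, assuming the standard bin/matlab layout under the install directory; my_pct_script.m is a placeholder for your own Parallel Computing Toolbox code in your /mnt/glusterfs directory:

 #!/bin/bash
 #$ -cwd
 # R2011b trial install noted above; my_pct_script.m is a placeholder
 /mnt/glusterfs/apps/MATLAB/R2011b/bin/matlab -nodisplay -nosplash \
     -r "run('/mnt/glusterfs/$USER/my_pct_script.m'); exit"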

Monitoring / Status

Current status of farmshare machines: http://barley-monitor.stanford.edu/ganglia/
More detailed graphs: http://barley-monitor.stanford.edu/munin/

Mailing Lists

We have several mailing lists, all @lists.stanford.edu; most are not used.

  • barley-alpha - temporary list until the end of Oct/Nov 2011, for discussion around testing the barley machines
  • farmshare-announce - announce list (new service name)
  • farmshare-discuss - users discussion (new service name)
  • stanford-timeshare-users - user discussion list for the corn users (to be retired)



