This wiki is intended for users of FarmShare, one of Stanford's shared research computing environments, which comprises the "cardinal", "corn", "rye", and "barley" machines. For a general description of this service and Stanford's shared computing policies, see the main service catalog page. Note that the corn and barley systems are largely intended for students working on course assignments and small-scale thesis work, and for new researchers as a place for proof-of-concept runs until external funding is obtained for infrastructure or services that support more demanding work.
Please note that this system is not HIPAA compliant and should not be used to process any PHI or PII, nor should it be used as a platform for storing or processing other Restricted or Prohibited data. See https://acp.stanford.edu/hipaa/hipaa-faq for more information.
Most useful pages:
- Last 10 messages on the farmshare-discuss mailing list (this month)
System is currently fully operational. Last outage was Sun May 10 3-9PM.
How to connect
The machines are available to anyone with a SUNetID. Simply run "ssh corn.stanford.edu" and log in with your SUNetID credentials. The DNS name "corn.stanford.edu" actually points to a load balancer, which connects you to a particular corn machine (e.g. corn21) with relatively low load.
corn SSH fingerprint is:
RSA key fingerprint is 0b:e7:b4:95:03:c1:1e:07:df:04:ca:a2:3d:8e:e3:37.
The "barley" machines are designed for high-performance computing (HPC) and are only accessible via a resource manager (currently Open Grid Scheduler). You cannot log in to them directly, but you can submit jobs from any corn machine. Storage dedicated to jobs running on the barley cluster is available via /farmshare/user_data/ on all corn and barley nodes; it is considered scratch storage and is not backed up. Sign up for and email the farmshare-discuss mailing list if you have any questions or would like information not listed here.
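A minimal submit script might look like the following (the script name, job name, and output paths are all hypothetical; see `man qsub` on any corn for the full option list):

```shell
#!/bin/bash
# myjob.sh -- hypothetical submit script; keep it under
# /farmshare/user_data/ so the barley nodes can see it
#$ -N myjob        # job name shown in qstat
#$ -cwd            # run in the directory you submitted from
#$ -o myjob.out    # stdout goes here
#$ -e myjob.err    # stderr goes here

echo "running on $(hostname)"
```

Submit it from any corn with `qsub myjob.sh` and watch it with `qstat -u $USER`.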
Since Farmshare is open to the world, for extra security we use standard central Stanford two-factor authentication which uses the software from Duo Two-Factor(https://www.duosecurity.com/). You will still need to have a valid Kerberos credential or password in addition to the second factor to authenticate. Once your Kerberos ticket or password is accepted, you will see a second authentication prompt from Duo. It will list device options regarding how to authenticate and you can either choose one, or just enter a passcode generated via the Duo mobile app at the prompt. After a successful second factor authentication you will be greeted with the FarmShare message of the day (motd).
To avoid having to 'Duo' every time you open a new terminal to corn, there is a workaround. Add the following lines to your ~/.ssh/config file on your local machine (not on FarmShare) to enable ControlMaster, which creates a tunnel on your first login and re-uses it on subsequent connections, avoiding Duo after the initial connection. This only works if you log in to the same node the tunnel was established to. Once you are logged in to a corn node, hopping (ssh) to any other corn will not require Duo again*.
Host corn corn?? corn.stanford.edu corn??.stanford.edu
    ControlMaster auto
    ControlPath ~/.ssh/%r@%h:%p
    ControlPersist yes
- * If you use the 'PreferredAuthentications' option in your .ssh/config, you will need to add "gssapi-with-mic,hostbased" to the list of PreferredAuthentications, OR use "ssh -oPreferredAuthentications=gssapi-with-mic,hostbased cornXX" from the command line to override the .ssh/config file.
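Once a master connection exists, OpenSSH's control commands let you query or close the shared tunnel; a sketch (assumes you connected via corn.stanford.edu, so these commands are specific to your setup and cannot be run elsewhere):

```shell
# ask the control master whether it is still running
ssh -O check corn.stanford.edu

# cleanly close the shared tunnel when you are done
ssh -O exit corn.stanford.edu
```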
SSH Tunnel Sharing on Windows
This 'ControlMaster' behavior can be accomplished with SecureCRT's 'Clone Session' option after opening the initial connection. It is also possible with the latest PuTTY development snapshot: simply enable the 'Share SSH connection if possible' SSH option, and subsequent connections will share the SSH tunnel.
How to get help
- You can e-mail email@example.com
- If you're e-mailing about a barley job, please mention that it's on barley and include the job number.
SMACC office hours
- You can come to office hours. Summer 2015: by appointment. Spring 2015 was held in the Huang basement in front of the ICME door:
- Have a computational or statistical problem you need help with? Have an account on FarmShare or Proclus but unsure what to do next? Have a boatload of data to make sense of? Wondering where you can run your research project, and who can help? You know what you want to do, but aren't sure how best to do it? Help is here in the form of SMACC: Stat, Math, Algorithmic and Computational Consulting. Technical consultants from ICME, Research Computing, Statistics and IRiSS are available to work with you in the basement of Huang (in front of ICME). It's one-stop shopping for your scientific computing needs: rather than poking around web sites and mailing multiple groups, drop by and catch all of us at once.
- Details are at SMACC
The "cardinal" machines are small VMs intended for long-running processes (on the order of days) that are not resource intensive, e.g. mail/chat clients. You could log in to a cardinal and run a screen/tmux session there to do things on other machines.
Simply "ssh cardinal.stanford.edu" with your SUNetID credentials.
There are currently 3 cardinal machines: cardinal1, cardinal2 and cardinal3, load-balanced via cardinal.stanford.edu.
Things you can do on cardinal:
- text based email clients
- transfer files to and from AFS and your desktop
- access command line utilities
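For example, file transfer and a long-lived session on cardinal might look like this (file and session names are hypothetical; this is a sketch of the pattern, not a required workflow):

```shell
# copy a file from your local machine into your FarmShare (AFS) home
scp results.csv cardinal.stanford.edu:~/

# on a cardinal, start (or reattach to) a named tmux session;
# run your text-based mail client inside it, detach with Ctrl-b d,
# and the session keeps running after you log out
tmux new-session -A -s mail
```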
The "corn" machines are general-purpose Ubuntu boxes for research and education, and you can run whatever you want on them (so long as you don't negatively impact other users and the work you are doing is directly connected to the university's academic or research mission). Please read the policies and the motd first.
Simply "ssh corn.stanford.edu" with your SUNetID credentials.
There are 10 newer corn machines (128G RAM), 20 older corn machines (32G RAM), and 10 oldest corn machines (24G RAM).
Things you can do on corn:
- email to Stanford addresses only; for general text-based email please use a cardinal
- access command line utilities
- access desktop environment via VNC
- access developer toolchains (c/c++/java/python/go/julia/etc)
- run licensed software (mathematica/stata/matlab/gaussian/sas/etc) as described here
- submit jobs to the Barley cluster
- basically anything you would do on a desktop Ubuntu system
- Policies: http://itservices.stanford.edu/service/sharedcomputing/policies
- IT services page: https://itservices.stanford.edu/service/sharedcomputing
- Q? E-mail firstname.lastname@example.org or file a HelpSU: http://helpsu.stanford.edu/?pcat=farmshare
- Future vision as of summer 2010: http://itservices.stanford.edu/strategy/sysadmin/timeshare
The "rye" machines are general-purpose Ubuntu boxes (same as corn) but have Nvidia GPUs.
Things you can do on rye:
- anything you can do on a corn
- run CUDA 6.0/OpenCL programs (text based or GUI)
- run 3D graphics programs such as PyMOL
- Detailed Rye info
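As a quick sanity check after landing on a rye node, something like the following should work (the module name is taken from the FarmShare module listing on this wiki; nvidia-smi is assumed to be installed alongside the GPU driver):

```shell
# load the CUDA toolkit module
module load cuda/5.5

# confirm the compiler is on PATH
nvcc --version

# list the GPUs visible on this rye node
nvidia-smi -L
```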
The "barley" machines are newer general-purpose Ubuntu boxes that run the jobs you submit via the resource manager software. You should not normally log in to a barley machine directly, though you may do so to troubleshoot your running jobs.
Things you can do on barley:
- cpu intensive jobs
- large memory jobs (up to 192GB)
- lots of cpu intensive jobs
- thousands of cpu intensive jobs
- Detailed Barley info
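For the "thousands of jobs" case, Grid Engine array jobs are the usual tool: one script, many numbered tasks, each seeing its own $SGE_TASK_ID. A hypothetical sketch (the input-file naming scheme is an assumption for illustration):

```shell
#!/bin/bash
# myarray.sh -- hypothetical array-job script
#$ -N myarray
#$ -cwd
#$ -t 1-100    # run 100 tasks, numbered 1..100

# each task picks its own input file based on the task id
echo "task ${SGE_TASK_ID} processing input_${SGE_TASK_ID}.dat"
```

Submit with `qsub myarray.sh`; `qstat` then shows the pending task range as a single entry.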
Examples of using the barley cluster
- Introductory examples:
- Access Mysql from Matlab
- MPI Abinit
- Gaussview: Automated Submission Script Creation & Submission
Questions or requests about installed software
If you have a question or request about the installed software on FarmShare please email us: email@example.com
Questions about how to use installed software
If you need help on usage we would suggest:
- The FarmShare mailing list. The farmshare-discuss list has hundreds of other FarmShare users, so it is quite likely somebody will be able to help. Please e-mail the FarmShare user community.
- SMACC. SMACC office hours are an excellent place to start if you would like to discuss or ask a question in person. See SMACC.
The FarmShare machines are running Ubuntu 14.04, and the software is from the Ubuntu repositories, e.g. run dpkg -l | grep ^i to see the list of installed packages.
In addition to Ubuntu packages, the following packages are installed and are available via the "module" command:
See FarmShare_software for detailed examples.
$ module avail

---------------- /farmshare/software/free/lmod-5.0-install/lmod/lmod/modulefiles/Core ----------------
   lmod/lmod    settarg/settarg

---------------- /farmshare/software/mf/raring ----------------
   abinit/7.4.2             cudasamples/5.5              julia/git11262013    povray/3.6.1             statamp/12.1
   acml/5.3.1               farmvnc/0.1                  mathematica/9.0      sagemath/5.11            statase/12.1
   ampl/20120629            farmvnc/0.2 (D)              matlab/r2012b        sas/9.2                  stattransfer/12
   atompaw/184.108.40.206   gams/24.1                    matlab/r2013a (D)    sas/9.3
   cplex/12.4               gaussian/g09gview50 (D)      mzmine/2.10          sas/9.4 (D)
   cuda/5.5                 gaussian/g09sse4gview50      openmpi/1.6.5        sentarus/H_2013.03-SP1

---------------- /farmshare/software/mf/raring-compat ----------------
   ANSYS         GAMS-24.1       MATLAB-R2013a     SAS-v9.2           StataMP-12.1
   CPLEX-12.4    MATLAB-R2012b   Mathematica-9.0   StatTransfer-v12   StataSE-12.1

  Where:
   (D): Default Module

See https://www.stanford.edu/group/farmshare/cgi-bin/wiki/index.php/FarmShare_software for a description of how to use modules. A few commands to get you started:

   module load matlab          # load the default (latest) version of MATLAB
   module load matlab/r2012b   # load a specific version
   module load MATLAB-R2012b   # same, via the compat name
   module spider matlab        # find out more information about a package
Monitoring / Status
You probably want to try these commands:
qstat -g c           # cluster slots summary by queue
qhost -F mem_free    # show available memory on each host
qstat -f -u \*       # show all jobs in the system
- Current status of farmshare machines: http://barley-monitor.stanford.edu/ganglia/
- More detailed graphs: http://barley-monitor.stanford.edu/munin/
For important announcements, we plan to:
- add it to this wiki
- modify /etc/motd on the corn machines
- send a mail to farmshare-discuss
We have mailing lists at @lists.stanford.edu; see https://itservices.stanford.edu/service/mailinglists/tools
Want to learn HPC? Free education materials available:
FarmShare has GPUs, but if you need to scale up to a cluster ICME is an excellent resource:
Other GPU resources are:
- Engineering / Computer Science computer labs: myth20 through myth32 have nVidia GPU modules for development: http://cs.stanford.edu/computing-guide/overview/computer-systems/myth
Other similar wikis/clusters on campus (you might not have access to these):
- Engineering / Computer Science computer labs (myth.stanford.edu), open to all fully sponsored SunetIDs: http://cs.stanford.edu/computing-guide/overview/computer-systems/myth
- Stanford Research Computing cluster: http://sherlock.stanford.edu
- Genetics/bioinformatics service center cluster: https://www.stanford.edu/group/scgpm/cgi-bin/informatics/wiki/index.php/Main_Page
- SU-HPC group: https://www.stanford.edu/group/su-hpc/cgi-bin/mediawiki/index.php/Special:Recentchanges
- Proclus (H&S cluster): https://www.stanford.edu/group/proclus/cgi-bin/mediawiki/index.php/Main_Page
The FarmShare resources are made available to students, faculty and staff with fully sponsored SUNetIDs to facilitate research at Stanford University. They give researchers a place to experiment and learn about technical solutions for reaching their research goals without needing to write a grant for a cluster. FarmShare is focused on making it easier to learn how to parallelize research computing tasks and to use research software, including a "scheduler" (distributed resource management system) for submitting compute jobs.
By using FarmShare, new researchers can more easily adapt to larger clusters when they have big projects that involve federally funded resources, shared Stanford clusters, or a small grant-funded cluster.