This is a crude solution that we implemented while investigating a hardware solution. It does not measure the load on each server when it assigns users to machines; it merely reads a file called servers.fom and decides how users should be distributed among the servers. Before assigning a user to a server, it checks three conditions on that server: whether Tomcat is running, whether the DB connection is available, and whether AFS is up (at Stanford, all our course directories are located in AFS). If all three conditions are met, the load balancer considers the server healthy and eligible for assignment. Otherwise it removes the server from the server list and checks the three conditions on the next server. If all the servers fail, an error message is displayed.
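The selection logic described above can be sketched as follows. This is a minimal illustration, not the actual CourseWork code; the function and check names are assumptions made for the example.

```python
def pick_server(servers, is_healthy):
    """Return the first server that passes all three health checks.

    servers: ordered list of candidate hostnames (mutated: failed
    servers are removed, mirroring the behavior described above).
    is_healthy(server, check): caller-supplied probe function.
    """
    for server in list(servers):
        # The three conditions: Tomcat running, DB reachable, AFS up.
        if all(is_healthy(server, check) for check in ("tomcat", "db", "afs")):
            return server
        # Server failed a check: drop it and try the next one.
        servers.remove(server)
    # All servers failed the checks.
    raise RuntimeError("No servers available")
```

If every server fails its checks, the sketch raises an error, corresponding to the error message the load balancer displays.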
Once a user has been assigned, his or her HTTP requests are handled by the assigned server. In other words, the user session is not shared with other servers.
Configuring your load balancer (our loadbalancer.properties)
There are three files that you need to create in order to use the load balancer: servers.fom, servers.probe.conf and probe.txt. The locations of the three files are specified in loadbalancer.properties. Place servers.fom and servers.probe.conf in a location that is accessible by the web application, e.g. /usr/local/coursework/. You may duplicate the files on all the servers; however, you will then need to update every server whenever you change a setting. At Stanford, we place the files in AFS, and each server has access to the AFS system.
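A loadbalancer.properties pointing at a shared location might look like the fragment below. The property names here are hypothetical (the original does not list them); only the three file names and the example path come from the text above.

```properties
# Hypothetical key names -- check your own loadbalancer.properties for the real ones
servers.fom=/usr/local/coursework/servers.fom
servers.probe.conf=/usr/local/coursework/servers.probe.conf
probe.txt=/usr/local/coursework/probe.txt
```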
coursework-dev2.stanford.edu=LOCAL   <-- this server is not in production
coursework-x.stanford.edu=2          <-- users will be assigned to this server 2 times more often
coursework-y.stanford.edu=1
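A sketch of how the servers.fom weights above could drive assignment, assuming the numbers are relative weights (a server with 2 is chosen twice as often as one with 1) and LOCAL entries are skipped. Function names are illustrative, not from the actual implementation.

```python
import random

def parse_fom(text):
    """Parse servers.fom lines of the form 'host=WEIGHT'.

    Entries marked LOCAL (not in production) are skipped.
    """
    weights = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        host, value = line.split("=", 1)
        if value.strip().upper() == "LOCAL":
            continue  # not in production: never assign users here
        weights[host.strip()] = int(value.strip())
    return weights

def choose_server(weights, rng=random):
    """Weighted random choice over the parsed FOM table.

    A server whose FOM has been set to 0 (by failed probes) is
    never selected.
    """
    hosts = list(weights)
    return rng.choices(hosts, weights=[weights[h] for h in hosts], k=1)[0]
```

With the table above, coursework-x would receive roughly twice as many new users as coursework-y, and coursework-dev2 none at all.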
ProbeResponse="OK"       # returned string if all tests OK
ProbeInterval=5          # in seconds (one server each time)
ProbeFailTrigger=1       # how many bad probes before FOM=0
ProbeRecoverTrigger=2    # how many good ones to restore FOM
ConfigReadInterval=300   # in seconds
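The fail/recover trigger semantics above can be sketched as a small state machine. This is an assumed reading of ProbeFailTrigger and ProbeRecoverTrigger (consecutive bad probes take a server out, consecutive good probes bring it back); the class is illustrative, not the actual code.

```python
class ProbeState:
    """Track consecutive probe results for one server."""

    def __init__(self, fail_trigger=1, recover_trigger=2):
        self.fail_trigger = fail_trigger        # ProbeFailTrigger
        self.recover_trigger = recover_trigger  # ProbeRecoverTrigger
        self.bad = 0    # consecutive failed probes
        self.good = 0   # consecutive successful probes
        self.up = True  # whether the server's FOM is in effect

    def record(self, response, expected="OK"):
        """Record one probe result and return whether the server is up."""
        if response == expected:
            self.good += 1
            self.bad = 0
            if not self.up and self.good >= self.recover_trigger:
                self.up = True   # enough good probes: restore the FOM
        else:
            self.bad += 1
            self.good = 0
            if self.up and self.bad >= self.fail_trigger:
                self.up = False  # enough bad probes: treat FOM as 0
        return self.up
```

With the settings shown (ProbeFailTrigger=1, ProbeRecoverTrigger=2), a single bad probe drops the server's FOM to 0, and two consecutive good probes restore it.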