GWU cluster (Corcoran Hall)
=== Hardware ===
Some information about the old cluster can be found [http://eagle.phys.gwu.edu/~fxlee/cluster/ here].
The cluster has been upgraded: it now has 16 nodes, each with 2 GPUs, for a total of 32 GPUs (GTX 480s). Each card has 1.5 GB of memory and delivers about 50 GFlops/s in double precision and 125 GFlops/s in single precision for dslash (see Ben's Performance benchmarks).
The interconnects are still 4x DDR InfiniBand, which provides 5 Gb/s x 4 lanes = 20 Gb/s (signaling rate); due to 8b/10b encoding this translates to 16 Gb/s = 2 GB/s of data in each direction (for more InfiniBand details consult the [http://en.wikipedia.org/wiki/Infiniband wikipedia page]).
=== Configuration ===
If you plan to use openmpi or mvapich2 to run your parallel jobs, don't forget to load the proper modules in your .cshrc file (for example, for openmpi you need '''module load openmpi/gnu'''). Furthermore, if you plan to use cuda v3.1, load the cuda31 module (we also have cuda v3.2 installed, but our codes are not yet compatible with it).
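A minimal sketch of the relevant lines in ~/.cshrc (the cuda31 name is taken from above; check '''module avail''' for what is actually installed on your account):

 # ~/.cshrc: load the MPI and CUDA environment modules at login
 module load openmpi/gnu
 module load cuda31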
The scheduling system on the cluster is Sun Grid Engine v6.2u5 ([http://gridengine.sunsource.net/ sge]). For InfiniBand we have installed OFED together with openmpi (v1.5) and mvapich2 (v1.5.1). We prefer to use openmpi since it provides tight integration with the scheduler (the mpi processes get killed when you delete a job from the scheduler). A simple job script is given below:
 #$ -S /bin/csh
 #$ -cwd
 #$ -j y
 #$ -o job_output.log$JOB_ID
 ##$ -q all.q@@node
 #$ -q all.q@@gpu
 #$ -l gpu_count=1
 #$ -pe openmpi 8
 #$ -l h_rt=01:30:00
 mpirun -n $NSLOTS --mca btl_openib_flags 1 test_dslash_multi_gpu < check.in >& output_np${NSLOTS}.log
Note that if your job uses GPUs you should request -l gpu_count=1 (one GPU per process) and you should also include the --mca btl_openib_flags 1 flag to make sure that openmpi and cuda work together. If you only run CPU codes, the flag is not necessary. Also note that the new nodes are in the all.q queue: use all.q@@gpu to run on them, or all.q@@node to use the old nodes.
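For completeness, here is a sketch of submitting and managing such a job with the standard SGE commands (the script name run_dslash.csh and the job id are placeholders):

 # submit the job script and check its status in the queue
 qsub run_dslash.csh
 qstat -u $USER
 # deleting a job also kills its MPI processes, thanks to the tight scheduler integration
 qdel <job_id>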
=== Queue admin ===
==== Disable/enable nodes ====
If you want to reboot a node while there are pending jobs in the queue, you should disable it from the queue first:
 qmod -d \*@gpu05
This disables node gpu05 in all queues. Note that we need to escape the * so that the shell doesn't expand it. Once the node has been rebooted, re-enable it using
 qmod -e \*@gpu05
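To check that the change took effect, inspect the queue instance states (a d in the states column marks a disabled instance), for example:

 # list the queue instances on gpu05; disabled ones show "d" in the states column
 qstat -f | grep gpu05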
==== Setting up parallel environments ====
To add/remove/modify a parallel environment (pe) use qconf. For example, '''qconf -spl''' shows you all available pe's and '''qconf -sp openmpi''' shows the configuration of the openmpi pe. To modify it, use '''qconf -mp openmpi'''. The important parameters to configure are '''start_proc_args''' and '''stop_proc_args''', which allow you to do pre/post execution setup, and '''allocation_rule''', which can be either '''$round_robin''' or '''$fill_up'''.
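As a rough illustration (the values below are generic, not necessarily what is configured on this cluster), '''qconf -sp openmpi''' prints something like:

 pe_name            openmpi
 slots              128
 user_lists         NONE
 xuser_lists        NONE
 start_proc_args    /bin/true
 stop_proc_args     /bin/true
 allocation_rule    $fill_up
 control_slaves     TRUE
 job_is_first_task  FALSE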