For a community support forum on OpenFOAM at CHPC, please go to www.opensim.co.za/index.php/forum
This section describes how to run OpenFOAM on the clusters at CHPC. Versions 2.2.2, 2.3.0, 2.4.0, 3.0.1, 4.0, 5.0, 6.0, v1606+, v1612+, v1706, v1712, v1806 and foam-extend-3.1 are installed in /apps/chpc/compmech/CFD/OpenFOAM. Source the required environment from one of the files OF222, OF240, OF301, OF40, OF50, OF60, OF1606+, OF1612+, OF1706, OF1712 or OF1806 respectively, as illustrated in the example job scripts below. Also source the file from your .bashrc file, to ensure that the compute nodes have the right environment. Some MPI codes do not need this, as ssh is only used (together with an absolute path) to start the MPI ranks, after which all communication is via MPI. For OpenFOAM, however, it is essential for each compute node to set up the appropriate environment, and this is best done by way of the script sourced from .bashrc. It is assumed that you are familiar with running OpenFOAM solvers in parallel, and have already set up the case directory. The Gmsh and cfMesh meshing utilities are also installed and are added to the executable path when any of the OFxxx files are sourced. OpenFOAM-extend-3.1 has been installed on an experimental basis. To enable it, source the file /apps/chpc/compmech/CFD/OpenFOAM/OFxt31.
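For example, a single line in ~/.bashrc is enough; the version shown here (OpenFOAM 6.0) is just an assumption, so pick the file matching the version you need:

# Source the CHPC OpenFOAM environment in every shell, including those started on compute nodes
. /apps/chpc/compmech/CFD/OpenFOAM/OF60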
As an experiment, we have added a version of OpenFOAM-5.0 that uses MPICH-3.2 rather than OpenMPI. We have experienced better performance and stability with MPICH than with OpenMPI on some other codes, so it was regarded as worthwhile to also try this with OpenFOAM. The source file for this version is OF50mpich, and the mpirun command in the script should be modified to read mpirun -iface ib0, to force MPICH to use the InfiniBand network rather than Ethernet. In practice, on Lengau this seems to make little difference, as Ethernet traffic between compute nodes is carried over InfiniBand anyway.
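A hedged sketch of the modified mpirun line for the MPICH build, using the same variables as the example job script below:

# MPICH (Hydra) launcher: -iface ib0 forces communication over the InfiniBand interface
mpirun -iface ib0 -np $nproc -machinefile $PBS_NODEFILE $exe -parallel > foam.out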
Unfortunately, the OpenFOAM development community has become rather fragmented, and the code base has been forked into several variants. This has made it impractical for CHPC to continuously keep up with installing the latest versions of all the variants. However, if you need a particular version or variant, please contact us through the helpdesk and we will take the necessary action.
Copy the case directory you want to run into your Lustre folder (/mnt/lustre/<username>). The job will fail unless it is run from somewhere on the Lustre file system.
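For example (a sketch assuming a case directory called myCase in your home directory; substitute your own username and case name):

cp -r ~/myCase /mnt/lustre/<username>/    # copy the prepared case to Lustre
cd /mnt/lustre/<username>/myCase          # submit and run the job from here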
PBSPro is used as the scheduler. Users are requested to study the example scripts carefully. In addition, users are required to include the project ID (dummy value projectid in the example below) in the submit script. Create a job file for the PBSPro scheduler containing the following text in your case directory (named, for example, runFoam):
#!/bin/bash
### The method of requesting and distributing the nodes has changed. This 72-way example calls
### for 3 (virtual) nodes, each with 24 processor cores, running 24 MPI processes.
### Please note that it is necessary to specify both ncpus and mpiprocs, and
### for OpenFOAM these should be identical to each other.
### For your own benefit, try to estimate a realistic walltime request. Over-estimating the
### wallclock requirement interferes with efficient scheduling, will delay the launch of the job,
### and ties up more of your CPU-time allocation until the job has finished.
#PBS -P projectid
#PBS -l select=3:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -q normal
#PBS -l walltime=01:00:00
#PBS -o /home/username/scratch/foamJobs/job01/stdout
#PBS -e /home/username/scratch/foamJobs/job01/stderr
#PBS -m abe
#PBS -M username@email.co.za
### Source the OpenFOAM environment: also place the following line in your .bashrc file
### Strictly speaking this line could be omitted, because it HAS to be in the .bashrc file anyway
. /apps/chpc/compmech/CFD/OpenFOAM/OF1806
##### Running commands
# Set this environment variable explicitly.
export PBS_JOBDIR=/home/username/scratch/foamJobs/job01
# Explicitly change to the job directory
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
exe=simpleFoam
#### These next statements build an appropriate decomposeParDict file
#### based on the requested number of MPI processes
echo "FoamFile" > system/decomposeParDict
echo "{" >> system/decomposeParDict
echo " version 2.0;" >> system/decomposeParDict
echo " format ascii;" >> system/decomposeParDict
echo " class dictionary;" >> system/decomposeParDict
echo " object decomposeParDict;" >> system/decomposeParDict
echo "}" >> system/decomposeParDict
echo "numberOfSubdomains " $nproc ";" >> system/decomposeParDict
echo "method scotch;" >> system/decomposeParDict
#### End of decomposeParDict file
decomposePar -force > decompose.out
mpirun -np $nproc -machinefile $PBS_NODEFILE $exe -parallel > foam.out
reconstructPar -latestTime > reconstruct.out
rm -rf processor*
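For reference, the system/decomposeParDict generated by the echo statements above will look roughly as follows for the 72-way example (whitespace tidied here for readability):

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}
numberOfSubdomains  72;
method  scotch;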
Notes:
* The select statement in the #PBS -l line requests a number of virtual nodes, each with ncpus cores and the same number of mpiprocs. So, in the example above, the case must be decomposed into 72 subdomains via the numberOfSubdomains setting in decomposeParDict. Decomposition method scotch is recommended.
* walltime should be a small over-estimate of the time needed for the job to run, in hours:minutes:seconds.
* exe should be set to the OpenFOAM executable that you wish to run.
* Standard output and standard error are written to the files specified by the #PBS -o and #PBS -e directives. These are buffered and only written after job completion. If you would like to monitor the run as it progresses, use the normal > output redirection to send output to file explicitly, as in the example script.
* . /apps/chpc/compmech/CFD/OpenFOAM/OF301 sources a script file that loads the necessary modules and sets other environment variables for running OpenFOAM-3.0.1. If you want to run OpenFOAM-2.4.0, change it to . /apps/chpc/compmech/CFD/OpenFOAM/OF240.
* Submit the job script with the command qsub runFoam, and check its status with qstat or qstat -u <username> (only shows your jobs); a short example follows these notes.
* Check the solver's log file, as well as stdout and stderr, to monitor progress and catch errors.
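A brief sketch of submitting and monitoring the job from the case directory (the case path and file names follow the examples above):

cd /mnt/lustre/<username>/myCase    # case directory on Lustre (example name)
qsub runFoam                        # submit the job to PBSPro
qstat -u <username>                 # list only your jobs
tail -f foam.out                    # follow the solver output redirected by the script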
This graph should give you an indication of how well OpenFOAM scales on the cluster. Very good scaling has been achieved up to at least 1000 cores for a 60 million cell model. This ties in quite well with a general rule of thumb that parallelising down to around 50 000 cells per core is a good starting point.
However, using all 24 cores per node is not necessarily beneficial. These two graphs indicate that using 16, 20 or 24 cores per node makes little difference to the performance per node.
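As an illustrative assumption (not part of the original example script), reserving full 24-core nodes but running only 16 MPI processes on each would look like this; $PBS_NODEFILE then contains 48 entries, so nproc and the generated numberOfSubdomains adjust automatically:

#PBS -l select=3:ncpus=24:mpiprocs=16:nodetype=haswell_reg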
This section describes how to set up the environment for compiling your own OpenFOAM solvers and libraries at CHPC. It assumes you are familiar with running OpenFOAM at CHPC as described above, and with compiling OpenFOAM code on your own machine.
First make sure the OpenFOAM environment is set up. Look at the OFxxx environment files in /apps/chpc/compmech/CFD/OpenFOAM (for example OF301). Copy one of these files to your user directory and edit the path to point to the appropriate OpenFOAM-*.*.*/etc/bashrc file. Edit that file to suit your installation. This step loads the module for GCC, sets up the OpenFOAM environment and adds to your paths the appropriate gmp, mpc, mpfr and libiconv versions that have been compiled and installed in /apps/chpc/compmech/CFD/OpenFOAM. Be warned that OpenFOAM-2.4.0 has been compiled with dedicated application installs of gcc, mpc, mpfr, gmp, cgal, libboost and openmpi. YMMV.
Now you can use wmake to compile your code in any directory, and run it as described above.
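A minimal sketch, assuming your solver source lives in the standard OpenFOAM user project directory ($WM_PROJECT_USER_DIR) and is called myFoam (a hypothetical name):

cd $WM_PROJECT_USER_DIR/applications/solvers/myFoam   # hypothetical solver directory
wmake                                                 # build; the binary goes to $FOAM_USER_APPBIN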
NB: OpenFOAM builds typically take several hours, so you should not run them on the login node (the build may be killed if you do). Instead, submit the build as a normal cluster job, or use qsub -I -q smp to get an interactive session on a compute node.
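A hedged example of requesting such an interactive session (the resource values and project ID are placeholders to adjust):

qsub -I -q smp -P projectid -l select=1:ncpus=4 -l walltime=04:00:00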