
OpenFOAM at CHPC


For a community support forum on OpenFOAM at CHPC, please go to www.opensim.co.za/index.php/forum

Running OpenFOAM

This section describes how to run OpenFOAM on the clusters at CHPC. Versions 2.2.2, 2.3.0, 2.4.0, 3.0.1, 4.0, v1606+, v1612+, v1706 and v1712, as well as foam-extend-3.1, are installed in /apps/chpc/compmech/CFD/OpenFOAM. Source the required environment from the corresponding file OF222, OF240, OF301, OF40, OF1606+, OF1612+, OF1706 or OF1712, as illustrated in the example job scripts below. Also source this file from your .bashrc file, to ensure that the compute nodes have the right environment. It is assumed that you are familiar with running OpenFOAM solvers in parallel and have already set up the case directory. The Gmsh and cfMesh meshing utilities are also installed and are added to the executable path when any of the OFxxx files is sourced. OpenFOAM-extend-3.1 has been installed on an experimental basis. To enable it, source the file /apps/chpc/compmech/CFD/OpenFOAM/OFxt31.
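
For example, to work with OpenFOAM-v1712 you could do the following (a minimal sketch; substitute the file corresponding to the version you need):

. /apps/chpc/compmech/CFD/OpenFOAM/OF1712
echo ". /apps/chpc/compmech/CFD/OpenFOAM/OF1712" >> ~/.bashrc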

Unfortunately, the OpenFOAM development community has become rather fragmented, and the code base has been forked into several variants. This has made it impractical for CHPC to continuously keep up with installing the latest versions of all the variants. However, if you need a particular version or variant, please contact us through the helpdesk and we will take the necessary action.

Step 1

Copy the case directory you want to run into your scratch folder (/home/<username>/scratch5). The job will fail unless it is run from somewhere on the scratch drive.
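
For example, assuming your case directory is called myCase and currently lives in your home directory (the name is purely illustrative):

cp -r ~/myCase /home/<username>/scratch5/
cd /home/<username>/scratch5/myCase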

Step 2

PBSPro is used as the scheduler. Users are requested to study the example scripts carefully. In addition, users are required to include the project ID (dummy value projectid in the example below) in the submit script. Create a job file for the PBSPro scheduler in your case directory (named, for example, runFoam) containing the following text:

runFoam.qsub
#!/bin/bash 
### The method of requesting and distributing the nodes has changed.  This 72-way example calls
### for 3 (virtual) nodes, each with 24 processor cores, running 24 MPI processes per node.
### Please note that it is necessary to specify both ncpus and mpiprocs, and 
### for OpenFOAM these should be identical to each other.
### For your own benefit, try to estimate a realistic walltime request.  Over-estimating the 
### wallclock requirement interferes with efficient scheduling, will delay the launch of the job,
### and ties up more of your CPU-time allocation until the job has finished.
#PBS -P projectid
#PBS -l select=3:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -q normal
#PBS -l walltime=01:00:00
#PBS -o /home/username/scratch/foamJobs/job01/stdout
#PBS -e /home/username/scratch/foamJobs/job01/stderr
#PBS -m abe
#PBS -M username@email.co.za
### Source the OpenFOAM environment.  Also place the following line in your .bashrc file
. /apps/chpc/compmech/CFD/OpenFOAM/OF1712
##### Running commands
# Set this environment variable explicitly.
export PBS_JOBDIR=/home/username/scratch/foamJobs/job01
# Explicitly change to the job directory
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
exe=simpleFoam
#### These next statements build an appropriate decomposeParDict file
#### based on the requested number of nodes
cat > system/decomposeParDict << EOF
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}
numberOfSubdomains $nproc;
method scotch;
EOF
#### End of decomposeParDict file
decomposePar -force > decompose.out
mpirun -np $nproc -machinefile $PBS_NODEFILE $exe -parallel > foam.out
reconstructPar -latestTime > reconstruct.out
rm -rf processor*

Notes:

  • You may wish to add further options for PBS, but these are the minimal ones that are required.
  • The number of parallel processes is set by specifying the number of virtual nodes, each with ncpus cores and an equal number of mpiprocs. In the example above the case must therefore be decomposed into 72 subdomains via the numberOfSubdomains setting in decomposeParDict; the example script builds this file automatically. The scotch decomposition method is recommended.
  • walltime should be a small over-estimate of the time for the job to run, in hours:minutes:seconds.
  • exe should be set to the OpenFOAM executable that you wish to run.
  • Screen output for the run will go to the files specified by the #PBS -o and #PBS -e directives. These are buffered and only written after job completion. If you would like to monitor the run as it progresses, use normal > output redirection to send output to a file explicitly, as in the example script.
  • The line . /apps/chpc/compmech/CFD/OpenFOAM/OF1712 sources a script file that loads the necessary modules and sets other environment variables for running OpenFOAM-v1712. To run a different version, for example OpenFOAM-2.4.0, change it to . /apps/chpc/compmech/CFD/OpenFOAM/OF240.
  • If you need to submit a large number of small jobs, for example when doing a parametric study, please use Job Arrays. Refer to the PBS-Pro guide at http://wiki.chpc.ac.za/quick:pbspro for guidance on how to set this up; a rough sketch follows this list.
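
A rough job array sketch (the array range and per-case directory names are only illustrative, and each case directory is assumed to have been decomposed already):

#!/bin/bash
#PBS -P projectid
#PBS -l select=1:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -q normal
#PBS -l walltime=01:00:00
#PBS -J 1-10
. /apps/chpc/compmech/CFD/OpenFOAM/OF1712
# Each sub-job runs in its own, already decomposed case directory: case_1, case_2, ...
cd /home/username/scratch/foamJobs/case_${PBS_ARRAY_INDEX}
nproc=`cat $PBS_NODEFILE | wc -l`
mpirun -np $nproc -machinefile $PBS_NODEFILE simpleFoam -parallel > foam.out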

Step 3

Submit the job script with the command

qsub runFoam

  • Useful commands for monitoring are qstat or qstat -u <username> (the latter shows only your own jobs).
  • While the job is running you can monitor its progress in the log file, as illustrated below.
  • If the log file is empty and the job has finished running, check for errors in the output files (in the example above, stdout and stderr).
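
For example (foam.out is the log file created by the output redirection in the example script, and the path is the job directory used there):

qstat -u <username>
tail -f /home/username/scratch/foamJobs/job01/foam.out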

Troubleshooting

  • Are you running from the scratch directory?
  • No extra modules or include directories should be necessary to run OpenFOAM. Try commenting out unnecessary lines in your startup files.
  • Have you tested your input parameters on a small, single-process version of your model? A serial test in an interactive session, sketched below, is a quick way to do this.
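
A minimal sketch of such a serial test in an interactive session (the resource requests are only illustrative, and the smp queue is the one mentioned in the compilation section below):

qsub -I -q smp -P projectid -l select=1:ncpus=4 -l walltime=01:00:00
. /apps/chpc/compmech/CFD/OpenFOAM/OF1712
cd /home/username/scratch/foamJobs/job01
simpleFoam > serialTest.out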

Parallel scaling

This graph should give you an indication of how well OpenFOAM scales on the cluster. Very good scaling has been achieved up to at least 1000 cores for a 60 million cell model. This ties in quite well with the general rule of thumb that around 50 000 cells per core is a good starting point for parallel decomposition: at 50 000 cells per core, a 60 million cell model corresponds to roughly 1 200 cores.

However, using all 24 cores per node is not necessarily beneficial. These two graphs indicate that using 16, 20 or 24 cores per node makes little difference to the performance per node.

Compiling your own OpenFOAM code

This section describes how to set up the environment for compiling your own OpenFOAM solvers and libraries at CHPC. It assumes you are familiar with running OpenFOAM at CHPC as detailed above, and with compiling OpenFOAM code on your own machine.

First make sure the OpenFOAM environment is set up. Look at one of the environment files, for example /apps/chpc/compmech/CFD/OpenFOAM/OF301. Copy the file to your user directory and edit the path so that it points at the appropriate OpenFOAM-*.*.*/etc/bashrc file, then edit that bashrc file to suit your installation. Sourcing the file loads the module for GCC, sets up the OpenFOAM environment and puts the appropriate gmp, mpc, mpfr and libiconv versions that have been compiled and installed in /apps/chpc/compmech/CFD/OpenFOAM in your paths. Be warned that OpenFOAM-2.4.0 has been compiled with dedicated application installs of gcc, mpc, mpfr, gmp, cgal, libboost and openmpi, so your mileage may vary.
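
For example (the name of the copied file is only illustrative):

cp /apps/chpc/compmech/CFD/OpenFOAM/OF301 ~/OF301-custom
# Edit ~/OF301-custom so that it points at your own OpenFOAM-3.0.1/etc/bashrc
. ~/OF301-custom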

Now you can use wmake to compile your code in any directory, and run it as described above.

NB: OpenFOAM builds typically take several hours, so you should not run them on the login node (a build started there may be killed). Instead, submit the build as a normal cluster job, or use qsub -I -q smp to get an interactive session on a compute node. A sketch of a build job is given below.
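
A minimal sketch of such a build job (the source directory, environment file and walltime are only illustrative; run wmake in the directory containing your solver or library, or ./Allwmake if your project provides one):

#!/bin/bash
#PBS -P projectid
#PBS -l select=1:ncpus=4
#PBS -q smp
#PBS -l walltime=04:00:00
. /apps/chpc/compmech/CFD/OpenFOAM/OF1712
cd /home/username/mySolver
wmake > wmake.out 2>&1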
