For a community support forum on OpenFOAM at CHPC, please go to www.opensim.co.za/index.php/forum
====== Warning: mpirun parameters changed ======

It has been necessary to change some of our MPI installations. The reason for this is that these installations were compiled without support for the PBS scheduler, which could result in orphaned processes being left on the cluster if a job failed for any reason. The revised MPI installations keep all MPI ranks under full control of PBS, which means that they should be cleaned up properly in the event of a job failure. However, users may now experience an incompatibility with the mpirun command. A typical error message may look like this:

''HYDT_bsci_init (tools/bootstrap/src/bsci_init.c:175): unrecognized RMK: user''

**The simple solution to this error is to remove the ''-machinefile $PBS_NODEFILE'' parameter from the mpirun command.**
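For example, if a script previously contained a line like the commented one below (the solver name and core count here are placeholders, not part of these instructions), the corrected form simply drops ''-machinefile'':

<code bash>
# Old form, which now fails with the "unrecognized RMK: user" error:
# mpirun -iface ib0 -np 48 -machinefile $PBS_NODEFILE simpleFoam -parallel > foam.out

# Revised form, without -machinefile:
mpirun -iface ib0 -np 48 simpleFoam -parallel > foam.out
</code>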
  
===== Running OpenFOAM =====
  
This section describes how to run OpenFOAM on the clusters at CHPC. Versions 2.4.0, 3.0.1, 4.0, 5.0, 6.0, 7.0, v1706, v1712, v1806, v1812, v1906, v1912, v2006 and foam-extend-3.1, 3.2 and 4.0 are installed in ''/apps/chpc/compmech/CFD/OpenFOAM''. Source the required environment from one of the files OF240, OF301, OF40, OF50, OF60, OF1706, OF1712, OF1806, OF1906, OF1912 or OF2006 respectively, as illustrated in the example job scripts below. Please be aware that versions after 5.0 and v1806 use MPICH rather than OpenMPI, and the mpirun command **has to** select the Infiniband network interface explicitly. You may also need to source the environment file from your .bashrc, to ensure that the compute nodes have the right environment. This **should** not be necessary, but it is sometimes a useful workaround if you experience MPI trouble. It is assumed that you are familiar with running OpenFOAM solvers in parallel, and have already set up the case directory. The Gmsh and cfMesh meshing utilities are also installed and are added to the executable path when any of the OFxxx files are sourced. OpenFOAM-extend has been installed on an experimental basis. To enable it, source the appropriate file in ''/apps/chpc/compmech/CFD/OpenFOAM/'', such as OFextend32.
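As a minimal illustration (the version chosen here is only an example), the environment file can be sourced in the job script itself or, as a workaround, from your .bashrc:

<code bash>
# In the job script (preferred):
. /apps/chpc/compmech/CFD/OpenFOAM/OF2006

# Workaround for stubborn MPI environment problems: source it from ~/.bashrc
echo ". /apps/chpc/compmech/CFD/OpenFOAM/OF2006" >> ~/.bashrc
</code>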
  
=== Using MPICH instead of OpenMPI ===
There is an additional build of OpenFOAM-5.0 that uses MPICH-3.2 rather than OpenMPI. The OpenFOAM-6.0, OpenFOAM-7.0, v1812, v1906, v1912 and v2006 installations are only available with MPICH. The source file for the MPICH build of OpenFOAM-5.0 is OF50mpich, and the mpirun command in the script **must** be modified to read ''mpirun -iface ib0'', to force MPICH to use the Infiniband network rather than Ethernet.
  
=== OpenFOAM versions ===
=== Step 1 ===
  
Copy the case directory you want to run into your lustre folder (''/mnt/lustre/users/<username>''). The job will fail unless it is run from somewhere on the lustre drive.
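For example (the case and user names below are placeholders):

<code bash>
# Copy a prepared case from the home directory to the lustre workspace
cp -r ~/foamCases/job01 /mnt/lustre/users/username/foamJobs/job01
</code>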
  
=== Step 2 ===
#PBS -q normal
#PBS -l walltime=01:00:00
#PBS -o /home/username/lustre/foamJobs/job01/stdout
#PBS -e /home/username/lustre/foamJobs/job01/stderr
#PBS -m abe
#PBS -M username@email.co.za
### Source the openFOAM environment:
. /apps/chpc/compmech/CFD/OpenFOAM/OF2006
##### Running commands
# Set this environment variable explicitly.
export PBS_JOBDIR=/home/username/lustre/foamJobs/job01
# Explicitly change to the job directory
cd $PBS_JOBDIR
decomposePar -force > decompose.out
## Issue the MPIRUN command.  Omit -iface ib0 when using OpenMPI versions
mpirun -iface ib0 -np $nproc $exe -parallel > foam.out
reconstructPar -latestTime > reconstruct.out
rm -rf processor*
This page describes how to set up the environment for compiling your own OpenFOAM solvers and libraries on CHPC. It assumes you are familiar with running OpenFOAM on CHPC as detailed [[howto:openfoam#running_openfoam|above]], and with compiling OpenFOAM code on your own machine.
  
Usually, the easiest approach is to use one of the installed OpenFOAM versions as a basis for compiling your own code. First choose and source the appropriate OFxx file in ''/apps/chpc/compmech/CFD/OpenFOAM''.
You should then set the following environment variables to filepaths somewhere where you have write permission (e.g. your home directory): **$FOAM_USER_APPBIN** and **$FOAM_USER_LIBBIN**. For instance, using OpenFOAM-6.0, with your solvers and libraries in userapps-6.0 and userlibs-6.0 sub-directories of an OpenFOAM directory in your home dir:

<file bash customOFenv.sh>
source /apps/chpc/compmech/CFD/OpenFOAM/OF60
export FOAM_USER_APPBIN=$HOME/OpenFOAM/userapps-6.0
export FOAM_USER_LIBBIN=$HOME/OpenFOAM/userlibs-6.0
</file>
Now you can use ''wmake'' to compile your code, and run it as described [[howto:openfoam#running_openfoam|above]]. You will need to source the original **OFxx** file, and then set **$FOAM_USER_APPBIN** and **$FOAM_USER_LIBBIN** in your submit scripts whenever you run an OpenFOAM job using your custom code.
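For example, the relevant part of such a submit script might look like the sketch below; the solver name ''myCustomFoam'' is hypothetical, and the directories follow the customOFenv.sh example above:

<code bash>
# Restore the standard OpenFOAM-6.0 environment
. /apps/chpc/compmech/CFD/OpenFOAM/OF60
# Point OpenFOAM at your own binaries and libraries
export FOAM_USER_APPBIN=$HOME/OpenFOAM/userapps-6.0
export FOAM_USER_LIBBIN=$HOME/OpenFOAM/userlibs-6.0
# Run the custom solver just like a standard one
mpirun -iface ib0 -np $nproc myCustomFoam -parallel > foam.out
</code>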
  
If you need to make more significant changes to OpenFOAM that require recompiling the whole package, you should first make a local copy of the most appropriate **OFxx** script, then edit it carefully so that all the filepaths therein point to your custom OpenFOAM installation (again, probably in your home directory). You can then source your altered OFxx script, cd to your custom OpenFOAM installation directory, and run Allwmake. If necessary, you can also set $FOAM_USER_APPBIN and $FOAM_USER_LIBBIN as described earlier (these can be set at the end of the doctored OFxx script, if desired).
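A rough sketch of that workflow, assuming your edited copy of the environment script is saved as ''~/OF60local'' (a hypothetical name) and already points at your own source tree:

<code bash>
# Source the edited environment file (all paths in it must point at your copy)
source ~/OF60local
# WM_PROJECT_DIR is set by the OpenFOAM environment to your source tree
cd $WM_PROJECT_DIR
# Rebuild the whole package, keeping a log of the build
./Allwmake > Allwmake.log 2>&1
</code>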
  
**NB:** OpenFOAM builds typically take several hours, so you should **not** run them on the login node (the build may be killed if you do). The build should instead be submitted as a normal cluster job, or use ''qsub -qsmp -I'' to get an interactive session on a compute node.
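For instance (the walltime request is illustrative only; further resource options may be required by your queue):

<code bash>
# Request an interactive shell on a compute node; once it opens, run the
# source / cd / Allwmake steps from the sketch above there.
qsub -qsmp -I -l walltime=08:00:00
</code>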
  