#!/bin/bash
###
###
### The same workflow should work for 23.2 and 23.3; just substitute 23.2 with 23.3 if
### you need the more recent version.
###
###
### Request a single node for 4 MPI processes and 2 OpenMP threads per MPI process
#PBS -l select=1:ncpus=8:mpiprocs=4
#PBS -P MECH1234
#PBS -l walltime=02:00:00
#PBS -q serial
#PBS -o /home/jblogs/lustre/mfix-23.2/tests/fluid/FLD03/mfix.out
#PBS -e /home/jblogs/lustre/mfix-23.2/tests/fluid/FLD03/mfix.err
### Change directory to a typical test case that comes with MFix. Obviously, use your own directory.
cd /home/jblogs/lustre/mfix-23.2/tests/fluid/FLD03
### Prepare the Miniconda environment
source /home/apps/chpc/compmech/MFix/miniconda3/etc/profile.d/conda.sh
### Get the following into your path:
###  1. The standard MFix installation
###  2. gcc-8.3.0 (not used for 23.3)
###  3. mpich-3.3 (not used for 23.3)
module load chpc/compmech/mfix/23.2
### Activate the mfix conda environment
conda activate mfix-23.2
### Build your custom solver for both OpenMP and MPI. Use 4 threads for the compile.
build_mfixsolver --batch --smp --dmp -j 4
### We no longer need the mfix conda environment. Deactivate it.
conda deactivate
### Set up 2 OpenMP threads per MPI process
export OMP_NUM_THREADS=2
### The MFix test case comes with a script to run the case. This script contains the mpirun
### command, which can be edited. In this particular case it had to be edited to remove two
### options pertaining to running as root and oversubscribing cores. The mpirun command looks
### like this:
###   mpirun -np 4 ./mfixsolver -f mfix.dat nodesi=2 nodesj=2
### and can also be used below instead of the script.
./runtests.sh | tee mfixsolver.out
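The resource request above ties three numbers together: `ncpus=8` in the `#PBS -l select` line must equal `mpiprocs=4` multiplied by the 2 OpenMP threads set via `OMP_NUM_THREADS`. A minimal sketch of a sanity check you could run before submitting (the variable names are illustrative, not part of MFix or PBS):

```shell
#!/bin/bash
# Values mirroring the job script's request; edit these to match your own
# select statement and OMP_NUM_THREADS setting.
MPIPROCS=4          # mpiprocs in the #PBS select line
OMP_THREADS=2       # value exported as OMP_NUM_THREADS
NCPUS=8             # ncpus in the #PBS select line

# The total cores used (MPI ranks x OpenMP threads) should match ncpus;
# a mismatch either oversubscribes or wastes the allocated cores.
if [ $((MPIPROCS * OMP_THREADS)) -eq "$NCPUS" ]; then
    echo "resource request is consistent"
else
    echo "mismatch: adjust ncpus, mpiprocs, or OMP_NUM_THREADS"
fi
```

With the values from this script (4 × 2 = 8) the check prints `resource request is consistent`.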