Instructional video: MFix at the CHPC.
MFIX is an open-source multiphase flow solver written in Fortran 90. It is used for simulating fluid-solid systems such as fluidized beds. As it has numerous potential uses in chemical engineering and mineral processing applications, a limited level of support for the package is now available at the CHPC.
Alternatively, users may download the code and install it in their home directory. A one-time free registration is required prior to downloading the source code. To register, go to the MFIX website at https://mfix.netl.doe.gov, click on “Register” at the bottom of the home page or go directly to https://mfix.netl.doe.gov/registration.php.
This wiki page details those issues specific to running MFIX on the Lengau cluster at CHPC. For a more general explanation of the use of MFIX, consult the documentation and example cases included with the source tarball.
In recent years, MFix has been substantially modernised. It can be used in two ways: interactively through its Python-based GUI, or from the command line with a standard or custom-built solver. However, even with the command-line approach, a Python script is still used to build the code. The following instructions apply from MFix-23.2 onwards, which has been set up to facilitate both approaches.
The latest installation makes use of Miniforge-3. To set up a suitable environment (starting from a clean base), execute the following steps:
module load chpc/compmech/python/miniforge-3
conda init
conda activate mfix-24.2.3
You will notice that mfix has been installed in the directory shown below:
(base) [jblogs@cnode1234:~]$ conda activate mfix-24.2.3
(mfix-24.2.3) [jblogs@cnode1234:~]$ which mfix
/home/apps/chpc/compmech/MFix/miniforge3/envs/mfix-24.2.3/bin/mfix
(mfix-24.2.3) [jblogs@cnode1234:~]$
To exit this environment, use the command:
conda deactivate
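A minimal, untested sketch of using this environment in a PBS batch job follows. The conda.sh path is inferred from the installation directory shown above, and the project code, case directory and project file name are placeholders:

#!/bin/bash
#PBS -P MECH1234
#PBS -l select=1:ncpus=8:mpiprocs=1
#PBS -q serial
#PBS -l walltime=02:00:00
### Placeholder case directory
cd /home/jblogs/lustre/mycase
### Make conda available in the batch shell (path inferred from the install location above)
source /home/apps/chpc/compmech/MFix/miniforge3/etc/profile.d/conda.sh
conda activate mfix-24.2.3
### Run the pre-built solver on the project file, using OpenMP threads only
export OMP_NUM_THREADS=8
mfixsolver -f mycase.mfx | tee mfixsolver.out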
The MFix source code and tutorials have been unpacked in the directory
/home/apps/chpc/compmech/MFix/mfix-24.2.3
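To work with one of the bundled tutorial or test cases, first copy it to your own directory. A sketch with placeholder paths, assuming the usual tutorials subdirectory of the unpacked source:

mkdir -p /home/jblogs/lustre/mfix-work
cp -r /home/apps/chpc/compmech/MFix/mfix-24.2.3/tutorials /home/jblogs/lustre/mfix-work/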
A Miniconda environment for MFix-23.2 has been configured and installed in the directory /home/apps/chpc/compmech/MFix/miniconda3/envs/mfix-23.2. These are the required steps:
This command can be issued from the command line or placed in your $HOME/.bashrc file for interactive work. When running in non-interactive batch mode through PBS, it should be placed in your PBS script.
source /home/apps/chpc/compmech/MFix/miniconda3/etc/profile.d/conda.sh
Like the previous instruction, this can be given from the command line, placed in your $HOME/.bashrc file for interactive work, or placed in the PBS job script.
module load chpc/compmech/mfix/23.2
This step is necessary if you want to use the GUI-based process or if you want to compile a custom solver. It is not necessary if you simply want to run an already-compiled custom solver.
conda activate mfix-23.2
The standard mfixsolver is already in your path. To build a custom solver that incorporates user coding and/or parallelisation, use the provided build_mfixsolver procedure with the appropriate options. This example enables both SMP (OpenMP) and DMP (MPI) parallelisation and uses 4 parallel compile processes:
build_mfixsolver --batch --smp --dmp -j 4
If you are going to run a custom solver, first remove old output files, then set the required number of OpenMP threads, if appropriate, and run. This example is for SMP only:
export OMP_NUM_THREADS=4
./mfixsolver
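If the solver was also built with --dmp, it can instead be launched with mpirun. The following sketch is based on the mpirun command used by the test case in the job script below; the 2x2 domain decomposition (nodesi and nodesj) must multiply to the number of MPI ranks:

# OpenMP threads per MPI rank, if the solver was also built with --smp
export OMP_NUM_THREADS=2
mpirun -np 4 ./mfixsolver -f mfix.dat nodesi=2 nodesj=2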
The complete workflow can be combined in a PBS job script, for example:

#!/bin/bash
###
### The same workflow should work for 23.2 and 23.3; just substitute 23.2 with 23.3 if
### you need the more recent version.
###
### Request a single node for 4 MPI processes and 2 OpenMP threads per MPI process
#PBS -l select=1:ncpus=8:mpiprocs=4
#PBS -P MECH1234
#PBS -l walltime=02:00:00
#PBS -q serial
#PBS -o /home/jblogs/lustre/mfix-23.2/tests/fluid/FLD03/mfix.out
#PBS -e /home/jblogs/lustre/mfix-23.2/tests/fluid/FLD03/mfix.err
### Change directory to a typical test case that comes with MFix. Obviously use your own directory.
cd /home/jblogs/lustre/mfix-23.2/tests/fluid/FLD03
### Prepare the Miniconda environment
source /home/apps/chpc/compmech/MFix/miniconda3/etc/profile.d/conda.sh
### Get the following into your path:
###  1. The standard MFix installation
###  2. gcc-8.3.0 (not used for 23.3)
###  3. mpich-3.3 (not used for 23.3)
module load chpc/compmech/mfix/23.2
### Activate the mfix conda environment
conda activate mfix-23.2
### Build your custom solver for both OpenMP and MPI. Use 4 threads for the compile.
build_mfixsolver --batch --smp --dmp -j 4
### We no longer need the mfix conda environment. Deactivate it.
conda deactivate
### Set up 2 OpenMP threads per MPI process
export OMP_NUM_THREADS=2
### The MFix test case comes with a script to run the case. This script contains the mpirun command,
### which can be edited. In this particular case it had to be edited to remove two options pertaining
### to running as root and oversubscribing cores. The mpirun command looks like this:
###   mpirun -np 4 ./mfixsolver -f mfix.dat nodesi=2 nodesj=2
### and can also be used below instead of the script.
./runtests.sh | tee mfixsolver.out
With the environment set up by these instructions:
source /home/apps/chpc/compmech/MFix/miniconda3/etc/profile.d/conda.sh
module load chpc/compmech/mfix/23.2
conda activate mfix-23.2
it is also possible to run the interactive version of MFix with its Python-based GUI. However, there are two main problems: the GUI needs a graphical desktop session on the cluster, and it requires OpenGL rendering, which the compute nodes do not provide in hardware. These two problems are easy to overcome: use VNC to work on a virtual desktop, and parallel Mesa software rendering to provide OpenGL graphics. Please read this page: https://wiki.chpc.ac.za/howto:remote_viz
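As an illustration only, once a VNC virtual desktop is running, a session along the following lines should start the GUI. Forcing Mesa software rendering with LIBGL_ALWAYS_SOFTWARE is an assumption here; the remote visualisation page above describes the supported approach:

# In a terminal on the VNC virtual desktop
source /home/apps/chpc/compmech/MFix/miniconda3/etc/profile.d/conda.sh
module load chpc/compmech/mfix/23.2
conda activate mfix-23.2
# Assumed way of selecting Mesa software OpenGL rendering
export LIBGL_ALWAYS_SOFTWARE=1
# Launch the Python-based GUI
mfix &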
Older versions of MFIX, which were built with the make_mfix script, follow a different procedure. MFIX generates a new executable for each case, which is copied into the case directory. First cd to the case directory. Next, add modules for your choice of gcc version and the corresponding version of MPI, for instance:
module add gcc/5.1.0 chpc/openmpi/gcc/65/1.8.8-gcc5.1.0
Finally, the mfix executable is built and copied to the current case directory by the following command:
sh /apps/chpc/compmech/CFD/MFIX/mfix/model/make_mfix
if using the system mfix, or
sh <mfix_install_dir>/model/make_mfix
for other install locations.
This script prompts the user to specify a number of compile-time options, in particular the compiler to use, the desired level of optimisation and the type of parallelisation. The default compiler is gfortran. The Intel Fortran compiler may also be used, but this has not yet been tested on the Sun cluster. The parallelisation options are serial, shared-memory parallel, distributed-memory parallel, or hybrid parallel (shared and distributed).
At present, distributed-memory parallelisation appears to work best, and reasonable scaling is obtained. Shared-memory parallelisation works for many cases, but does not yield a significant improvement for the cases tested to date. Hybrid parallelisation is currently under development and is best avoided at present. Note that the Johnson and Jackson partial-slip boundary condition (BC_JJ in the mfix.dat file) causes a crash for all methods of parallelisation, although it works for serial computations.
The input for an MFIX case consists of an mfix.dat file, a text file which defines most or all of the properties of the case (geometry, boundary and initial conditions, choice of turbulence and friction models, and so on), as well as any Fortran source files containing user extensions to the standard MFIX solver, and any additional optional files describing geometry. In most cases the mfix.dat file is sufficient, and as this file is relatively small, it may be uploaded using scp.
Below is an example of a simple PBS submit script for an mfix job.
#!/bin/sh
#PBS -P projectid
#PBS -l select=3:ncpus=24:mpiprocs=24:mem=12GB:nodetype=haswell_reg
#PBS -q normal
#PBS -l walltime=01:00:00
#PBS -o /mnt/lustre/users/username/job01/stdout
#PBS -e /mnt/lustre/users/username/job01/stderr
#PBS -m abe
#PBS -M username@email.co.za
module add chpc/compmech/mfix/20.1.0
cd ${PBS_O_WORKDIR}
exe="mfix.exe"
nproc=`cat $PBS_NODEFILE | wc -l`
mpirun -np $nproc -machinefile $PBS_NODEFILE $exe -parallel > "coarse.log" 2>&1
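Submit and monitor the job with the standard PBS commands, for example (the script name mfix_job.pbs is illustrative):

qsub mfix_job.pbs
qstat -u username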
Remember to edit or add the following line to your mfix.dat file:
NODESI = NX
NODESJ = NY
NODESK = NZ
where NX, NY and NZ should be replaced by the number of partitions along each principal axis of the model, so that NX*NY*NZ equals the total number of cores requested in the submit script. In general, it is best to choose NX, NY and NZ such that the largest number of partitions occurs along the axis or axes corresponding roughly to the average flow direction.
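For example, for the 72 cores requested in the script above (3 nodes with 24 MPI processes each), and a case where the mean flow runs mostly along the I-direction, an illustrative choice would be:

NODESI = 6
NODESJ = 4
NODESK = 3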
Please be advised that there is now also a version 20.3.0 in /apps/chpc/compmech/CFD/MFIX/20.3.0. There is a source script in that directory, setMFix, which will load the correct gcc and MPI modules and extend $PATH to make the executable mfixsolver available. This binary supports both thread-level (OpenMP) and distributed-memory (MPI) parallelisation. It uses MPICH, and it is necessary to use the -iface ib0 option to ensure the use of the Infiniband network. A machinefile should not be used. The following job script should work, but has not been tested:
#!/bin/sh
#PBS -P projectid
#PBS -l select=3:ncpus=24:mpiprocs=24:mem=12GB:nodetype=haswell_reg
#PBS -q normal
#PBS -l walltime=01:00:00
#PBS -o /mnt/lustre/users/username/job01/stdout
#PBS -e /mnt/lustre/users/username/job01/stderr
#PBS -m abe
#PBS -M username@email.co.za
. /apps/chpc/compmech/CFD/MFIX/20.3.0/setMFix
cd ${PBS_O_WORKDIR}
exe="mfixsolver.exe"
nproc=`cat $PBS_NODEFILE | wc -l`
mpirun -iface ib0 -np $nproc $exe -parallel > "coarse.log" 2>&1
Please be advised that there is now also a version 22.2.2 in /home/apps/chpc/compmech/MFix-22.2.2. The executable mfixsolver can be added to your path with the following module command:
module load chpc/compmech/mfix/22.2.2
Please note that this version has been compiled with gfortran-9.2.0 and MPICH-4.0. When running in parallel, add the option -iface ib0 to ensure that the high-speed Infiniband network is used. This version of MPICH has been compiled with support for PBS, so a hosts or machine file is not needed. This command line should work:
mpirun -iface ib0 -np $nproc mfixsolver -f myinputfile.mfx
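For completeness, an untested sketch of a full job script for version 22.2.2, following the same pattern as the 20.3.0 script above (the project name, paths and input file myinputfile.mfx are placeholders):

#!/bin/sh
#PBS -P projectid
#PBS -l select=3:ncpus=24:mpiprocs=24:mem=12GB:nodetype=haswell_reg
#PBS -q normal
#PBS -l walltime=01:00:00
#PBS -o /mnt/lustre/users/username/job01/stdout
#PBS -e /mnt/lustre/users/username/job01/stderr
### Put mfixsolver in the path (compiled with gfortran-9.2.0 and MPICH-4.0)
module load chpc/compmech/mfix/22.2.2
cd ${PBS_O_WORKDIR}
### MPICH has PBS support, so no machinefile is needed; -iface ib0 selects the Infiniband network
nproc=`cat $PBS_NODEFILE | wc -l`
mpirun -iface ib0 -np $nproc mfixsolver -f myinputfile.mfx > "mfix.log" 2>&1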
MFix outputs a .RES file in the case directory, in addition to optional VTK files representing the solution. Both types of files may be opened with ParaView or a similar VTK viewer.
If postprocessing is to be done on CHPC hardware via the network, the following link may be helpful to get decent performance: Remote OpenGL visualization with TurboVNC and VirtualGL
If the postprocessing is to be performed on the user's local machine, the entire contents of the MFIX case directory should be copied to the user's machine using rsync or scp.
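For example, a sketch of such a copy with rsync, where the hostname and paths are illustrative placeholders:

rsync -avz username@lengau.chpc.ac.za:/mnt/lustre/users/username/job01/ ./job01/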