======Using MFIX at the CHPC======
===== Introduction =====
MFIX is an open-source multiphase flow solver written in FORTRAN 90. It is used for simulating fluid-solid systems such as fluidized beds. As it appears to have numerous potential uses in chemical engineering and mineral processing applications, a limited level of support for the package is now available at the CHPC, in ''/apps/chpc/compmech/CFD/MFIX''.
  
Alternatively, users may download the code and install it in their home directory.
To obtain the source, register on the MFIX website: follow the link at the bottom of the home page or go directly to https://mfix.netl.doe.gov/registration.php.
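
As a minimal sketch of the home-directory route (the tarball name below is an assumption; use whatever file name the download actually provides), unpacking into your home directory could look like:

  tar -xzf mfix.tar.gz -C ~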
  
This wiki page details those issues specific to running MFIX on the Lengau cluster at CHPC.
For a more general explanation of the use of MFIX, consult the documentation and example cases included with the source tarball.
  
Next, add modules for your choice of gcc version and the corresponding version of MPI, for instance:
  
  module add gcc/5.1.0 chpc/openmpi/gcc/65/1.8.8-gcc5.1.0
  
Finally, the mfix executable is built and copied to the current case directory by the following command:
  
  sh /apps/chpc/compmech/CFD/MFIX/mfix/model/make_mfix
  
if using the system mfix, or
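the corresponding command from your own copy of the source, for example (the ''~/mfix'' extraction path below is an assumption; point it at wherever you unpacked the tarball):

  sh ~/mfix/model/make_mfix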
  
At present, distributed-memory parallelisation appears to work best; reasonable scaling has been observed.
Shared-memory parallelisation works for many cases, but does not yield a significant improvement for the cases tested to date. Hybrid parallelisation is currently under development and is best avoided at present.
It should be noted that use of the Johnson and Jackson partial slip boundary condition (BC_JJ in the mfix.dat file) causes a crash for all methods of parallelisation (although it works for serial computations).
  
Below is an example of a simple PBS submit script for an mfix job.
  
<file bash mfix.qsub>
  #!/bin/sh
  #PBS -P projectid
  #PBS -l select=3:ncpus=24:mpiprocs=24:mem=12GB:nodetype=haswell_reg
  #PBS -q normal
  #PBS -l walltime=01:00:00
  #PBS -o /mnt/lustre3p/users/username/job01/stdout
  #PBS -e /mnt/lustre3p/users/username/job01/stderr
  #PBS -m abe
  #PBS -M username@email.co.za
  module add chpc/compmech/mfix/20.1.0
  cd ${PBS_O_WORKDIR}
  exe="mfix.exe"
  ## Total MPI rank count taken from the PBS node file (3 x 24 = 72 here)
  nproc=`cat $PBS_NODEFILE | wc -l`
  mpirun -np $nproc -machinefile $PBS_NODEFILE $exe -parallel >"coarse.log" 2>&1
</file>
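
Assuming the script is saved as ''mfix.qsub'' in the case directory, it is submitted with:

  qsub mfix.qsub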
Remember to edit or add the following line to your mfix.dat file:
  
such that the largest number of partitions occurs along the axis/axes corresponding roughly to the average flow direction.
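
As an illustration only (the values below are an assumption, not taken from this page; choose a decomposition matching your own grid and core count), a 72-rank job like the one above could place most partitions along the J axis, assumed here to be the dominant flow direction:

  ! Domain decomposition: NODESI x NODESJ x NODESK must equal the number of MPI ranks (2 x 18 x 2 = 72)
  NODESI = 2
  NODESJ = 18
  NODESK = 2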
  
==== MFix-20.3.0 ====
Please be advised that version 20.3.0 is now also available, in ''/apps/chpc/compmech/CFD/MFIX/20.3.0''.  There is a source script in that directory, ''setMFix'', which loads the correct gcc and MPI modules and modifies $PATH so that the executable ''mfixsolver'' is available.  This binary supports both thread-level (OpenMP) and distributed-memory (MPI) parallelism.  It uses MPICH, and it is necessary to use the ''-iface ib0'' option to ensure use of the InfiniBand network.  A machinefile should not be used.  The following job script should work, but has not been tested:
  
<file bash mfix.qsub>
  #!/bin/sh
  #PBS -P projectid
  #PBS -l select=3:ncpus=24:mpiprocs=24:mem=12GB:nodetype=haswell_reg
  #PBS -q normal
  #PBS -l walltime=01:00:00
  #PBS -o /mnt/lustre3p/users/username/job01/stdout
  #PBS -e /mnt/lustre3p/users/username/job01/stderr
  #PBS -m abe
  #PBS -M username@email.co.za
  ## Set up the 20.3.0 environment (gcc, MPI modules and PATH)
  . /apps/chpc/compmech/CFD/MFIX/20.3.0/setMFix
  cd ${PBS_O_WORKDIR}
  exe="mfixsolver.exe"
  nproc=`cat $PBS_NODEFILE | wc -l`
  ## MPICH run: -iface ib0 selects the InfiniBand interface; no machinefile is used
  mpirun -iface ib0 -np $nproc $exe -parallel >"coarse.log" 2>&1
</file>
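
To check the 20.3.0 environment interactively before submitting (a quick sanity check, not a required step), the same script can be sourced in a login shell:

  . /apps/chpc/compmech/CFD/MFIX/20.3.0/setMFix
  which mfixsolver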

===== Postprocessing =====
MFIX outputs a .RES file in the case directory, in addition to optional VTK files representing the solution. Both types of files may be opened using ParaView or a similar VTK viewer.
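
For example, if the ''paraview'' executable is available on the machine used for visualisation (how it is installed or loaded will vary), the VTK files can be opened directly from the case directory:

  paraview *.vtk &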
  
If postprocessing is to be done on CHPC hardware via the network, the following link may be helpful to get decent performance: