======Using MFIX at the CHPC======
===== Introduction =====
MFIX is an open-source multiphase flow solver written in FORTRAN 90. It is used for simulating fluid-solid systems such as fluidized beds. As it appears to have numerous potential uses in chemical engineering and mineral processing applications, a limited level of support for the package is now available at the CHPC, at /apps/chpc/compmech/CFD/MFIX.
  
Alternatively, users may download the code and install it in their home directory.
A one-time free registration is required prior to downloading the source code. To register, go to the MFIX website at https://mfix.netl.doe.gov, click on "Register" at the bottom of the home page or go directly to https://mfix.netl.doe.gov/registration.php.
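Once downloaded, the tarball can be unpacked in a suitable location in your home directory. A minimal sketch, assuming the downloaded file is named mfix.tar.gz (the actual file name depends on the release):

  tar -xzf mfix.tar.gz -C ~/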
  
This wiki page details those issues specific to running MFIX on the Lengau cluster at CHPC.
For a more general explanation of the use of MFIX, consult the documentation and example cases included with the source tarball.
  
  
===== Building MFIX =====
MFIX generates a new executable for each case, which is copied into the case directory.
First, cd to the case directory.
Next, add modules for your choice of gcc version and the corresponding version of MPI, for instance:

  module add gcc/5.1.0 chpc/openmpi/gcc/65/1.8.8-gcc5.1.0

Finally, the mfix executable is built and copied to the current case directory by the following command:

  sh /apps/chpc/compmech/CFD/MFIX/mfix/model/make_mfix

if using the system mfix, or

  sh <mfix_install_dir>/model/make_mfix

for other install locations.
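Putting these steps together, a complete build session for the system mfix might look as follows (the case directory name is purely illustrative):

  cd ~/mfix-cases/fluidbed01
  module add gcc/5.1.0 chpc/openmpi/gcc/65/1.8.8-gcc5.1.0
  sh /apps/chpc/compmech/CFD/MFIX/mfix/model/make_mfix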
  
This script prompts the user to specify a number of compile-time options, in particular the compiler used and the mode of parallelisation.
  
At present, distributed-memory parallelisation appears to work best; reasonable scaling seems to be obtained.
Shared-memory parallelisation works for many cases, but does not yield a significant improvement for the cases tested to date. Hybrid parallelisation is currently under development, and is best avoided at present.
It should be noted that use of the Johnson and Jackson partial slip boundary condition (BC_JJ in the mfix.dat file) causes a crash for all methods of parallelisation (although it works for serial computations).
  
Below is an example of a simple PBS submit script for an mfix job.
  
<file bash mfix.qsub>
  #!/bin/sh
  #PBS -P projectid
  #PBS -l select=3:ncpus=24:mpiprocs=24:mem=12GB:nodetype=haswell_reg
  #PBS -q normal
  #PBS -l walltime=01:00:00
  #PBS -o /mnt/lustre3p/users/username/job01/stdout
  #PBS -e /mnt/lustre3p/users/username/job01/stderr
  #PBS -m abe
  #PBS -M username@email.co.za
  # Load the system MFIX module
  module add chpc/compmech/mfix/20.1.0
  cd ${PBS_O_WORKDIR}
  exe="mfix.exe"
  # One MPI rank per requested core (3 nodes x 24 ranks = 72 here)
  nproc=`cat $PBS_NODEFILE | wc -l`
  mpirun -np $nproc -machinefile $PBS_NODEFILE $exe -parallel >"coarse.log" 2>&1
</file>
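After substituting your own project ID, output paths and email address, submit the job from the case directory:

  qsub mfix.qsub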
Remember to edit or add the following line to your mfix.dat file:
  
  NODESI = NX    NODESJ = NY    NODESK = NZ
where NX, NY and NZ should be replaced by the number of partitions along each physical principal axis of the model, so that NX*NY*NZ is the total number of cores requested in the submit script. In general, it is best to choose NX, NY and NZ such that the largest number of partitions occurs along the axis/axes corresponding roughly with the average flow direction.
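For example, for the 72 cores (3 nodes x 24) requested in the script above, with the mean flow direction along the y axis, one consistent (purely illustrative) choice would be:

  NODESI = 2    NODESJ = 18    NODESK = 2

since 2*18*2 = 72.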

==== MFix-20.3.0 ====
Please be advised that there is now also a version 20.3.0 in ''/apps/chpc/compmech/CFD/MFIX/20.3.0''. There is a source script in that directory, setMFix, which will load the correct gcc and MPI modules and append to $PATH to make the executable ''mfixsolver'' available. This binary supports both thread-level (OpenMP) and distributed-memory (MPI) parallelism. It uses mpich, so it is necessary to use the ''-iface ib0'' option to ensure the use of the InfiniBand network. A machinefile should not be used. The following job script should work, but has not been tested:

<file bash mfix.qsub>
  #!/bin/sh
  #PBS -P projectid
  #PBS -l select=3:ncpus=24:mpiprocs=24:mem=12GB:nodetype=haswell_reg
  #PBS -q normal
  #PBS -l walltime=01:00:00
  #PBS -o /mnt/lustre3p/users/username/job01/stdout
  #PBS -e /mnt/lustre3p/users/username/job01/stderr
  #PBS -m abe
  #PBS -M username@email.co.za
  # Source the setMFix script to load gcc and mpich and set up $PATH
  . /apps/chpc/compmech/CFD/MFIX/20.3.0/setMFix
  cd ${PBS_O_WORKDIR}
  exe="mfixsolver"
  nproc=`cat $PBS_NODEFILE | wc -l`
  # mpich needs -iface ib0 to use the InfiniBand network; no machinefile
  mpirun -iface ib0 -np $nproc $exe -parallel >"coarse.log" 2>&1
</file>
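To check the environment interactively before submitting, source the script and confirm that the solver is on the $PATH:

  . /apps/chpc/compmech/CFD/MFIX/20.3.0/setMFix
  which mfixsolver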
  
===== Postprocessing =====
Mfix outputs a .RES file in the case directory, in addition to optional VTK files representing the solution. Both types of files may be opened using Paraview or a similar VTK viewer.

If postprocessing is to be done on CHPC hardware via the network, the following link may be helpful to get decent performance:
[[remote_viz|Remote OpenGL visualization with TurboVNC and VirtualGL]]

If the postprocessing is to be performed on the user's local machine, the entire contents of the MFIX case directory should be copied to the user's machine using rsync or scp.
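A sketch of such a copy with rsync, run from the local machine (the login host name and remote path are assumptions; substitute your own details):

  rsync -av username@lengau.chpc.ac.za:/mnt/lustre3p/users/username/job01/ ./job01/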
  