Ansys Forte fluid-dynamics software specializes in the simulation of combustion processes in internal combustion engines, using a highly efficient coupling of detailed chemical kinetics, liquid fuel sprays and turbulent gas dynamics. An Ansys Forte simulation solves the full Reynolds-averaged Navier-Stokes (RANS) equations with well-established flow turbulence models. Its use on the CHPC cluster is covered by the academic "top-up" license that the CHPC provides to academic users who already have access to the software at their own institutions. If you are not currently an approved Ansys user at the CHPC, please submit a helpdesk request for access to the software.
Ansys software is installed in the directory /home/apps/chpc/compmech/ansys_inc, with many versions available. The current latest release is in the sub-directory v241, so the path to the current Forte installation is /home/apps/chpc/compmech/ansys_inc/v241/reaction.
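To check interactively that a given version provides Forte, you can source its setup script and confirm that the forte.sh launcher is on the PATH. This is a sketch using the v241 path given above; the `which` check is only an illustration:

```shell
# Source the Forte environment for the v241 release (path as given above)
. /home/apps/chpc/compmech/ansys_inc/v241/reaction/forte.linuxx8664/bin/forte_setup.ksh
# Confirm that the forte.sh launcher is now on the PATH
which forte.sh
```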
A PBS job script along the following lines can be used to run Forte in batch:

#!/bin/bash
# A "#" followed by PBS is a PBS directive
# In this case we are asking for 2 full 24-core nodes; test performance and adjust to suit
#PBS -l select=2:ncpus=24:mpiprocs=24
# The job will time out after 3 hours; adjust to suit
#PBS -l walltime=3:00:00
# Select a queue:
#   -q serial  for 1 to 23 cores on a single node
#   -q smp     for 1 full 24-core node
#   -q normal  for 2 to 10 nodes
#   -q large   for 11 to 100 nodes
#   -q express for paying commercial users, up to 100 nodes
#PBS -q normal
# Use YOUR project shortcode, NOT MECH1234!
#PBS -P MECH1234
# Set these paths to your own Lustre working directory for the error and output streams
#PBS -e /mnt/lustre/users/jblogs/Forte/forte.err
#PBS -o /mnt/lustre/users/jblogs/Forte/forte.out
# Set the license environment variable - this is for academic use and available on request only
export LM_LICENSE_FILE=1055@login1
# Change to your working directory
cd /mnt/lustre/users/jblogs/Forte
# Set up the Forte environment
. /home/apps/chpc/compmech/ansys_inc/v241/reaction/forte.linuxx8664/bin/forte_setup.ksh
# Load the locally installed Intel module. This provides a PBS-integrated MPI implementation.
module load chpc/parallel_studio_xe/2020u1
# Count the total number of lines in the hosts file to get the number of MPI ranks
nproc=$(wc -l < $PBS_NODEFILE)
# Pre-process the job. Use your own filename, obviously!
forte.sh CLI -project blogsTest.ftsim --prepare -mpi_args $nproc
# Launch the job and wait for it to complete. Use your own filename here as well.
forte.sh CLI -project blogsTest.ftsim --submit --wait
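The MPI rank count in the script above is derived from the PBS node file, which lists one hostname per MPI rank. A minimal sketch of that counting logic, using a mock node file (PBS writes the real file and points $PBS_NODEFILE at it when the job starts):

```shell
# Build a mock node file: 2 nodes x 2 ranks each, as PBS would generate for
# select=2:mpiprocs=2. In a real job, read from "$PBS_NODEFILE" instead.
printf 'node01\nnode01\nnode02\nnode02\n' > mock_nodefile
# One line per MPI rank, so the line count is the total rank count
nproc=$(wc -l < mock_nodefile)
echo "$nproc"   # prints 4
```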
The HPC cluster is not particularly well suited to pre- and post-processing tasks, but it is possible to get access to the Forte GUI on the system. The critical requirement is a graphics-enabled environment; please refer to our Wiki pages on remote visualization. The full GUI will only work on one of the graphics-card-equipped visualization servers, chpcviz1 or chpclic1. Our recommendation is to use the TurboVNC software stack combined with VirtualGL. Unfortunately, it is not currently possible to use a more powerful compute node for this purpose, as the Mesa OpenGL software rendering is not compatible with the VNC virtual desktop software. It is, however, possible to run the monitoring tool using VNC on a compute node, or even simply with X-forwarding.
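As a rough sketch of that remote-visualization workflow (the TurboVNC install path, geometry, and GUI launch command are illustrative assumptions; consult the remote visualization Wiki pages for the authoritative procedure):

```shell
# On chpcviz1 or chpclic1, start a TurboVNC server
# (standard TurboVNC installation path assumed)
/opt/TurboVNC/bin/vncserver -geometry 1920x1080
# Connect to the resulting display with a VNC viewer from your workstation,
# then, inside the VNC desktop, launch the Forte GUI through VirtualGL
# (assuming forte.sh with no arguments starts the GUI)
vglrun forte.sh
```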