ANSYS/Fluent

The CHPC has an installation of Ansys-CFD along with a limited license for academic use only. The license covers use of the Fluent and CFX solvers, as well as the IcemCFD meshing code. Versions 16.2, 17.0, 17.1, 17.2, 18.0, 18.1, 18.2, 19.0, 19.1 and 19.2 are available.

Application Process

If you are a full-time student or staff member at an academic institution, you may request access to use Ansys-CFD on the CHPC cluster. Please go to the CHPC user database to register and request resources. Commercial use of Ansys software at the CHPC is also possible, but software license resources need to be negotiated directly with Ansys or their local agents. Remote license check-out has not been ruled out by Ansys, but once again this needs to be negotiated with the software vendor.

Installation

The Version 19.0 editions of the Ansys software are installed under '/apps/chpc/compmech/CFD/ansys_inc/v190', Version 17.2 under '/apps/chpc/compmech/CFD/ansys_inc/v172', Version 16.2 under '/apps/chpc/compmech/CFD/ansys_inc/v162', and so on for the other available versions.

Licensing

Please note that the license has been upgraded, making more resources available. We will monitor use and advise users accordingly.

CHPC has academic licenses for Ansys-CFD. There are 25 “solver” licenses, available as aa_r_cfd, and 2048 “HPC” licenses, available as aa_r_hpc. A license resource management system has been activated. If you request license resources (as in these example scripts), the scheduler will check for license availability before starting a job, and the job will be held back until the necessary licenses have become available. Use of the license resource request is not mandatory, but it is strongly recommended: without it, the job will simply fail if no licenses are available. A single aa_r_cfd license is required to start the solver, and includes up to 16 HPC licenses. Therefore you should request ($nproc-16) aa_r_hpc licenses.
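
The ($nproc-16) arithmetic can be sketched as a quick shell calculation; the 216-core job size below is just an example:

```shell
# Hypothetical example: work out the license request for a 216-core Fluent job.
nproc=216                    # total MPI processes requested from PBS
hpc=$((nproc - 16))          # one aa_r_cfd covers the first 16 processes
echo "#PBS -l aa_r_cfd=1"
echo "#PBS -l aa_r_hpc=${hpc}"
```

For the 216-way jobs in the example scripts below, this yields the aa_r_hpc=200 request they use.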

The Fluent licenses are in general highly utilised. As a consequence, jobs may be held back due to license unavailability. The CHPC can enforce measures to ensure fair use, but in order to avoid this situation, please stick to the following guidelines:

  • Each Ansys user shall submit only one PBS script (tying up one aa_r_cfd) at any given time.
  • Given the constraint of the number of solver licenses, do your Fluent runs sequentially. Do not try to run more than one Fluent analysis at a time.
  • Take full advantage of the large number of cores available on Lengau to run each Fluent analysis faster. Without requesting special permission, you are entitled to use 240 cores for a Fluent run. Our testing has indicated very good parallel scaling down to 10000 grid cells per core (sometimes even less), which means that for any run over about 2 million cells, you should aim to use around 200 cores.
  • If you need to submit a series of jobs, do so with a dependence on previously submitted jobs. The syntax is as follows: qsub -W depend=afterany:123456 thisjob.pbs, where 123456 should be replaced with the number of the previously submitted job, and thisjob.pbs is simply the name of the new script that you are submitting. The afterany directive will make sure that the dependent job gets launched regardless of whether the running job has finished normally, crashed or been killed.
  • You can launch several Fluent solver processes sequentially inside a single PBS script. Simply add the necessary cd (change directory) and fluent 3d … lines.
  • If the progress of your work is being limited by the number of licenses available at the CHPC, consider moving some of the runs to open source software.
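
The job-dependency syntax from the guidelines above can be sketched as follows; the job ID 1234567 and the script name are placeholders for your own:

```shell
# Placeholder for the ID printed by qsub when the previous job was submitted
prev_job=1234567
# afterany fires whether that job finished normally, crashed or was killed
dep="-W depend=afterany:${prev_job}"
echo "qsub ${dep} thisjob.pbs"   # the submission command you would run next
```

Capturing the ID of each submission and feeding it to the next lets you queue a whole sequence of runs while tying up only one aa_r_cfd license at a time.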

Running a Fluent Job

On the CHPC cluster all simulations are submitted as jobs to the PBS Pro job scheduler which will assign your job to the appropriate queue.

Example job script:

runFluent.qsub
#!/bin/bash
##### The following line will request 9 (virtual) nodes, each with 24 cores running 24 mpi processes for
##### a total of 216-way parallel.  Specifying memory requirement is unlikely to be necessary, as the 
##### compute nodes have 128 GB each.
#PBS -l select=9:ncpus=24:mpiprocs=24:mem=32GB:nodetype=haswell_reg
#### Check for license availability.  If insufficient licenses are available, the job will be held back until 
####  licenses are available. 
#PBS -l aa_r_cfd=1
#PBS -l aa_r_hpc=200
## For your own benefit, try to estimate a realistic walltime request.  Over-estimating the 
## wallclock requirement interferes with efficient scheduling, will delay the launch of the job,
## and ties up more of your CPU-time allocation until the job has finished.
#PBS -q normal
#PBS -P myprojectcode
#PBS -l walltime=1:00:00
#PBS -o /home/username/scratch/FluentTesting/fluent.out
#PBS -e /home/username/scratch/FluentTesting/fluent.err
#PBS -m abe
#PBS -M username@email.co.za
##### Running commands
#### Put these commands in your .bashrc file as well, to ensure that the compute nodes
#### have the correct environment.  Ensure that any OpenFOAM-related environment
#### settings have been removed. 
####### PLEASE NOTE THAT THE LICENSE SERVER ID HAS CHANGED: IT IS NOW chpclic1
export LM_LICENSE_FILE=1055@chpclic1
export ANSYSLMD_LICENSE_FILE=1055@chpclic1
# Edit this next line to select the appropriate version. Versions 19.2, 19.1, 19.0, 18.2, 18.1, 18.0, 17.2, 17.1, 17.0 and 16.2 are available.
export PATH=/apps/chpc/compmech/CFD/ansys_inc/v192/fluent/bin:$PATH
export FLUENT_ARCH=lnamd64
#### There is no -d option available under PBS Pro, therefore 
#### explicitly set working directory and change to that.
export PBS_JOBDIR=/home/username/scratch/FluentTesting
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
exe=fluent
$exe 3d -t$nproc -pinfiniband -ssh -cnf=$PBS_NODEFILE -g < fileContainingTUIcommands > run.out

There are two methods which can be used to submit a series of instructions to Fluent. In the above example, a file containing so-called “TUI” commands is passed to Fluent, either by the “<” redirection symbol, or with the “-i” command line option. There are two disadvantages to using this method:

  • It is not possible to simply record a journal file from the Fluent GUI, as these commands require the GUI to be open, and will not work with the “-g” command line option.
  • It is not possible to generate images during the computation. Instead, these have to be created interactively afterwards.
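
A minimal TUI command file might look like the following sketch. The file names are hypothetical, and the exact TUI command paths can differ between Fluent versions, so test interactively first:

```
; read the case and data, iterate, save the result and exit
/file/read-case-data myCase.cas.gz
/solve/iterate 500
/file/write-case-data myCase-final.cas.gz
exit
yes
```

The trailing "yes" answers the confirmation prompt that Fluent may issue on exit.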

The second method allows the use of a recorded journal file and also supports “on the fly” generation of images. A dummy X-windows session has to be started with Xvfb (X virtual frame buffer) and set as a virtual display. This virtual session is automatically terminated at the end of the job. The following is an example of a PBS job script using this method:

runFluent.qsub
#!/bin/bash
##### The following line will request 9 (virtual) nodes, each with 24 cores running 24 mpi processes for
##### a total of 216-way parallel.
#PBS -l select=9:ncpus=24:mpiprocs=24:mem=32GB:nodetype=haswell_reg
#### License resource request
#PBS -l aa_r_cfd=1
#PBS -l aa_r_hpc=200
## For your own benefit, try to estimate a realistic walltime request.  Over-estimating the 
## wallclock requirement interferes with efficient scheduling, will delay the launch of the job,
## and ties up more of your CPU-time allocation until the job has finished.
#PBS -q normal
#PBS -P myprojectcode
#PBS -l walltime=1:00:00
#PBS -o /home/username/scratch/FluentTesting/fluent.out
#PBS -e /home/username/scratch/FluentTesting/fluent.err
#PBS -m abe
#PBS -M username@email.co.za
##### Running commands
#### Put these commands in your .bashrc file as well, to ensure that the compute nodes
#### have the correct environment.  Ensure that any OpenFOAM-related environment
#### settings have been removed. 
####### PLEASE NOTE THAT THE LICENSE SERVER ID HAS NOW CHANGED, IT IS chpclic1
export LM_LICENSE_FILE=1055@chpclic1
export ANSYSLMD_LICENSE_FILE=1055@chpclic1
# Edit this next line to select the appropriate version. Versions 19.2, 19.1, 19.0, 18.2, 18.1, 18.0, 17.2, 17.1, 17.0 and 16.2 are available.
export PATH=/apps/chpc/compmech/CFD/ansys_inc/v192/fluent/bin:$PATH
export FLUENT_ARCH=lnamd64
#### There is no -d option available under PBS Pro, therefore 
#### explicitly set working directory and change to that.
export PBS_JOBDIR=/home/username/scratch/FluentTesting
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
exe=fluent
### X11 required to save images.  Use Virtual frame buffer, which needs to be killed after completion
/bin/Xvfb :1 &
export DISPLAY=:1
$exe 3d -t$nproc -pinfiniband -ssh -cnf=$PBS_NODEFILE -i journalFile.jou > run.out
kill -9 %1

If a GUI is required

Some tasks, such as setting up runs, meshing or post-processing may require a graphics-capable login. This is possible in a number of ways. Using a compute node for a task that requires graphics involves a little bit of trickery, but is really not that difficult.

Getting use of a compute node

Obtain exclusive use of a compute node by logging into Lengau according to your usual method, and obtaining an interactive session:

qsub -I -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00

Obviously replace MECH1234 with the shortname of your particular Research Programme. Note down the name of the compute node that you have been given; let us use cnode0123 for this example. You can also use an interactive session like this to perform “service” tasks, such as archiving or compressing data files, which would be killed if attempted on the login node.

Getting a graphics-capable session on a compute node

There are two ways of doing this:

  • X-forwarding by means of a VNC session
  • X-forwarding in two stages

X-forwarding in two stages is really only a practical proposition if you are on a fast, low-latency connection into the SANReN network. Otherwise, get the VNC session going first by following these instructions.

Double X-forwarding

From an X-windows capable workstation (in other words, from a Linux terminal command prompt, or an emulator on Windows that includes an X-server, such as MobaXterm, or a VNC session on one of the visualization nodes), log in to Lengau:

 ssh -X jblogs@lengau.chpc.ac.za 

Once logged in, do a second X-forwarding login to your assigned compute node:

 ssh -X cnode0123 

Alternatively, you can request an interactive PBS session with X-forwarding:

 qsub -I -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00 -X 

X-forwarding from the VNC session

A normal broadband connection will probably be too slow to use the double X-forwarding method. In this case, first get the VNC desktop going, as described above, and open a terminal. From this terminal, log in to your assigned compute node:

 ssh -X cnode0123 

Set up the appropriate environment

export LM_LICENSE_FILE=1055@chpclic1
export ANSYSLMD_LICENSE_FILE=1055@chpclic1
export PATH=/apps/chpc/compmech/CFD/ansys_inc/v181/fluent/bin:$PATH
export FLUENT_ARCH=lnamd64

Run fluent

You can now simply start the program in the usual way, with the command

 fluent 3d -t24 -ssh 

Thanks to the magic of software rendering, you have access to the GUI and graphics capability of the interface.

Remote Solution Monitoring and Control

Starting with Version 19.0 of the software, it is possible to use a GUI to connect to a Fluent process that is already running. This requires that Fluent be started with access to an X-server, so use a PBS script that contains the instructions to start up and remove a virtual frame buffer. Here is a minimalist example of such a script:

runFluentWith_flremote.qsub
#!/bin/bash
#PBS -l select=5:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -q normal
#PBS -P MECH1234
#PBS -l walltime=12:00:00
#PBS -o /home/user/lustre/FluentTest/fluent.out
#PBS -e /home/user/lustre/FluentTest/fluent.err
#PBS -l aa_r_cfd=1
#PBS -l aa_r_hpc=104
/bin/Xvfb :1 &
export DISPLAY=:1
export LM_LICENSE_FILE=1055@chpclic1
export ANSYSLMD_LICENSE_FILE=1055@chpclic1
export PATH=$PATH:/apps/chpc/compmech/CFD/ansys_inc/v190/fluent/bin
export FLUENT_ARCH=lnamd64
cd /home/user/lustre/FluentTest
nproc=`cat $PBS_NODEFILE | wc -l`
fluent 3ddp -t$nproc -pinfiniband -ssh -mpi=intel -cnf=$PBS_NODEFILE  -i runCommands.txt | tee fluentrun.out
kill -9 %1

It is critical that the file containing the run instructions, in this case called runCommands.txt, has the following line:

server/start-server server-info.txt

This will create a file called server-info.txt, which contains the hostname of the master node, as well as a port number which the remote client will need to connect to.
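
A hypothetical runCommands.txt combining the server start-up with a solve might look like this sketch (file names are placeholders, and TUI paths can vary between versions):

```
; start the remote-visualization server and record its connection details
server/start-server server-info.txt
; run the solve; connect with flremote while this is iterating
/solve/iterate 1000
/file/write-case-data result.cas.gz
exit
yes
```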

On the viz node (you have a TurboVNC session open, right?), get a terminal, change directory to where your Fluent run is, and issue the following command:

/opt/VirtualGL/bin/vglrun /apps/chpc/compmech/CFD/ansys_inc/v190/fluent/bin/flremote &

The Fluent Remote Visualization Client will start up. Provide the appropriate Server Info Filename and you will be able to connect to your Fluent process.

Different methods of uploading a simulation

Build and test locally, upload .cas file

The “standard” process assumes that the user already has a local license for the software.

  • Mesh and pre-process the simulation as usual for a local simulation.
  • Test it locally to ensure that everything works properly. Be cautious about absolute path file names.
  • Compress the case file, either with gzip or by saving it directly as a .cas.gz file.
  • Upload to CHPC using either scp or rsync. The advantage of rsync is that the transfer can be made persistent, to prevent network communication glitches from killing the file transfer.

Build and test locally, upload geometry and script files only, mesh and pre-process remotely

If your simulation files are too large, or your internet connection too slow, consider transferring only geometry and script files. This will require careful scripting and testing, but is certainly practical.

  • There are two methods available for meshing on the CHPC system. Either work with IcemCFD or use the built-in T-Grid based meshing in Fluent itself. Neither ANSYS-Mesh nor Gambit is available on the CHPC system.
  • If using the internal Fluent meshing, it will be necessary to transfer the surface grid and a file containing the necessary Fluent meshing and job set up instructions.
  • If using IcemCFD, transfer the Icem .prj, .tin, .fbc and .blk (if using hexa) files, along with a recorded Icem script file for generating the mesh and exporting the file in Fluent format. Watch for absolute path names in the script file. Run IcemCFD with the -batch -script options to create the mesh. A more comprehensive Fluent script will be required to import the mesh and pre-process the case. Test locally!
  • If your internet connection is too slow to permit easy case uploading, it will also be far too slow for downloading the results files. Consider generating post-processing images “on the fly”, or alternatively exporting only surface data on completion of the simulation.

General tips and advice

  • Give some thought to the resources being requested. Partitioning a simulation too finely will not necessarily speed it up as expected. Although our tests indicate that Fluent scales well down to as low as 15 000 cells per core, please give some thought to license usage. The Fluent license on the cluster is a shared resource, and tying up too many of the available HPC licenses may delay the launch of your job, or delay others. Refer to the graphs below to get a better quantitative indication of scaling. Commercial users should also take into account that best performance per node will be achieved by using the full 24 cores per node, but performance per core benefits substantially from using fewer than 24.
  • A request for a smaller number of cores may result in the job launching earlier, resulting in reduced turn-around time, even if the job takes longer to run.
  • Monitoring convergence of batch jobs can be painful but necessary.
  • Monitor files (such as cd or cl files) can be plotted with gnuplot even if no Fluent GUI is available. On a slow connection, consider using gnuplot with set term dumb to get funky 1970s-style ASCII graphics.
  • If you need to submit a large number of small jobs, when doing a parametric study, for example, please use Job Arrays. Refer to the PBS-Pro guide at http://wiki.chpc.ac.za/quick:pbspro for guidance on how to set this up.
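
A job array can be sketched as follows. The project code, paths and array range are placeholders; PBS Pro exposes the sub-job index as PBS_ARRAY_INDEX:

```
#!/bin/bash
#PBS -J 1-10
#PBS -q normal
#PBS -P MECH1234
#PBS -l select=1:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -l walltime=2:00:00
# Each sub-job works in its own case directory, case_1 ... case_10
cd /home/username/scratch/ParametricStudy/case_${PBS_ARRAY_INDEX}
# ... launch one Fluent run here, as in the example scripts above ...
```

One qsub of this script submits all ten sub-jobs, which is far kinder to the scheduler than ten separate submissions.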

/var/www/wiki/data/pages/howto/ansys.txt · Last modified: 2018/09/28 11:06 by ccrosby