We are in the process of retiring the existing license server chpclic1. The license has been moved to login1. Please change your .bashrc file and job scripts to point to the new license server, as per the example scripts below.
The CHPC has an installation of Ansys-CFD, along with a limited license for academic use only. The license covers the Fluent and CFX solvers, as well as the IcemCFD meshing code. If you are a new Ansys user on Lengau, submit a helpdesk ticket requesting access to the license. Versions before 17.2 have been retired, as have 18.0, 18.1, 19.0 and 19.1; Versions 17.2, 18.2, 19.2, 19.3, 19.4, 19.5, 20.1 and 20.2 are available. Please note that the default file format has changed from version 20.1.
If you are a full-time student or staff member at an academic institution, you may request access to use Ansys-CFD on the CHPC cluster. Please go to the CHPC user database to register and request resources. Commercial use of Ansys software at the CHPC is also possible, but software license resources need to be negotiated directly with Ansys or their local agents. Remote license check-out has not been ruled out by Ansys, but this too needs to be negotiated with the software vendor.
Version 19.3 (or 2019R1 in AnsysSpeak) editions of the Ansys software are installed under '/apps/chpc/compmech/CFD/ansys_inc/v193' and Version 17.2 under '/apps/chpc/compmech/CFD/ansys_inc/v172'; you get the idea.
Please note that the license has been upgraded, making more resources available. We will monitor use and advise users accordingly.
CHPC has academic licenses for Ansys-CFD. There are 25 “solver” licenses, available as aa_r_cfd, and 2048 “HPC” licenses, available as aa_r_hpc. A license resource management system has been activated: if you request license resources (as in these example scripts), the scheduler will check for license availability before starting a job, and the job will be held back until the necessary licenses have become available. Use of the license resource request is not mandatory, but it is strongly recommended; without it, the job will simply fail if no licenses are available. A single aa_r_cfd license is required to start the solver and includes up to 16 HPC licenses. Therefore you should request ($nproc-16) aa_r_hpc licenses. Do not request more than you need, as that will delay the start of your job.
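The license arithmetic above can be sketched in a few lines of shell; the variable names here are illustrative, not part of any CHPC tooling:

```shell
#!/bin/bash
# One aa_r_cfd solver license covers the first 16 HPC tasks, so only
# the remaining tasks need aa_r_hpc licenses.
nproc=216   # e.g. 9 nodes x 24 MPI processes each
if [ "$nproc" -gt 16 ]; then
    hpc_lic=$((nproc - 16))
else
    hpc_lic=0
fi
echo "Request aa_r_hpc=$hpc_lic"   # prints: Request aa_r_hpc=200
```

This matches the example scripts below, which request aa_r_hpc=200 for a 216-way job.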
The Fluent licenses are in general highly utilised, with the consequence that jobs may be held back due to license unavailability. The CHPC can forcefully apply measures to ensure fair use; to avoid this situation, please stick to the following guidelines:
qsub -W depend=afterany:123456 thisjob.pbs
, where 123456 should be replaced with the number of the previously submitted job, and thisjob.pbs is simply the name of the new script that you are submitting. The afterany directive ensures that the dependent job is launched regardless of whether the running job has finished normally, crashed or been killed.

On the CHPC cluster, all simulations are submitted as jobs to the PBS Pro job scheduler, which will assign your job to the appropriate queue.
Example job script:
#!/bin/bash
##### The following line will request 9 (virtual) nodes, each with 24 cores running 24 MPI processes,
##### for a total of 216-way parallel. Specifying a memory requirement is unlikely to be necessary,
##### as the compute nodes have 128 GB each.
#PBS -l select=9:ncpus=24:mpiprocs=24:mem=32GB:nodetype=haswell_reg
#### Check for license availability. If insufficient licenses are available, the job will be held
#### back until licenses become available.
#PBS -l aa_r_cfd=1
#PBS -l aa_r_hpc=200
## For your own benefit, try to estimate a realistic walltime request. Over-estimating the
## wallclock requirement interferes with efficient scheduling, will delay the launch of the job,
## and ties up more of your CPU-time allocation until the job has finished.
#PBS -q normal
#PBS -P myprojectcode
#PBS -l walltime=1:00:00
#PBS -o /home/username/scratch/FluentTesting/fluent.out
#PBS -e /home/username/scratch/FluentTesting/fluent.err
#PBS -m abe
#PBS -M username@email.co.za
##### Running commands
#### Put these commands in your .bashrc file as well, to ensure that the compute nodes
#### have the correct environment. Ensure that any OpenFOAM-related environment
#### settings have been removed.
####### PLEASE NOTE THAT THE LICENSE SERVER HAS CHANGED: IT IS NOW login1
export LM_LICENSE_FILE=1055@login1
export ANSYSLMD_LICENSE_FILE=1055@login1
# Edit this next line to select the appropriate version.
# Versions 17.2, 18.2, 19.2, 19.3, 19.4, 19.5, 20.1 and 20.2 are available.
export PATH=/apps/chpc/compmech/CFD/ansys_inc/v192/fluent/bin:$PATH
export FLUENT_ARCH=lnamd64
#### There is no -d option available under PBS Pro, therefore
#### explicitly set the working directory and change to it.
export PBS_JOBDIR=/home/username/scratch/FluentTesting
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
exe=fluent
$exe 3d -t$nproc -pinfiniband -ssh -cnf=$PBS_NODEFILE -g < fileContainingTUIcommands > run.out
There are two methods which can be used to submit a series of instructions to Fluent. In the above example, a file containing so-called “TUI” commands is passed to Fluent, either by the “<” redirection symbol or with the “-i” command-line option. This method has two disadvantages: it cannot make use of a journal file recorded in the GUI, and it cannot generate images on the fly.
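For illustration, a TUI command file such as fileContainingTUIcommands in the script above might look like the following sketch; the case and data file names are assumptions, to be replaced with your own:

```
; Read case and data, iterate, write results and exit.
; Lines starting with ';' are comments in Fluent's TUI.
/file/read-case-data myCase.cas.gz
/solve/iterate 500
/file/write-case-data myResult.cas.gz
exit yes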
The second method allows the use of a recorded journal file and also supports “on the fly” generation of images. A dummy X-windows session has to be started with Xvfb (X virtual frame buffer) and set as a virtual display. This virtual session is automatically terminated at the end of the job. The following is an example of a PBS job script using this method:
#!/bin/bash
##### The following line will request 9 (virtual) nodes, each with 24 cores running 24 MPI processes,
##### for a total of 216-way parallel.
#PBS -l select=9:ncpus=24:mpiprocs=24:mem=32GB:nodetype=haswell_reg
#### License resource request
#PBS -l aa_r_cfd=1
#PBS -l aa_r_hpc=200
## For your own benefit, try to estimate a realistic walltime request. Over-estimating the
## wallclock requirement interferes with efficient scheduling, will delay the launch of the job,
## and ties up more of your CPU-time allocation until the job has finished.
#PBS -q normal
#PBS -P myprojectcode
#PBS -l walltime=1:00:00
#PBS -o /home/username/scratch/FluentTesting/fluent.out
#PBS -e /home/username/scratch/FluentTesting/fluent.err
#PBS -m abe
#PBS -M username@email.co.za
##### Running commands
#### Put these commands in your .bashrc file as well, to ensure that the compute nodes
#### have the correct environment. Ensure that any OpenFOAM-related environment
#### settings have been removed.
####### PLEASE NOTE THAT THE LICENSE SERVER HAS CHANGED: IT IS NOW login1
export LM_LICENSE_FILE=1055@login1
export ANSYSLMD_LICENSE_FILE=1055@login1
# Edit this next line to select the appropriate version.
# Versions 17.2, 18.2, 19.2, 19.3, 19.4, 19.5, 20.1 and 20.2 are available.
export PATH=/apps/chpc/compmech/CFD/ansys_inc/v192/fluent/bin:$PATH
export FLUENT_ARCH=lnamd64
#### There is no -d option available under PBS Pro, therefore
#### explicitly set the working directory and change to it.
export PBS_JOBDIR=/home/username/scratch/FluentTesting
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
exe=fluent
### X11 is required to save images. Use a virtual frame buffer, which must be killed after completion.
/bin/Xvfb :1 &
export DISPLAY=:1
$exe 3d -t$nproc -pinfiniband -ssh -cnf=$PBS_NODEFILE -i journalFile.jou > run.out
kill -9 %1
Some tasks, such as setting up runs, meshing or post-processing may require a graphics-capable login. This is possible in a number of ways. Using a compute node for a task that requires graphics involves a little bit of trickery, but is really not that difficult.
Obtain exclusive use of a compute node by logging into Lengau in your usual way and requesting an interactive session:
qsub -I -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00
Obviously, replace MECH1234 with the short name of your particular Research Programme. Note down the name of the compute node that you have been given; we will use cnode0123 for this example. You can also use an interactive session like this to perform “service” tasks, such as archiving or compressing data files, which would be killed if attempted on the login node.
There are three ways of doing this:
X-forwarding in two stages is really only a practical proposition if you are on a fast, low-latency connection into the SANReN network. Otherwise, get the VNC session going first by following these instructions.
From an X-windows capable workstation (in other words, from a Linux terminal command prompt, or an emulator on Windows that includes an X-server, such as MobaXterm, or a VNC session on one of the visualization nodes), log in to Lengau:
ssh -X jblogs@lengau.chpc.ac.za
Once logged in, do a second X-forwarding login to your assigned compute node:
ssh -X cnode0123
Alternatively, you can do an interactive PBS session with X-forwarding:
qsub -I -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00 -X
A normal broadband connection will probably be too slow to use the double X-forwarding method. In this case, first get the VNC desktop going, as described above, and open a terminal. From this terminal, log in to your assigned compute node:
ssh -X cnode0123
export LM_LICENSE_FILE=1055@login1
export ANSYSLMD_LICENSE_FILE=1055@login1
export PATH=/apps/chpc/compmech/CFD/ansys_inc/v192/fluent/bin:$PATH
export FLUENT_ARCH=lnamd64
You can now simply start the program in the usual way, with the command
fluent 3d -t24 -ssh
Thanks to the magic of software rendering, you have access to the GUI and graphics capability of the interface.
Starting with Version 19.0 of the software, it is possible to use a GUI to connect to a Fluent process that is already running. This requires that Fluent be started with access to an X-server, so use a PBS script that contains the instructions to start up and remove a virtual frame buffer. Here is a minimalist example of such a script:
#!/bin/bash
#PBS -l select=5:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -q normal
#PBS -P MECH1234
#PBS -l walltime=12:00:00
#PBS -o /home/user/lustre/FluentTest/fluent.out
#PBS -e /home/user/lustre/FluentTest/fluent.err
#PBS -l aa_r_cfd=1
#PBS -l aa_r_hpc=104
/bin/Xvfb :1 &
export DISPLAY=:1
export LM_LICENSE_FILE=1055@login1
export ANSYSLMD_LICENSE_FILE=1055@login1
export PATH=$PATH:/apps/chpc/compmech/CFD/ansys_inc/v190/fluent/bin
export FLUENT_ARCH=lnamd64
cd /home/user/lustre/FluentTest
nproc=`cat $PBS_NODEFILE | wc -l`
fluent 3ddp -t$nproc -pinfiniband -ssh -mpi=intel -cnf=$PBS_NODEFILE -i runCommands.txt | tee fluentrun.out
kill -9 %1
It is critical that the file containing the run instructions, in this case called runCommands.txt, has the following line:
server/start-server server-info.txt
This will create a file called server-info.txt, which contains the hostname of the master node, as well as a port number which the remote client will need to connect to.
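As a sketch, a minimal runCommands.txt for this workflow might look like the following; the case file name and iteration count are assumptions:

```
/file/read-case-data myCase.cas.gz
; Start the remote-visualization server and record its connection
; details in server-info.txt for the client to read.
server/start-server server-info.txt
/solve/iterate 1000
exit yes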
On the viz node (you have a TurboVNC session open, right?), get a terminal, change directory to where your Fluent run is, and issue the following command:
/opt/VirtualGL/bin/vglrun /apps/chpc/compmech/CFD/ansys_inc/v190/fluent/bin/flremote &
The Fluent Remote Visualization Client will start up. Provide the appropriate Server Info Filename and you will be able to connect to your Fluent process.
The “standard” process assumes that the user already has a local license for the software.
If your simulation files are too large, or your internet connection too slow, consider transferring only geometry and script files. This will require careful scripting and testing, but is certainly practical.
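Such scripting could start from something like the sketch below, which selects only the small setup files in a run directory; the helper name and the file extensions are assumptions to be adapted to your own workflow:

```shell
#!/bin/bash
# Hypothetical helper: list only the small setup files (journal files,
# scheme scripts, STEP geometry) worth transferring, leaving the large
# case and data files behind.
list_setup_files() {
    find "$1" -maxdepth 1 -type f \
        \( -name '*.jou' -o -name '*.scm' -o -name '*.stp' \) -print
}
# The selected files could then be packed for transfer, e.g.:
# list_setup_files . | tar -czf setup.tgz -T -
```

Test the selection carefully before relying on it, so that a needed file is not silently left out.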
When working in a plain text terminal, issue the gnuplot command

set term dumb

to get funky 1970's-style ASCII graphics.
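As a sketch, assuming the solver writes an iteration number and a residual value as two columns to a file called residuals.dat (both the file name and the column layout are assumptions), a monitoring session in gnuplot could look like:

```
gnuplot> set term dumb
gnuplot> set logscale y
gnuplot> plot "residuals.dat" using 1:2 with lines title "continuity"
```

This renders the residual history as ASCII art directly in the SSH terminal, with no X-forwarding required.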