Ansys Mechanical at the CHPC

The Ansys Mechanical software is installed together with the CFD solvers in the directory /apps/chpc/compmech/CFD/ansys_inc. There is a separate sub-directory for each version: v160, v162, v170, v171, v172, v180, v181, v182, v190, v191 and v192.

Setting up runs

The most efficient way of using the cluster is by means of pre-written input files for the mechanical solver. However, if you need to use Workbench to set up runs on Lengau, it will be necessary to use one of the visualisation nodes, chpcviz1 or chpclic1. Please refer to the Remote Visualisation Instructions for a method of getting a graphics-enabled remote desktop on Lengau. Workbench can then be started with the command:

/opt/VirtualGL/bin/vglrun /apps/chpc/compmech/CFD/ansys_inc/v190/Framework/bin/Linux64/runwb2 &

Please bear in mind that these are shared servers, and computationally intensive tasks are not permitted on them.

The Ansys RSM (Remote Solve Manager) method has turned out to be tricky to use with the cluster scheduler, and it is far easier to use the GUI software to write input files for the mechanical solver. This can be done by selecting the appropriate solver branch in the tree of the Mechanical interface, and then using the menu options Tools - Write Input File to create a suitably named .dat file in a directory of your choice. Once this has been done, the GUI and Workbench are no longer needed. A PBS script (see below) must then be created in the appropriate directory, and it can be submitted to the cluster with the usual qsub command. On completion of the run, the result files can be loaded into the Mechanical GUI for post-processing.

Example job script

runMechanical.pbs
#!/bin/bash
##### The following line will request 2 (virtual) nodes, each with 24 cores running 24 mpi processes for
##### a total of 48-way parallel.  Specifying memory requirement is unlikely to be necessary, as the 
##### compute nodes have 128 GB each.
#PBS -l select=2:ncpus=24:mpiprocs=24:mem=120GB:nodetype=haswell_reg
## For your own benefit, try to estimate a realistic walltime request.  Over-estimating the 
## wallclock requirement interferes with efficient scheduling, will delay the launch of the job,
## and ties up more of your CPU-time allocation until the job has finished.
#PBS -q normal
#PBS -P myprojectcode
#PBS -l walltime=1:00:00
#PBS -o /home/username/scratch/MechTesting/mechJob.out
#PBS -e /home/username/scratch/MechTesting/mechJob.err
#PBS -m abe
#PBS -M username@email.co.za
##### Running commands
### Tell it where to find the license
export LM_LICENSE_FILE=1055@chpclic1
export ANSYSLMD_LICENSE_FILE=1055@chpclic1
### You may need the Intel compiler to be available
module load chpc/parallel_studio_xe/18.0.2/2018.2.046
### Mesa is needed to provide libGLU.so
module load chpc/compmech/mesa/18.1.9
#### There is no -d option available under PBS Pro, therefore 
#### explicitly set working directory and change to that.
export PBS_JOBDIR=/home/username/scratch/MechTesting
cd $PBS_JOBDIR
### Count the number of lines in the machinefile (one line per MPI rank)
nproc=$(wc -l < $PBS_NODEFILE)
### Get a file with only one line per host
sort -u $PBS_NODEFILE > hostlist
### Create a machinefile with the number of MPI processes per node appended to each hostname
sed 's/\.cm\.cluster/:24/g' hostlist > hosts
### Select the solver that you want to run
exe=/apps/chpc/compmech/CFD/ansys_inc/v190/ansys/bin/ansys190
### Command to execute the solver
$exe -b nolist -s noread -i name_of_input_file.dat -o some_output.out -np $nproc -dis -machinefile hosts 
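The host-list manipulation in the script above can be checked offline. The following sketch builds a mock nodefile with hypothetical node names (on the cluster, the real $PBS_NODEFILE is supplied by PBS, with one line per allocated MPI rank) and then applies the same commands:

```shell
#!/bin/bash
# Mock nodefile with hypothetical hostnames; PBS normally provides
# $PBS_NODEFILE with one line per allocated MPI rank.
PBS_NODEFILE=mock_nodefile
for node in cnode0101 cnode0102; do
    for rank in $(seq 24); do
        echo "${node}.cm.cluster"
    done
done > "$PBS_NODEFILE"
# Same logic as in the job script:
nproc=$(wc -l < "$PBS_NODEFILE")      # 48 = 2 nodes x 24 ranks
sort -u "$PBS_NODEFILE" > hostlist    # one line per host
sed 's/\.cm\.cluster/:24/g' hostlist > hosts
echo "nproc=$nproc"
cat hosts
# Output:
# nproc=48
# cnode0101:24
# cnode0102:24
```

The resulting hosts file tells the distributed solver to place 24 MPI processes on each of the two hosts, matching the mpiprocs=24 request in the PBS select statement.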
/var/www/wiki/data/pages/howto/ansysmechanical.txt · Last modified: 2018/10/18 14:59 by ccrosby