The Ansys RSM (Remote Solve Manager) method has turned out to be tricky to use with the cluster scheduler, and it is far easier to use the GUI software to write input files for the mechanical solver. This can be done by selecting the appropriate Solver section in the tree in the Mechanical interface, and then using the menu options ''Tools - Write Input File'' to create a suitably named .dat file in a directory of your choice. Once this has been done, the GUI and Workbench are no longer needed. A PBS script (see below) must then be created in the appropriate directory, and it can be submitted to the cluster with the usual qsub command (see the example after the script). On completion of the run, the result files can be loaded into the Mechanical GUI for post-processing.

===== Example job script =====

<file bash runMechanical.qsub>
#!/bin/bash
##### The following line requests 2 (virtual) nodes, each with 24 cores running 24 MPI processes,
##### for a total of 48-way parallelism.  Specifying a memory requirement is unlikely to be
##### necessary, as the compute nodes have 128 GB each.
#PBS -l select=2:ncpus=24:mpiprocs=24:mem=120GB:nodetype=haswell_reg
## For your own benefit, try to estimate a realistic walltime request.  Over-estimating the
## wallclock requirement interferes with efficient scheduling, will delay the launch of the job,
## and ties up more of your CPU-time allocation until the job has finished.
#PBS -q normal
#PBS -P myprojectcode
#PBS -l walltime=1:00:00
#PBS -o /home/username/scratch/MechTesting/mechJob.out
#PBS -e /home/username/scratch/MechTesting/mechJob.err
#PBS -m abe
#PBS -M username@email.co.za
##### Running commands
### Tell it where to find the license
export LM_LICENSE_FILE=1055@chpclic1
export ANSYSLMD_LICENSE_FILE=1055@chpclic1
### You may need the Intel compiler to be available
module load chpc/parallel_studio_xe/18.0.2/2018.2.046
### Mesa is needed to provide libGLU.so
module load chpc/compmech/mesa/18.1.9
#### There is no -d option available under PBS Pro, therefore
#### explicitly set the working directory and change to it.
export PBS_JOBDIR=/home/username/scratch/MechTesting
cd $PBS_JOBDIR
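### Note: instead of hard-coding the path, you can use $PBS_O_WORKDIR, which
### PBS Pro sets to the directory the job was submitted from, i.e.
### "cd $PBS_O_WORKDIR" works if you submit the job from the working directory.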
### Count the number of lines in the machinefile (one line per MPI process)
nproc=$(cat $PBS_NODEFILE | wc -l)
### Get a file with only one line per host
sort -u $PBS_NODEFILE > hostlist
### Create a machinefile with the number of MPI processes per node appended to each hostname
sed 's/\.cm\.cluster/:24/g' hostlist > hosts
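### For example, a hostlist entry such as "cnode0123.cm.cluster" (hostname shown
### here only for illustration) becomes "cnode0123:24" in hosts, i.e.
### hostname:processes-per-node, matching the mpiprocs=24 request above.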
### Select the solver that you want to run
exe=/apps/chpc/compmech/CFD/ansys_inc/v190/ansys/bin/ansys190
### Command to execute the solver
$exe -b nolist -s noread -i name_of_input_file.dat -o some_output.out -np $nproc -dis -machinefile hosts
</file>
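
With the .dat input file and this script in the same directory, the job can be submitted and monitored as follows. This is a minimal sketch: the script filename, username and directory are the placeholder values used above, so substitute your own.

<code bash>
# Change to the job directory and submit the script to the PBS scheduler
cd /home/username/scratch/MechTesting
qsub runMechanical.qsub
# Check the status of your jobs in the queue
qstat -u username
</code>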
  
  
  