Please refer to the Rocky DEM home page for more information on Rocky, a commercial code targeted at discrete element modelling. The current version of the code is v241, and it has been installed in '/home/apps/chpc/compmech/ansys_inc/v241/rocky'. Discrete element modelling lends itself to execution on GPUs, and the performance benefit is so compelling that there is little point in running the code on CPUs only. The following example PBS job script requests a GPU node and runs the solver:
#!/bin/bash
## Lines starting with the # symbol are comments, unless followed by ! or PBS,
## in which case they are directives
## The following PBS directive requests one GPU node with two GPUs
## The environment variable $CUDA_VISIBLE_DEVICES will contain the indices of the GPUs assigned to this job
##
#PBS -l select=1:ncpus=20:mpiprocs=20:ngpus=2 -q gpu_2
## Specify your own project shortcode here
##
#PBS -P MECH2211
## The walltime should be a small overestimate of the expected run time
## Requesting a very long walltime may delay the start of your job
## If the requested walltime is too short, the job will be killed before it is finished
##
#PBS -l walltime=2:00:00
## Obviously use your own paths here
##
#PBS -e /mnt/lustre/users/jblogs/Rocky_Run/stderr.txt
#PBS -o /mnt/lustre/users/jblogs/Rocky_Run/stdout.txt
## These two lines will send you an email on Abort, Begin and End of the job
## Obviously use your own real email address
##
#PBS -m abe
#PBS -M jblogs@email.co.za
## Tell the system where your Rocky case is
##
export PBS_JOBDIR=/mnt/lustre/users/jblogs/Rocky_Run
## Change into that directory
##
cd $PBS_JOBDIR
## There is no license available on this particular port. Licensing needs to be negotiated with the software vendor.
##
export LM_LICENSE_FILE=4321@chpclic1
## Parse the environment variable to find the GPU(s) assigned to you, and pass this information to Rocky.
## Rocky does not need to know the indices of the actual GPUs, it just needs the correct number of
## sequential --gpu-num= entries, starting at 0 and incrementing by 1. PBS takes care of process placement.
##
IFS=', ' read -r -a gpulist <<< "$CUDA_VISIBLE_DEVICES"
gpulistparameter=""
gpucount=0
for element in "${gpulist[@]}"
do
    gpulistparameter=$gpulistparameter" --gpu-num="$gpucount
    let gpucount=$gpucount+1
done
echo $gpulistparameter
## Run the solver
##
/home/apps/chpc/compmech/ansys_inc/v241/rocky/Rocky --simulate "./MyRockyCase.rocky" --resume=1 --use-gpu=1 $gpulistparameter
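For the two-GPU request above, the parsing loop expands $gpulistparameter to " --gpu-num=0 --gpu-num=1". Once the script has been saved in the working directory (called runRocky.qsub in this hypothetical example, any name will do), it is submitted and monitored with the standard PBS commands, along these lines:

## Submit the job from the login node
qsub runRocky.qsub
## Check on its status; replace jblogs with your own username
qstat -u jblogs
## Follow the solver output once the job is running
tail -f /mnt/lustre/users/jblogs/Rocky_Run/stdout.txt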
The V100 GPUs in the GPU cluster are not available for graphics use. We are working on making the GUI available on the GPU cluster by means of Mesa software rendering, but this does not work yet. Please use the dedicated visualization servers chpcviz1 or chpclic1 if you need to use the GUI or to display results. Follow the remote visualization instructions for the TurboVNC / VirtualGL software stack.
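As a rough sketch only: assuming you already have a TurboVNC desktop session on chpcviz1, set up as per the remote visualization instructions, the Rocky GUI can typically be started under VirtualGL from a terminal inside that session. The license variable and install path below are taken from the job script above; launching the executable without the --simulate option to get the interactive GUI is an assumption.

## Inside the TurboVNC desktop session on chpcviz1 (sketch only)
export LM_LICENSE_FILE=4321@chpclic1
## vglrun routes the OpenGL rendering through VirtualGL on the visualization server
vglrun /home/apps/chpc/compmech/ansys_inc/v241/rocky/Rocky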