
STAR-CCM+

New procedure for versions from 19.02.009 onwards

There has been a major change to the underlying dependencies of versions of STAR-CCM+ released from 2024, starting with 19.02.009. Specifically, the software now requires glibc-2.28 or higher. This is not compatible with the CentOS-7 operating system used on the Lengau cluster. The workaround is to package the software installation in a Singularity container.
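You can confirm the incompatibility yourself by checking the glibc version on a node (CentOS 7 ships glibc 2.17, well below the required 2.28):

```shell
# Print the system glibc version; on CentOS 7 this reports 2.17,
# which is why STAR-CCM+ from 19.02.009 onwards must run in a container.
ldd --version | head -n 1
```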

The mixed precision and double precision 19.02.009 versions are installed in the singularity image files:

/home/apps/chpc/compmech/Siemens/star19_02_009.sif 

and

/home/apps/chpc/compmech/Siemens/star19_02_009_r8.sif 

respectively.

Similarly, other recent versions are also in singularity image files:

/home/apps/chpc/compmech/Siemens/star19_02_012.sif
/home/apps/chpc/compmech/Siemens/star19_02_012_r8.sif 
/home/apps/chpc/compmech/Siemens/star19_04_007.sif
/home/apps/chpc/compmech/Siemens/star19_04_007_r8.sif
/home/apps/chpc/compmech/Siemens/star19.06.008.sif
/home/apps/chpc/compmech/Siemens/star19.06.008_r8.sif
/home/apps/chpc/compmech/Siemens/star20.02.007.sif
/home/apps/chpc/compmech/Siemens/star20.02.007_r8.sif

Please list the contents of the directory /home/apps/chpc/compmech/Siemens to see what versions are installed and what the files are called. Siemens is in the process of transitioning to a new version-numbering scheme, which is why we have symbolic links to offer the image files with more consistent names. The latest version is 2502.007, but it is also referred to as 20.02.007. Go figure.

To enable the Singularity environment on all the compute nodes used in a run, it is necessary to insert the following statement in your .bashrc file:

export PATH=/usr/local/apps/singularity/3.5.3/bin:$PATH

Example jobscript using Singularity and 19.04.007

runSTAR.qsub
#!/bin/bash
#PBS -l select=3:ncpus=24:mpiprocs=24
#PBS -q normal
#PBS -P MECH1234
#PBS -l walltime=01:00:00
#PBS -o /mnt/lustre/users/jblogs/StarTesting/star.out
#PBS -e /mnt/lustre/users/jblogs/StarTesting/star.err
export PBS_JOBDIR=/mnt/lustre/users/jblogs/StarTesting
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
cat $PBS_NODEFILE > hosts.txt
singularity exec /home/apps/chpc/compmech/Siemens/star19_04_007.sif starccm+ -licpath 1999@dtn -batch run -rsh ssh -mpi intel -power -podkey <use own 22-char PoDkey> -np $nproc -machinefile hosts.txt testCase.sim > run.out
Notes on the above jobscript
  1. Activating the singularity image automatically sets the path to the starccm+ executable.
  2. This procedure only works with Intel MPI. OpenMPI launches its remote worker processes in a different way, which is not compatible with this procedure.
  3. The singularity environment is not available on the visualisation nodes chpclic1 and chpcviz1. If you need an interactive environment, get a VNC session on a normal interactive compute node and launch the interface with singularity exec /home/apps/chpc/compmech/Siemens/star19_04_007.sif starccm+ -graphics mesa_swr -rthreads 12 to use Mesa software rendering.
  4. The PBS machinefile environment variable $PBS_NODEFILE is not visible inside the singularity container. For this reason we “cat” the content to a local file and pass that to starccm+.
  5. Starting from version 19.04.007, the executables installed on Lengau were compiled with the clang compiler from llvm 17 instead of gcc. This should make no discernible difference, but please let us know if you find anything anomalous.
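The nodefile workaround from note 4 can be tried on its own. This sketch uses a mock nodefile in place of the real $PBS_NODEFILE, which only exists inside a running PBS job:

```shell
# Mock $PBS_NODEFILE: 3 nodes x 2 MPI ranks each, one hostname per rank.
printf 'cnode0001\ncnode0001\ncnode0002\ncnode0002\ncnode0003\ncnode0003\n' > hosts.txt
# One line per MPI rank, so the line count is the total process count.
nproc=$(wc -l < hosts.txt)
echo "$nproc"   # 6
```

Because hosts.txt is an ordinary file in the working directory, it is visible inside the Singularity container, unlike the $PBS_NODEFILE environment variable.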

Post 19.02 versions and FSI

Some users have reported that they are unable to run FSI analyses using more than one node. If you are still experiencing such difficulties, please get in touch and provide sample output files. We have, however, now found one solution, which is to set the temporary write directory for the direct solver to the current directory, instead of a dedicated tmp directory. This setting must be made in the sim file, and can be accessed under the direct solver's settings in the GUI. Please also refer to the relevant page of the STAR-CCM+ manual.

Supplementary guidelines

Here is a short video tutorial which demonstrates how to run STAR-CCM+ interactively on a compute node, in order to set up a case, generate a mesh or to do graphical post-processing.

Here are some notes, courtesy of Christiaan de Wet of Aerotherm, the South African distributor of the software.

CHPC Installation

The CHPC has an installation of STAR-CCM+, but with no license. Approved users can use the software on the CHPC cluster, but need to work off their own licenses. We have retired older versions. We continue to offer 15.06.008, 16.02.008, 16.06.008, 17.02.007, 17.04.007, 17.06.007, 18.02.008, 18.04.008 and 18.06.006 in mixed and double precision versions. 13.04.010 is available in mixed precision only. If you need a different version, please contact CHPC support through the CHPC website.

Installation

Find the different versions under /home/apps/chpc/compmech/Siemens/. Although the CHPC acknowledges that STAR-CCM+ is now a Siemens product, the existing installation directory name /apps/chpc/compmech/CFD/CD-adapco/ will be kept in order to prevent breaking existing scripts. A symbolic link has been added, so that all versions are also available under /apps/chpc/compmech/CFD/Siemens.

Licensing

Because the cluster compute nodes do not have direct access to the internet, it is necessary to use ssh-tunneling through another node to contact the user's license server, which will need to have the license manager and vendor daemon ports open. Tunneling is done via a separate node, either chpclic1 or dtn. Please contact CHPC Support and supply the public IP address and port numbers of your license server so that the CHPC firewall can be configured to allow the necessary traffic. You will need to take similar firewall configuration measures at your end. Also please add the lines at the end of the script to kill your ssh-tunnels on completion of the job.

For the majority of users simply checking out PoD licenses from the international STAR-CCM+ license servers, it is easiest to use the CHPC's permanent tunnels on either dtn or chpclic1 (soon to be retired). As in the example scripts given below, you can simply set the license variable to point to 1999@dtn or 1999@chpclic1.

Running a STAR-CCM+ Job (pre-2024)

On the CHPC clusters all simulations are submitted as jobs to the PBS Pro job scheduler which will assign your job to the appropriate queue and machine.

Example job script:

runSTAR.qsub
#!/bin/bash
##### The following line will request 4 (virtual) nodes, each with 24 cores running 24 mpi processes for 
##### a total of 96-way parallel. 
#PBS -l select=4:ncpus=24:mpiprocs=24
#PBS -q normal
##### Supply YOUR programme code in the next line
#PBS -P MECH0000
#PBS -l walltime=1:00:00
#PBS -o /mnt/lustre/users/username/starccmtesting/star.out
#PBS -e /mnt/lustre/users/username/starccmtesting/star.err
##### The following two lines will send the user an e-mail when the job aborts, begins or ends.
#PBS -m abe
#PBS -M username@email.co.za
#####  Set up path.  
export PATH=/apps/chpc/compmech/CFD/Siemens/18.06.006/STAR-CCM+18.06.006/star/bin:$PATH
####  Tell solver where to look for the license.  
####  dtn is correct here, there are ssh tunnels from dtn to the Siemens license servers.  
####  We are following a belts, braces and modest 
####  underwear approach here by specifying the LM and CDLMD license files as well as giving a license path on the 
####  command line.
export LM_LICENSE_FILE=1999@dtn
export CDLMD_LICENSE_FILE=1999@dtn
#### There is no -d option available under PBS Pro, therefore 
#### explicitly set working directory and change to that.
export PBS_JOBDIR=/mnt/lustre/users/username/starccmtesting
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
#### This is a minimal run instruction, 
####  it will run the solver until reaching the stopping criteria set in the sim file.
starccm+ -licpath 1999@dtn -batch run -power -podkey [your 22-character podkey] -rsh ssh -np $nproc -machinefile $PBS_NODEFILE simulationfilename.sim > run.out

Running STAR-CCM+ on GPU (experimental)

Within fairly strict limitations, it is now possible to run STAR-CCM+ on Nvidia GPUs instead of CPUs. We have good news and bad news for you about this. The good news is that the performance is spectacularly good and can be regarded as game-changing. A single V100 card has more or less the same performance as 8 Lengau compute nodes with 192 cores. The bad news items are:

  • Only some physics models and solvers are supported, refer to the STAR-CCM+ documentation for more information
  • The CHPC's GPU cluster is very small, consisting of only 30 Nvidia V100 cards, distributed over several compute nodes
  • Although multi-GPU running works well, the CHPC's GPU cluster does not currently allow multi-GPU runs that span multiple nodes. It is also only worth doing on nodes where the GPUs are connected through NVlink, rather than the PCIe bus.
  • At this stage, the CHPC is not opening up the GPU resources to STAR-CCM+ users in general, but watch this space.

Setting up STAR-CCM+ runs on the GPU cluster

The most important thing to bear in mind is that there should be one MPI rank for each GPU, no more and no less. The GPU nodes have plenty of CPU cores, so you may as well assign 10 CPU cores to each GPU, or just 1; it does not matter. The entire job runs on the GPU, although each GPU requires one CPU core to control it.

Please note that the walltime limit on the GPU queues is just 12 hours. The GPUs have limited amounts of memory, so if your job mysteriously crashes, the most probable cause is inadequate memory. Most of the cards have only 16GB, although there are some with 32GB. For this reason, do not use double precision unless you really need it.

There are separate GPU queues for 1, 2, 3 and 4 GPU jobs. Ensure that you use the correct one.
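As a rough guide, the matching resource requests below keep one MPI rank per GPU. The gpu_1 and gpu_3 queue names appear in the example scripts further down; gpu_2 and gpu_4 are assumed by analogy, so check the queue list before using them:

```shell
#PBS -l select=1:ncpus=2:mpiprocs=1:ngpus=1   # queue gpu_1
#PBS -l select=1:ncpus=4:mpiprocs=2:ngpus=2   # queue gpu_2 (name assumed)
#PBS -l select=1:ncpus=6:mpiprocs=3:ngpus=3   # queue gpu_3
#PBS -l select=1:ncpus=8:mpiprocs=4:ngpus=4   # queue gpu_4 (name assumed)
```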

Once the job is running, find the hostname of the node where your job is running with qstat -n1 followed by the job number. Then ssh into that node and monitor the GPU activity with nvidia-smi, or, more usefully, with nvtop. There is a module for nvtop:

 module load chpc/compmech/nvtop/1.2.2

Alternatively, just give the full path to nvtop or set up an alias for it:

/apps/chpc/compmech/nvtop/bin/nvtop

Single GPU example script

runStarccm1GPU.qsub
#!/bin/bash
#PBS -l select=1:ncpus=2:mpiprocs=1:ngpus=1
#PBS -q gpu_1
#PBS -P MECH1234
#PBS -l walltime=02:00:00
#PBS -o /mnt/lustre/users/jblogs/starccmGPUcase/star_1gpu.out
#PBS -e /mnt/lustre/users/jblogs/starccmGPUcase/star_1gpu.err
export LM_LICENSE_FILE=1999@dtn
export CDLMD_LICENSE_FILE=1999@dtn
# Edit this next line to select the appropriate version.
export PATH=/home/apps/chpc/compmech/Siemens/17.06.007/STAR-CCM+17.06.007/star/bin:$PATH
#### explicitly set working directory and change to that.
export PBS_JOBDIR=/mnt/lustre/users/jblogs/starccmGPUcase
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
starccm+ -licpath 1999@dtn -batch run -power -podkey 12345ABCDE09876fghijk0 -rsh ssh -np $nproc -machinefile $PBS_NODEFILE -gpgpu auto:1 mysimfile.sim > run.out

Triple GPU example script

runstarccm3GPU.qsub
#!/bin/bash
#PBS -l select=1:ncpus=6:mpiprocs=3:ngpus=3
#PBS -q gpu_3
#PBS -P MECH1234
#PBS -l walltime=02:00:00
#PBS -o /mnt/lustre/users/jblogs/starccmGPUcase/star_3gpu.out
#PBS -e /mnt/lustre/users/jblogs/starccmGPUcase/star_3gpu.err
export LM_LICENSE_FILE=1999@dtn
export CDLMD_LICENSE_FILE=1999@dtn
# Edit this next line to select the appropriate version.
export PATH=/home/apps/chpc/compmech/Siemens/17.06.007/STAR-CCM+17.06.007/star/bin:$PATH
#### explicitly set working directory and change to that.
export PBS_JOBDIR=/mnt/lustre/users/jblogs/starccmGPUcase
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
starccm+ -licpath 1999@dtn -batch run -power -podkey 12345ABCDE09876fghijk0 -rsh ssh -np $nproc -machinefile $PBS_NODEFILE -gpgpu auto:3 mysimfile.sim > run.out

STAR-CCM+ performance on NVidia-V100 GPUs

This graph illustrates the difference in performance between the GPUs and CPUs:

Running a Design Manager Job

STAR-CCM+ design sweeps and optimisations can be done with Design Manager. Here is an example of a suitable PBS script for running on two nodes. Think carefully about license usage; each design point analysis will consume a solver process.

runStarCCM.qsub
#!/bin/bash
#PBS -l select=2:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -P MECH1234
#PBS -q normal
#PBS -l walltime=8:00:00
#PBS -o /mnt/lustre/users/username/DMExample/dm.out
#PBS -e /mnt/lustre/users/username/DMExample/dm.err
export STARCCMHOME=/apps/chpc/compmech/CFD/Siemens/15.04.008/STAR-CCM+15.04.008
export PYTHONHOME=$STARCCMHOME/designmanager/Ver2017.04/Python27
export HEEDS_ROOT=$STARCCMHOME/designmanager/Ver2017.04/LX64/solver
export LD_LIBRARY_PATH=$STARCCMHOME/designmanager/Ver2017.04/LX64:$STARCCMHOME/designmanager/Ver2017.04/LX64/solver:$LD_LIBRARY_PATH
export PATH=$STARCCMHOME/designmanager/Ver2017.04/LX64/solver:$STARCCMHOME/star/bin:$PATH
##### Set working directory.
export PBS_JOBDIR=/mnt/lustre/users/username/DMExample
cd $PBS_JOBDIR
#### Get the total number of cores
nproc=`cat $PBS_NODEFILE | wc -l`
# Build SSH tunnels for DM license server.  
# Use the existing tunnels on chpclic1 or dtn for the PoD license
# Assume that the DM license is on port 27120 of the server license.unseen.ac.za
# STAR-CCM+ will look first for a PoD license
# Bear in mind that each design point analysis will consume a solver process - this may 
#  consume PoD time very quickly
masternode=`hostname -s`
ssh -f jblogs@chpclic1 -L *:27120:license.unseen.ac.za:27120 -N
ssh -f jblogs@chpclic1 -L *:27121:license.unseen.ac.za:27121 -N
export CDLMD_LICENSE_FILE=1999@dtn:27120@$masternode
export LM_LICENSE_FILE=1999@dtn:27120@$masternode
##### Starccm+ run instruction.
starlaunch jobmanager  --command "$STARCCMHOME/star/bin/starccm+ -licpath 1999@$masternode:27120@$masternode -power -podkey <22-characterPoDKey>  -rsh ssh -batch run -mpi intel -np $nproc -machinefile $PBS_NODEFILE  ExampleDM_Project.dmprj" --slots 0  --batchsystem pbs  >> disasstar.out 2>&1
### Remove the ssh tunnels:
kill -9 `ps ux | grep "ssh -f" |  grep -o -E '[0-9]+' | head -1 | sed -e 's/^0\+//' `
kill -9 `ps ux | grep "ssh -f" |  grep -o -E '[0-9]+' | head -1 | sed -e 's/^0\+//' `

In the Design Manager setup, under Design Study - Settings - Run Settings, give appropriate values to Simultaneous Jobs and Compute Processes. Their product should equal the total number of cores (nodes × mpiprocs) available to the run.
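For example, the two-node script above provides 2 × 24 = 48 cores. One valid split, purely illustrative, is 4 simultaneous jobs of 12 cores each:

```shell
nodes=2; mpiprocs=24
total=$((nodes * mpiprocs))         # 48 cores available to the run
sim_jobs=4                          # Simultaneous Jobs
compute_procs=$((total / sim_jobs)) # Compute Processes per job
echo "$sim_jobs x $compute_procs = $((sim_jobs * compute_procs))"   # 4 x 12 = 48
```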

Under Design Study - Settings - Compute Resources, set Type as Direct.

Monitoring a STAR-CCM+ Job

There are different ways of monitoring your solution. Please refer to the page on Remote Visualization for instructions on how to get a VNC session on a visualization server or on a compute node. Add the path to the VirtualGL installation with the environment setting: 'export PATH=$PATH:/opt/VirtualGL/bin'. Use qstat -f <job number> to find the actual compute node being used as the master node for your simulation. Start up the STAR-CCM+ GUI in your VNC session with the command line 'vglrun starccm+ -rsh ssh -mesa'. The vglrun wrapper and the -mesa command line option are both required to ensure that OpenGL works in the VNC session, unless you simply started your X-session with vglrun. Once you have the GUI open, you can connect to the solver process on your compute node, and you will be able to monitor your run and generate images.

Be careful of leaving an idle GUI open, especially when using a Power on Demand license, where the license is wallclock hour based. We have set up a 1 hour inactive time limit in order to try and limit the damage.

Meshing on a compute node

If you need to build a large mesh interactively, it is probably best to do so on a compute node, because most of the compute nodes (those numbered below 1000) have 128 GB of RAM. If you really need to build a big mesh, consider using one of the fat nodes, which have 1 TB of RAM each. Using a compute node for a task that requires graphics involves a little bit of trickery, but is really not that difficult, and the process is demonstrated in this video.

Getting use of a compute node

Obtain exclusive use of a compute node by logging into Lengau according to your usual method, and obtaining an interactive session:

qsub -I -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00

Obviously replace MECH1234 with the shortname of your particular Research Programme. Note down the name of the compute node that you have been given, let us use cnode0123 for this example. You can also use an interactive session like this to perform “service” tasks, such as archiving or compressing data files, which will be killed when attempted on the login node.
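A typical "service" task of this kind, shown here with a hypothetical results directory, is compressing a case directory from within the interactive session:

```shell
# Create a stand-in results directory (replace with your real case directory).
mkdir -p StarTesting && echo "sample output" > StarTesting/run.out
# Compress it; run this on the interactive node, not the login node.
tar czf StarTesting.tar.gz StarTesting
# Verify the archive contents.
tar tzf StarTesting.tar.gz
```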

Getting a graphics-capable session on a compute node

There are three ways of doing this. As of 14 May 2019, the preferred method is to use a VNC session on the compute node:

  • VNC session on the compute node. Please read the instructions
  • X-forwarding by means of a VNC session
  • X-forwarding in two stages

X-forwarding in two stages is really only a practical proposition if you are on a fast, low-latency connection into the Sanren network. Otherwise, get the VNC session first by following these instructions.

Double X-forwarding

From an X-windows capable workstation (in other words, from a Linux terminal command prompt, or an emulator on Windows that includes an X-server, such as MobaXterm), log in to scp.chpc.ac.za:

 ssh -X jblogs@scp.chpc.ac.za 

We use scp instead of lengau for this purpose, because the login node will not permit sustained high bandwidth network traffic. Once logged in, do a second X-forwarding login to your assigned compute node:

 ssh -X cnode0123 
X-forwarding from the VNC session

A normal broadband connection will probably be too slow to use the double X-forwarding method. In this case, first get the VNC desktop going, as described above, and open a terminal. From this terminal, log in to your assigned compute node:

 ssh -X cnode0123 

Licensing

The compute nodes do not have direct internet access, so it is necessary to tunnel through chpclic1 to establish contact with the license server; in this case we will use the Power on Demand license servers of the company formerly known as CD-Adapco. There are two approaches that can be used here. One is to set up your own tunnels. Use your own Lengau login ID, unless you happen to be Joe Blogs.

ssh -f jblogs@chpclic1 -L 1999:flex.cd-adapco.com:1999 -N
ssh -f jblogs@chpclic1 -L 2099:flex.cd-adapco.com:2099 -N
export LM_LICENSE_FILE=1999@localhost
export CDLMD_LICENSE_FILE=1999@localhost

However, because there are a lot of users on Lengau using STAR-CCM+ this way, we have set up permanent ssh tunnels through both chpclic1 and dtn. That means that the power on demand license appears to be available on chpclic1 and dtn, so there is no need for your own tunnels.

export LM_LICENSE_FILE=1999@chpclic1
export CDLMD_LICENSE_FILE=1999@chpclic1

The server chpclic1 will shortly be retired, therefore we recommend that you use dtn instead.

export LM_LICENSE_FILE=1999@dtn
export CDLMD_LICENSE_FILE=1999@dtn

Set up the path

Obviously set the path to the version that you are using:

export PATH=/apps/chpc/compmech/CFD/Siemens/12.04.010-R8/STAR-CCM+12.04.010-R8/star/bin:$PATH

Run starccm+

You can now simply start the program in the usual way, with the command

 starccm+ -graphics mesa -rthreads 12 & 

Do the usual things you need to in order to access your PoD key. Thanks to the magic of Mesa, you have access to the GUI as well as the graphics capability of the interface. Parallel mesa is supported in the more recent versions of STAR-CCM+, and you may use up to 16 threads for rendering. In this example we have specified 12 rendering threads. The parallel mesa graphics performance cannot match that of a powerful GPU, but it is quite respectable, and even supports the advanced rendering options.

Kill the ssh-tunnels

If you have made your own ssh tunnels, find the process IDs of the two tunnels once the run has completed:

 ps ax | grep ssh 

and kill them with a command like

 kill -9 12345 12346 

assuming that 12345 and 12346 are the process IDs of the two tunnels.

Alternatively, use the following command to find a tunnel's process ID and kill it; issue it once per tunnel:

kill -9 `ps ux | grep "ssh -f" |  grep -o -E '[0-9]+' | head -1 | sed -e 's/^0\+//' `
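The one-liner above scrapes the first PID out of ps output. A simpler equivalent is pgrep -f / kill, demonstrated here on a harmless sleep process standing in for a real tunnel:

```shell
# Stand-in for an ssh tunnel: a background sleep process.
sleep 300 &
# Find its PID by matching the command line, then kill it.
pid=$(pgrep -n -f 'sleep 300')
kill "$pid"
# For a real tunnel you would match it instead: pid=$(pgrep -n -f 'ssh -f')
```

Note that pgrep -f matches against the full command line, so make the pattern specific enough that it cannot match unrelated processes you own.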
/app/dokuwiki/data/pages/howto/starccm.txt · Last modified: 2025/03/04 11:36 by ccrosby