STAR-CCM+

WARNING: Choice of MPI implementation

Although we have obtained the best scaling to large core counts with Platform MPI, we have experienced incompatibilities between STAR-CCM+ and Platform MPI. Until we get to the bottom of this, please use Intel MPI instead. Our latest experience is that the following MPI implementations all work with version 14.04.011:

  • intel
  • platform
  • openmpi

Please let us know if you experience any difficulties related to the choice of MPI implementation.
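
The MPI implementation is selected with the -mpi option on the starccm+ command line. A minimal illustration, assuming a simulation file called mysimulation.sim (see the full job scripts below for complete command lines):

 starccm+ -mpi intel -batch run mysimulation.sim 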

CHPC Installation

The CHPC has an installation of STAR-CCM+, but no license. Approved users may run the software on the CHPC cluster, but must use their own licenses. We have retired versions older than 12.04.011, but continue to offer 12.04.011, 12.06.010, 12.06.011, 13.02.011, 13.02.013, 13.04.010, 13.06.010, 13.06.011, 13.06.012, 14.02.010 and 14.04.011, in mixed and double precision versions. If you need a different version, please contact CHPC support through the CHPC website.

Installation

Find the different versions under '/apps/chpc/compmech/CFD/Siemens/'. Note that from version 11.06.010 the structure of the installation path has changed: the 11.06.010 software versions are installed under the directories 11.06.010 and 11.06.010-R8 respectively, and not directly under the Siemens directory. Although the CHPC acknowledges that STAR-CCM+ is now a Siemens product, the existing installation directory name '/apps/chpc/compmech/CFD/CD-adapco/' has been kept in order to avoid breaking existing scripts. A symbolic link has been added, so that all versions are also available under '/apps/chpc/compmech/CFD/Siemens'.
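
To see which versions are currently installed, simply list the installation directory; the -R8 suffix indicates a double precision build (see the note in the job script below):

 ls /apps/chpc/compmech/CFD/Siemens/ 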

Licensing

Because the cluster compute nodes do not have direct access to the internet, it is necessary to use ssh tunneling through another node to contact the user's license server, which needs to have its license manager and vendor daemon ports open. Tunneling is done via a separate node, chpclic1. Please supply the public IP address and port numbers of your license server to CHPC support so that the CHPC firewall can be configured to allow the necessary traffic. You will need to take similar firewall configuration measures at your end. Also, please add lines at the end of your job script to kill your ssh tunnels on completion of the job, as in the sketch below.
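
As a sketch only, assuming a hypothetical license server licserver.example.ac.za with license manager port 27000 and vendor daemon port 27001 (replace these with your own server's host name and the ports registered with CHPC support), the tunnel and cleanup lines in a job script would look like this:

masternode=`hostname -s`
# Open tunnels through chpclic1 to your own license server (hypothetical host and ports)
ssh -f username@chpclic1 -L *:27000:licserver.example.ac.za:27000 -N
ssh -f username@chpclic1 -L *:27001:licserver.example.ac.za:27001 -N
# Point the solver at the tunnelled license manager port on the master node
export CDLMD_LICENSE_FILE=27000@$masternode
export LM_LICENSE_FILE=27000@$masternode
# ... solver run instruction goes here ...
# At the end of the script, kill the ssh tunnels (issue one kill per tunnel)
kill -9 `ps ux | grep "ssh -f" |  grep -o -E '[0-9]+' | head -1 | sed -e 's/^0\+//' `
kill -9 `ps ux | grep "ssh -f" |  grep -o -E '[0-9]+' | head -1 | sed -e 's/^0\+//' `

The simpler PoD example script below uses the existing tunnels on chpclic1 instead, while the Design Manager script further down builds its own tunnels in this way.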

Running a STAR-CCM+ Job

On the CHPC clusters all simulations are submitted as jobs to the PBS Pro job scheduler which will assign your job to the appropriate queue and machine.

Example job script:

runSTAR.qsub
#!/bin/bash
##### The following line will request 10 (virtual) nodes, each with 24 cores running 24 mpi processes for 
##### a total of 240-way parallel. 
#PBS -l select=10:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -q normal
##### Supply YOUR programme code in the next line
#PBS -P MECH0000
#PBS -l walltime=1:00:00
#PBS -o /home/username/scratch/starccmtesting/star.out
#PBS -e /home/username/scratch/starccmtesting/star.err
##### The following two lines will send the user an e-mail when the job aborts, begins or ends.
#PBS -m abe
#PBS -M username@email.co.za
#####  Set up path.  
#####  Drop the "-R8" part if you want to use the mixed precision version of the code.
export PATH=/apps/chpc/compmech/CFD/Siemens/13.06.011-R8/STAR-CCM+13.06.011-R8/star/bin:$PATH
####  Tell solver where to look for the license.  
####  chpclic1 is correct here, there are ssh tunnels from chpclic1 to the Siemens license servers.  
####  We are following a belts, braces and modest 
####  underwear approach here by specifying the LM and CDLMD license files as well as giving a license path on the 
####  command line.
export LM_LICENSE_FILE=1999@chpclic1
export CDLMD_LICENSE_FILE=1999@chpclic1
#### There is no -d option available under PBS Pro, therefore 
#### explicitly set working directory and change to that.
export PBS_JOBDIR=/home/username/scratch/starccmtesting
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
#### This is a minimal run instruction, 
####  it will run the solver until reaching the stopping criteria set in the sim file.
####  Please note that intel MPI should be used, instead of platform MPI, which has been giving trouble
starccm+ -licpath 1999@chpclic1 -batch run -power -podkey [your 22-character podkey] -mpi intel -fabric IBV -rsh ssh -np $nproc -machinefile $PBS_NODEFILE simulationfilename.sim > run.out
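
Submit the job from the login node in the usual way:

 qsub runSTAR.qsub 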

Running a Design Manager Job

STAR-CCM+ design sweeps and optimisations can be done with Design Manager. Here is an example of a suitable PBS script for running on two nodes. Think carefully about license usage: each design point analysis will consume a solver process.

runStarCCM.qsub
#!/bin/bash
#PBS -l select=2:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -P MECH1234
#PBS -q normal
#PBS -l walltime=8:00:00
#PBS -o /home/jblogs/lustre/DMExample/dm.out
#PBS -e /home/jblogs/lustre/DMExample/dm.err
export STARCCMHOME=/apps/chpc/compmech/CFD/Siemens/12.04.011/STAR-CCM+12.04.011
export PYTHONHOME=$STARCCMHOME/designmanager/Ver2017.04/Python27
export HEEDS_ROOT=$STARCCMHOME/designmanager/Ver2017.04/LX64/solver
export LD_LIBRARY_PATH=$STARCCMHOME/designmanager/Ver2017.04/LX64:$STARCCMHOME/designmanager/Ver2017.04/LX64/solver:$LD_LIBRARY_PATH
export PATH=$STARCCMHOME/designmanager/Ver2017.04/LX64/solver:$STARCCMHOME/star/bin:$PATH
##### Set working directory.
export PBS_JOBDIR=/home/jblogs/lustre/DMExample
cd $PBS_JOBDIR
#### Get the total number of cores
nproc=`cat $PBS_NODEFILE | wc -l`
# Build SSH tunnels for license servers.  
#  You may also prefer to use the existing tunnels on chpclic1, see the simpler example above
# Assume that DM license is in port 27120 on the server license.unseen.ac.za 
# STAR-CCM+ will look first for a PoD license
# Bear in mind that each design point analysis will consume a solver process - this may 
#  consume PoD time very quickly
masternode=`hostname -s`
ssh -f jblogs@chpclic1 -L *:1999:flex.cd-adapco.com:1999 -N
ssh -f jblogs@chpclic1 -L *:2099:flex.cd-adapco.com:2099 -N
ssh -f jblogs@chpclic1 -L *:27120:license.unseen.ac.za:27120 -N
ssh -f jblogs@chpclic1 -L *:27121:license.unseen.ac.za:27121 -N
export CDLMD_LICENSE_FILE=1999@$masternode:27120@$masternode
export LM_LICENSE_FILE=1999@$masternode:27120@$masternode
##### Starccm+ run instruction.
starlaunch jobmanager  --command "$STARCCMHOME/star/bin/starccm+ -licpath 1999@$masternode:27120@$masternode -power -podkey <22-characterPoDKey>  -rsh ssh -batch run -mpi intel -np $nproc -machinefile $PBS_NODEFILE  ExampleDM_Project.dmprj" --slots 0  --batchsystem pbs  >> disasstar.out 2>&1
### Remove the ssh tunnels:
kill -9 `ps ux | grep "ssh -f" |  grep -o -E '[0-9]+' | head -1 | sed -e 's/^0\+//' `
kill -9 `ps ux | grep "ssh -f" |  grep -o -E '[0-9]+' | head -1 | sed -e 's/^0\+//' `
kill -9 `ps ux | grep "ssh -f" |  grep -o -E '[0-9]+' | head -1 | sed -e 's/^0\+//' `
kill -9 `ps ux | grep "ssh -f" |  grep -o -E '[0-9]+' | head -1 | sed -e 's/^0\+//' `

In the Design Manager setup, under Design Study - Settings - Run Settings, give appropriate values to Simultaneous Jobs and Compute Processes. Their product should equal the number of cores (nodes X mpiprocs) available to the run. For example, with the two 24-core nodes requested above (48 cores), 4 Simultaneous Jobs with 12 Compute Processes each will use all of the available cores.

Under Design Study - Settings - Compute Resources, set Type as Direct.

Monitoring a STAR-CCM+ Job

There are different ways of monitoring your solution. Please refer to the page on Remote Visualization for instructions on how to get a VNC session on a visualization server or on a compute node. Add the path to the VirtualGL installation with the environment setting 'export PATH=$PATH:/opt/VirtualGL/bin'. Use 'qstat -f <job number>' to find the compute node being used as the master node for your simulation. Start up the STAR-CCM+ GUI in your VNC session with the command line 'vglrun starccm+ -rsh ssh -mesa'. The vglrun wrapper and the -mesa command line option are both required to ensure that OpenGL works in the VNC session, unless you started your X-session with vglrun. Once the GUI is open, you can connect to the solver process on your compute node, monitor your run and generate images.
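
In summary, the typical sequence from a terminal in your VNC session looks like this (the job number 123456 is just an example):

# Find the compute node acting as master node for the job
qstat -f 123456 | grep exec_host
# Make vglrun available and start the GUI
export PATH=$PATH:/opt/VirtualGL/bin
vglrun starccm+ -rsh ssh -mesa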

Be careful of leaving an idle GUI open, especially when using a Power on Demand license, which is charged by wallclock hour. We have set up a 1-hour inactivity limit in order to try to limit the damage.

Meshing on a compute node

If you need to build a large mesh interactively, it is probably best to do so on a compute node, because most of the compute nodes (those numbered below 1000) have 128 GB of RAM. If you really need to build a very big mesh, consider using one of the fat nodes, which have 1 TB of RAM each. Using a compute node for a task that requires graphics involves a little bit of trickery, but is really not that difficult.

Getting use of a compute node

Obtain exclusive use of a compute node by logging into Lengau according to your usual method, and obtaining an interactive session:

qsub -I -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00

Obviously, replace MECH1234 with the short name of your particular Research Programme. Note down the name of the compute node that you have been given; we will use cnode0123 in this example. You can also use an interactive session like this to perform “service” tasks, such as archiving or compressing data files, which would be killed if attempted on the login node. An example is given below.
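
For example, to compress an old results directory from within the interactive session (the directory and archive names are just placeholders):

cd /home/username/lustre
tar -czf oldresults.tar.gz oldresults/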

Getting a graphics-capable session on a compute node

There are three ways of doing this. As of 14 May 2019, the preferred method is to use a VNC session on the compute node:

  • VNC session on the compute node. Please read the instructions
  • X-forwarding by means of a VNC session
  • X-forwarding in two stages

X-forwarding in two stages is really only a practical proposition if you are on a fast, low-latency connection into the SANReN network. Otherwise, get the VNC session going first by following these instructions.

Double X-forwarding

From an X-windows capable workstation (in other words, from a Linux terminal command prompt, or an emulator on Windows that includes an X-server, such as MobaXterm), log in to Lengau:

 ssh -X jblogs@lengau.chpc.ac.za 

Once logged in, do a second X-forwarding login to your assigned compute node:

 ssh -X cnode0123 

X-forwarding from the VNC session

A normal broadband connection will probably be too slow to use the double X-forwarding method. In this case, first get the VNC desktop going, as described above, and open a terminal. From this terminal, log in to your assigned compute node:

 ssh -X cnode0123 

Licensing

The compute nodes do not have direct internet access, so it is necessary to tunnel through chpclic1 to establish contact with the license server. In this case we will use the Power on Demand license servers of the company formerly known as CD-adapco. Use your own Lengau login ID, unless you happen to be Joe Blogs.

ssh -f jblogs@chpclic1 -L 1999:flex.cd-adapco.com:1999 -N
ssh -f jblogs@chpclic1 -L 2099:flex.cd-adapco.com:2099 -N
export LM_LICENSE_FILE=1999@localhost
export CDLMD_LICENSE_FILE=1999@localhost

Set up the path

Obviously set the path to the version that you are using:

export PATH=/apps/chpc/compmech/CFD/Siemens/12.04.010-R8/STAR-CCM+12.04.010-R8/star/bin:$PATH

Run starccm+

You can now simply start the program in the usual way, with the command

 starccm+ 

Do the usual things you need to do in order to access your PoD key. Thanks to the magic of Mesa, you have access to the GUI and graphics capability of the interface.

Kill the ssh-tunnels

When the process has been completed, find the process IDs of the two ssh tunnels

 ps ax | grep ssh 

and kill them with a command like

 kill -9 12345 12346 

assuming that 12345 and 12346 are the process IDs of the two tunnels.

Alternatively, use the following command to find a tunnel's process ID and kill it; issue it once per tunnel:

kill -9 `ps ux | grep "ssh -f" |  grep -o -E '[0-9]+' | head -1 | sed -e 's/^0\+//' `