STAR-CCM+

The CHPC has an installation of STAR-CCM+, but does not hold a license for it. Approved users may run the software on the CHPC cluster, but must work off their own licenses. Versions 10.02.010, 11.02.009, 11.02.010, 11.04.010, 11.06.010, 12.02.011, 12.04.010, 12.04.011, 12.06.010 and 12.06.011 are currently available on the system, each in single and double precision. If you need a different version, please contact CHPC support through the CHPC website.

Installation

Find the different versions under '/apps/chpc/compmech/CFD/CD-adapco/'. Note that from version 11.06.010 onwards the structure of the installation path has changed: the single and double precision versions of 11.06.010 are installed under the directories 11.06.010 and 11.06.010-R8 respectively, and not directly under the CD-adapco directory. The CHPC acknowledges that STAR-CCM+ is now a Siemens product, but will keep the current installation directory name in order to avoid unnecessary confusion.
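
To see exactly which versions are available, simply list the installation directory:

 ls /apps/chpc/compmech/CFD/CD-adapco/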

Licensing

Because the cluster compute nodes do not have direct access to the internet, it is necessary to use ssh-tunneling through another node to contact the user's license server, which will need to have the license manager and vendor daemon ports open. Tunneling is done via a separate node, chpclic1. Please supply the public IP address and port numbers of your license server to CHPC Support so that the CHPC firewall can be configured to allow the necessary traffic. You will need to take similar firewall configuration measures at your end. Also, please add lines at the end of your job script to kill the ssh-tunnels on completion of the job.
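
As an illustration, a pair of tunnels for a hypothetical license server license.myuniversity.ac.za, serving the license manager on port 27000 and the vendor daemon on port 27001, would look like this (replace the server name, port numbers and username with your own):

 ssh -f username@chpclic1 -L 27000:license.myuniversity.ac.za:27000 -N
 ssh -f username@chpclic1 -L 27001:license.myuniversity.ac.za:27001 -N
 export CDLMD_LICENSE_FILE=27000@localhost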

Running a STAR-CCM+ Job

On the CHPC clusters, all simulations are submitted as jobs to the PBS Pro job scheduler, which will assign your job to the appropriate queue and machine.

Example job script:

runSTAR.qsub
#!/bin/bash
##### The following line will request 10 (virtual) nodes, each with 24 cores running 24 mpi processes for 
##### a total of 240-way parallel. 
#PBS -l select=10:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -q normal
##### Supply YOUR programme code in the next line
#PBS -P MECH0000
#PBS -l walltime=1:00:00
#PBS -o /home/username/scratch/starccmtesting/star.out
#PBS -e /home/username/scratch/starccmtesting/star.err
##### The following two lines will send the user an e-mail when the job aborts, begins or ends.
#PBS -m abe
#PBS -M username@email.co.za
##### Set up path.  
##### Drop the "-R8" part if you want to use the single precision version of the code.
#####  Note the changed path from version 11.06.010 onwards.
export PATH=/apps/chpc/compmech/CFD/CD-adapco/12.06.011-R8/STAR-CCM+12.06.011-R8/star/bin:$PATH
##### Set up ssh-tunnels to your license server, substituting the correct port numbers and server IP address.
##### The port numbers and server IP used here are for CD-Adapco's Power on Demand server.  To use this you 
##### need a valid account and PoD key, which gets entered on the command line.
#### lmgrd daemon
ssh -f username@chpclic1 -L 1999:flex.cd-adapco.com:1999 -N
#### vendor daemon port
ssh -f username@chpclic1 -L 2099:flex.cd-adapco.com:2099 -N
#### Tell solver where to look for the license.  
####  localhost is correct here, it follows from the ssh-tunneling.  We are following a belts, braces and modest 
####  underwear approach here by specifying the LM and CDLMD license files as well as giving a license path on the 
####  command line.
export LM_LICENSE_FILE=1999@localhost
export CDLMD_LICENSE_FILE=1999@localhost
#### There is no -d option available under PBS Pro, therefore 
#### explicitly set working directory and change to that.
export PBS_JOBDIR=/home/username/scratch/starccmtesting
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
#### This is a minimal run instruction, 
####  it will run the solver until reaching the stopping criteria set in the sim file.
starccm+ -licpath 1999@localhost -batch run -power -podkey [your 22-character podkey] -mpi platform -fabric IBV -rsh ssh -np $nproc -machinefile $PBS_NODEFILE simulationfilename.sim > run.out
### Now kill your tunnels, using one line per tunnel:
kill -9 `ps ux | grep "ssh -f" | grep -v grep | awk '{print $2}' | head -1`
kill -9 `ps ux | grep "ssh -f" | grep -v grep | awk '{print $2}' | head -1`
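
Submit the script to the scheduler in the usual way, and keep an eye on the job with qstat:

 qsub runSTAR.qsub
 qstat -u username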

Running a Design Manager Job

STAR-CCM+ design sweeps and optimisations can be done with Design Manager. Here is an example of a suitable PBS script for running on two nodes. Think carefully about license usage: each design point analysis consumes a solver process.

runDesignManager.qsub
#!/bin/bash
#PBS -l select=2:ncpus=24:mpiprocs=24:nodetype=haswell_reg
#PBS -P MECH1234
#PBS -q normal
#PBS -l walltime=8:00:00
#PBS -o /home/jblogs/lustre/DMExample/dm.out
#PBS -e /home/jblogs/lustre/DMExample/dm.err
export STARCCMHOME=/apps/chpc/compmech/CFD/CD-adapco/12.04.011/STAR-CCM+12.04.011
export PYTHONHOME=$STARCCMHOME/designmanager/Ver2017.04/Python27:$PYTHONHOME
export HEEDS_ROOT=$STARCCMHOME/designmanager/Ver2017.04/LX64/solver:$HEEDS_ROOT
export LD_LIBRARY_PATH=$STARCCMHOME/designmanager/Ver2017.04/LX64:$STARCCMHOME/designmanager/Ver2017.04/LX64/solver:$LD_LIBRARY_PATH
export PATH=$STARCCMHOME/designmanager/Ver2017.04/LX64/solver:$STARCCMHOME/star/bin:$PATH
##### Set working directory.
export PBS_JOBDIR=/home/jblogs/lustre/DMExample
cd $PBS_JOBDIR
#### Get the total number of cores
nproc=`cat $PBS_NODEFILE | wc -l`
# Build ssh-tunnels for the license servers.
# Assume that the DM license is served on port 27120 by the server license.unseen.ac.za
# STAR-CCM+ will look for a PoD license first.
# Bear in mind that each design point analysis will consume a solver process - this may
#  consume PoD time very quickly.
masternode=`hostname -s`
ssh -f jblogs@chpclic1 -L *:1999:flex.cd-adapco.com:1999 -N
ssh -f jblogs@chpclic1 -L *:2099:flex.cd-adapco.com:2099 -N
ssh -f jblogs@chpclic1 -L *:27120:license.unseen.ac.za:27120 -N
ssh -f jblogs@chpclic1 -L *:27121:license.unseen.ac.za:27121 -N
export CDLMD_LICENSE_FILE=1999@$masternode:27120@$masternode
export LM_LICENSE_FILE=1999@$masternode:27120@$masternode
##### Starccm+ run instruction.
starlaunch jobmanager  --command "$STARCCMHOME/star/bin/starccm+ -licpath 1999@$masternode:27120@$masternode -power -podkey <22-characterPoDKey>  -rsh ssh -batch run -mpi platform -np $nproc -machinefile $PBS_NODEFILE  ExampleDM_Project.dmprj" --slots 0  --batchsystem pbs  >> disasstar.out 2>&1
### Remove the ssh tunnels, one line per tunnel:
kill -9 `ps ux | grep "ssh -f" | grep -v grep | awk '{print $2}' | head -1`
kill -9 `ps ux | grep "ssh -f" | grep -v grep | awk '{print $2}' | head -1`
kill -9 `ps ux | grep "ssh -f" | grep -v grep | awk '{print $2}' | head -1`
kill -9 `ps ux | grep "ssh -f" | grep -v grep | awk '{print $2}' | head -1`

In the Design Manager set up, under Design Study - Settings - Run Settings, give appropriate values to Simultaneous Jobs and Compute Processes. Their product should be the same as the number of cores (nodes X mpiprocs) available to the run.
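
For example, the two-node script above provides 2 X 24 = 48 cores, so Simultaneous Jobs = 4 and Compute Processes = 12 (4 X 12 = 48) would be one suitable combination.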

Under Design Study - Settings - Compute Resources, set Type as Direct.

Monitoring a STAR-CCM+ Job

There are different ways of monitoring your solution. Please refer to the page on Remote Visualization for instructions on how to get a VNC session on the visualization server. Add the path to the VirtualGL installation with the environment setting 'export PATH=$PATH:/opt/VirtualGL/bin'. Use qstat -f <job number> to find the actual compute node being used as master node for your simulation. Start up the STAR-CCM+ GUI in your VNC session with the command line 'vglrun starccm+ -rsh ssh -mesa'. The vglrun wrapper and the -mesa command line option are both required to ensure that OpenGL works in the VNC session, unless you started your X-session with vglrun in the first place. Once you have the GUI open, you can connect to the solver process on your compute node, monitor your run and generate images.
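
For example, to find the master node of your job, look at the exec_host attribute in the qstat output (the job number 123456 here is just a placeholder):

 qstat -f 123456 | grep exec_host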

Be careful of leaving an idle GUI open, especially when using a Power on Demand license, which is billed per wallclock hour. We have set up a 1-hour inactive time limit in order to try and limit the damage.

Meshing on a compute node

If you need to build a large mesh interactively, it is probably best to do so on a compute node, because most of the compute nodes (those numbered below 1000) have 128 GB of RAM. If you really need to build a big mesh, consider using one of the fat nodes, which have 1 TB of RAM each. Using a compute node for a task that requires graphics involves a little bit of trickery, but is really not that difficult.

Getting use of a compute node

Obtain exclusive use of a compute node by logging into Lengau according to your usual method, and obtaining an interactive session:

qsub -I -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00

Obviously replace MECH1234 with the shortname of your particular Research Programme. Note down the name of the compute node that you have been given; we will use cnode0123 in this example. You can also use an interactive session like this to perform “service” tasks, such as archiving or compressing data files, which would be killed if attempted on the login node.
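
For example, a data-compression task that would be killed on the login node can be run safely from the interactive session (the directory name here is hypothetical):

 tar -czf myresults.tar.gz myresults/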

Getting a graphics-capable session on a compute node

There are two ways of doing this:

  • X-forwarding by means of a VNC session
  • X-forwarding in two stages

X-forwarding in two stages is really only a practical proposition if you are on a fast, low-latency connection into the SANReN network. Otherwise, get the VNC session going first by following these instructions.

Double X-forwarding

From an X-windows capable workstation (in other words, from a Linux terminal command prompt, or an emulator on Windows that includes an X-server, such as MobaXterm), log in to Lengau:

 ssh -X jblogs@lengau.chpc.ac.za 

Once logged in, do a second X-forwarding login to your assigned compute node:

 ssh -X cnode0123 

X-forwarding from the VNC session

A normal broadband connection will probably be too slow to use the double X-forwarding method. In this case, first get the VNC desktop going, as described above, and open a terminal. From this terminal, log in to your assigned compute node:

 ssh -X cnode0123 
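
To confirm that the X-forwarding works, check that the DISPLAY environment variable has been set on the compute node; if it comes back empty, graphics will not work:

 echo $DISPLAY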

Licensing

The compute nodes do not have direct internet access, so it is necessary to tunnel through chpclic1 to establish contact with the license server. In this case we will use the Power on Demand license servers of the company formerly known as CD-Adapco. Use your own Lengau login ID, unless you happen to be Joe Blogs.

ssh -f jblogs@chpclic1 -L 1999:flex.cd-adapco.com:1999 -N
ssh -f jblogs@chpclic1 -L 2099:flex.cd-adapco.com:2099 -N
export LM_LICENSE_FILE=1999@localhost
export CDLMD_LICENSE_FILE=1999@localhost

Set up the path

Obviously set the path to the version that you are using:

export PATH=/apps/chpc/compmech/CFD/CD-adapco/12.04.010-R8/STAR-CCM+12.04.010-R8/star/bin:$PATH

Run starccm+

You can now simply start the program in the usual way, with the command

 starccm+ 

Do the usual things you need to do in order to access your PoD key. Thanks to the magic of Mesa, you have access to the GUI and graphics capability of the interface.
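
If you prefer to supply the PoD key on the command line rather than through the GUI dialogue, the same options used in the batch script above apply (the key itself is a placeholder):

 starccm+ -power -podkey <22-character PoD key> -licpath 1999@localhost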

Kill the ssh-tunnels

When the process has been completed, find the process IDs of the two ssh-tunnels:

 ps ax | grep ssh 

and kill them with a command like

 kill -9 12345 12346 

assuming that 12345 and 12346 are the process IDs of the two tunnels.

Alternatively, use the following command to find a tunnel's process ID and kill it; issue it once per tunnel:

kill -9 `ps ux | grep "ssh -f" | grep -v grep | awk '{print $2}' | head -1`