====== ESI CFD-Ace ======
  
The CHPC has an installation of the CFD-Ace suite of software, including Ace, Geom, View and VisCART, but no license.  Approved academic users can use the software on the CHPC cluster, but must use their own licenses.  Version 2015.0 is currently available on the system.  If you need an older version, please contact CHPC support through the CHPC website.
  
  
===== Installation =====
  
The ESI-CFD suite is installed under '''/apps/chpc/compmech/CFD/ESI'''
  
  
===== Licensing =====
  
Because the cluster compute nodes do not have direct access to the internet, it is necessary to use ssh-tunneling through another node to contact the user's license server, which will need to have the license manager and vendor daemon ports open.  Tunneling is done via a separate node, chpcslic1.  Please apply to [[http://www.chpc.ac.za/index.php/support-resources/log-a-support-query|CHPC Support]] so that the CHPC firewall can be configured to allow the necessary traffic.  You will need to take similar firewall configuration measures at your end.
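As a minimal sketch, reusing the placeholder names that appear in the job script below (substitute your own username, license server address and port numbers; the '''nc''' check assumes netcat is available), the tunnels can be created and tested as follows:

<code>
# Forward the lmgrd and vendor daemon ports of your license server through chpcslic1
ssh -f -N username@chpcslic1 -L 1999:licenseserver.ac.za:1999
ssh -f -N username@chpcslic1 -L 1998:licenseserver.ac.za:1998
# Quick sanity check: the forwarded lmgrd port should now answer on localhost
nc -z localhost 1999 && echo "License manager port is reachable through the tunnel"
</code>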
  
  
===== Running a CFD-Ace Job =====
  
On the CHPC clusters, all simulations are submitted as jobs to the PBS Pro job scheduler, which will assign your job to the appropriate queue and machine.  Unlike other software packages, CFD-Ace is not compatible with the parallel Lustre scratch file system.  Use your home directory instead.
  
Example job script:
<code>
#!/bin/bash
##### The following line will request 10 (virtual) nodes, each with 24 cores running 24 mpi processes,
##### a total of 240-way parallel.
#PBS -l select=10:ncpus=24:mpiprocs=24:mem=4GB
#PBS -q normal
##### Supply YOUR resource programme code in the next line
#PBS -P MECH0000
#PBS -l walltime=1:00:00
#### CFD-Ace will not work on scratch, work in home directory:
#PBS -o /home/username/acetesting/ace.out
#PBS -e /home/username/acetesting/ace.err
##### The following two lines will send the user an e-mail when the job aborts, begins or ends.
#PBS -m abe
##### Set up path.  To ensure that the compute nodes also have access to this path,
#####  also add these lines to your .bashrc file
export ESI_HOME=/apps/chpc/compmech/CFD/ESI
export PATH=$ESI_HOME/2015.0/UTILS/bin:$PATH
export LD_LIBRARY_PATH=$ESI_HOME/2015.0/UTILS/lib:$LD_LIBRARY_PATH
##### Set up ssh-tunnels to your license server.  Use the correct port numbers and server URL.
#### lmgrd daemon port
ssh -f username@chpcslic1 -L 1999:licenseserver.ac.za:1999 -N
#### vendor daemon port
ssh -f username@chpcslic1 -L 1998:licenseserver.ac.za:1998 -N
#### Tell solver where to look for the license.
####  localhost is correct here, it follows from the ssh-tunneling
#### Explicitly set working directory and change to that.
#### CFD-Ace will not work on scratch, work in home directory.
export PBS_JOBDIR=/home/username/acetesting
cd $PBS_JOBDIR
nproc=`cat $PBS_NODEFILE | wc -l`
</code>
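Assuming the completed script has been saved as '''ace.qsub''' (an arbitrary name used here for illustration), it is submitted and monitored with the standard PBS Pro commands:

<code>
# Submit the job to the scheduler
qsub ace.qsub
# Check the status of your jobs (Q = queued, R = running)
qstat -u username
# Remove a job from the queue if necessary
qdel <job-id>
</code>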
===== Monitoring a CFD-Ace Job =====
  
There are different ways of monitoring your solution.  Please refer to the page on [[howto:remote_viz|Remote Visualization]] for instructions on how to get a VNC session on the visualization server.  Add the path to the VirtualGL installation with the environment setting: '''export PATH=$PATH:/opt/VirtualGL/bin'''.  The vglrun wrapper is required to ensure that OpenGL works in the VNC session.  Alternatively, as per the remote viz instructions, simply start your X-session with the vglrun command, after which it will not be necessary to use it again for OpenGL applications.

In order to access your license, set up the environment (with the bash '''export''' statements) and create the same SSH-tunnels as per your job submission script.  The command '''vglrun CFD-ACE-GUI''' should allow you to open the GUI.  Once you have the GUI open, you can open the .RSL and .MON files to monitor progress.

Alternatively, you can use gnuplot to plot the contents of these files.  In the absence of a graphics-capable environment, it is still practical to plot these files with gnuplot, using old-fashioned but very funky ASCII graphics.  In the gnuplot interface, simply give the command '''set terminal dumb 120 40''', where the numbers specify the size of the character terminal.  Use vglrun to open CFD-VIEW for graphical post-processing and image creation.
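As an illustration of the gnuplot approach, the following sketch assumes a residual file named '''model.RSL''' (a hypothetical name) in which the first column is the iteration number and the second column a residual; check the column layout of your own .RSL file before plotting:

<code>
# ASCII residual plot in a plain terminal, 120 columns by 40 rows
gnuplot << 'EOF'
set terminal dumb 120 40
set logscale y
plot 'model.RSL' using 1:2 with lines title 'residual'
EOF
</code>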
  
  