

README

File name: USING CHPC SUN FUSION

File release date: 29 January 2013

File version: 6.00

Author: Technical Team

Email: helpdesk@chpc.ac.za

Website: www.chpc.ac.za

Contact: 021 658 2740/58/60

Compilers, libraries and modules

Compilers and libraries that CHPC supports are available in the following directories:

  • compilers: /opt/gridware/compilers
  • libraries: /opt/gridware/libraries

To use our libraries, compilers and some applications you will have to load them as modules.

Here is an example of how to use Amber at CHPC:

module list           ### show the modules currently loaded in your environment
module avail          ### show the modules available on the cluster
module add intel      ### load the Intel 2011 compiler module
module add amber      ### load the Amber 2012 (with AmberTools) module

You can also use Intel compilers by adding the following modules:

module add intel          ### Intel 11.1
module add inteltools     ### Intel 12.0
module add intel2012      ### Intel 12.1

or the Sun Studio and GNU compilers:

module add sunstudio                      ### XXX comment?
module add gcc/version                    ### version = 4.6.3 or 4.7.2
module add clustertools                   ### XXX comment?
module add openmpi/openmpi-1.6.1-gnu      ### compiled with GNU compiler version 4.XXX
module add openmpi/openmpi-1.6.1-intel    ### compiled with Intel compiler version 12.1
Code Name    | Version     | Directory                       | Notes
gcc          | 4.5.1       | /opt/gridware/compilers         | with GMP
zlib         | 1.2.7       | /opt/gridware/compilers         | with gcc
ImageMagick  | 6.7.9       | /opt/gridware/compilers         | with Intel 2012
NCO          | 4.2.1       | /opt/gridware/compilers         | with gcc 4.5.1, Intel 11 and openmpi-1.4.2-intel
netcdf-gnu   | 4.1.2       | /opt/gridware/libraries         | with gcc
netcdf-intel | 4.1.2       | /opt/gridware/libraries         | with Intel 2012
mvapich2     | 1.8 (r5668) | /opt/gridware/libraries         | with Intel 2012
mvapich      | 2.1.8       | /opt/gridware/libraries         | with gcc
HDF5         | 1.8.9       | /opt/gridware/compilers         | with Intel 11.1
OpenMPI      | 1.6.1       | /opt/gridware/compilers/OpenMPI | with Intel 2012
OpenMPI      | 1.6.1       | /opt/gridware/compilers/OpenMPI | with gcc
FFTW         | 3.3.2       | /opt/gridware/libraries         | with Intel 2012, using mvapich2 (r5668) MPI lib
FFTW         | 2.1.5       | /opt/gridware/libraries         | with Intel 2012, using mvapich2 (r5668) MPI lib
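
Putting the modules above together, here is a minimal sketch of compiling and running an MPI code with the GNU toolchain. The gcc/4.6.3 module name is inferred from the "gcc/version" entry above, and hello_mpi.c is a hypothetical source file; mpicc and mpirun are the standard wrappers shipped with OpenMPI.

module add gcc/4.6.3                    ### assumed module name, based on "gcc/version" above
module add openmpi/openmpi-1.6.1-gnu    ### OpenMPI built with the GNU compiler
mpicc -O2 hello_mpi.c -o hello_mpi      ### hello_mpi.c is a hypothetical MPI source file
mpirun -np 4 ./hello_mpi                ### quick 4-process test; use the job script below for real runs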

Applications such as Gaussian and Amber have their own modules.

Note that CHPC only supports applications that are installed in /opt/gridware/applications.

We discourage users from installing applications in their home directories. Users may instead install their own applications in /opt/gridware/users/, although these installations are also not supported by CHPC; a sketch of such an install follows below.
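
As a hedged example only (the exact layout and permissions under /opt/gridware/users/ are not documented here, and the per-user subdirectory and application name are assumptions), a typical autotools-style build into your own area could look like this:

mkdir -p /opt/gridware/users/$USER/myapp                  ### the $USER subdirectory and "myapp" are assumed names
tar -xzf myapp-1.0.tar.gz && cd myapp-1.0                 ### hypothetical source tarball
./configure --prefix=/opt/gridware/users/$USER/myapp      ### install into your own (unsupported) area
make && make install
export PATH=/opt/gridware/users/$USER/myapp/bin:$PATH     ### make the install visible in your shell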

Logging in

To connect to the Sun systems (e.g. the M9000 or the Nehalem cluster), ssh to sun.chpc.ac.za and log in using the username and password sent to you by the CHPC.

If you wish to use the IBM system (e.g. the Blue Gene/P), ssh to ssh.chpc.ac.za instead.
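
For example (replace username with the login name sent to you):

ssh username@sun.chpc.ac.za      ### Sun systems: M9000, Nehalem cluster
ssh username@ssh.chpc.ac.za      ### IBM Blue Gene/P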


SUN FUSION INFRASTRUCTURE

Cluster              | Nodes | CPU            | Speed    | ppn | RAM  | OS
HARPERTOWN           | 48    | 2x Xeon        | 3.0 GHz  | 8   | 16GB | Redhat 5.8
NEHALEM              | 288   | 2x Xeon        | 2.93 GHz | 8   | 12GB | CentOS 5.8
WESTMERE             | 96    | 2x Xeon        | 2.93 GHz | 12  | 24GB | CentOS 5.8
DELL WESTMERE        | 240   | 2x Xeon        | 2.93 GHz | 12  | 36GB | CentOS 5.8
SUN SPARC M9000      | 1     | Sparcv9        | 2.5 GHz  | 512 | 2TB  | Solaris 10
VISUALISATION SERVER | 1     | 4x AMD Opteron | 2.3 GHz  | 16  | 64GB | Redhat 5.1

*ppn refers to the number of cores per node.

XXX Interconnect???

XXX Other Info???

SUBMITTING A JOB USING MOAB

For more information on how to use Moab and its flags, visit http://www.clusterresources.com/

test.job example:

Create a plain text file (e.g. with nano or vi), paste in the job script below, and save it as test.job.

#!/bin/bash
#MSUB -l nodes=1:ppn=12                              ### 1 node, 12 cores per node
#MSUB -l walltime=2:00:00                            ### maximum run time of 2 hours
#MSUB -l feature=dell|westmere                       ### request the Dell or Westmere nodes
#MSUB -m be                                          ### send mail at job begin and end
#MSUB -V                                             ### export your environment to the job
#MSUB -o /lustre/SCRATCH2/users/username/file.out    ### standard output file
#MSUB -e /lustre/SCRATCH2/users/username/file.err    ### standard error file
#MSUB -d /lustre/SCRATCH2/users/username/            ### working directory for the job
#MSUB -mb
 

##### Running commands

nproc=`cat $PBS_NODEFILE | wc -l`                                 ### count the cores allocated to the job
mpirun -np $nproc -machinefile $PBS_NODEFILE <executable> <input>

Submit the job:

msub test.job

You can view your job using the following:

checkjob -v -v jobid        ### replace jobid with the ID returned by msub
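
Besides checkjob, a couple of other standard Moab commands may be useful (a hedged note; consult the Moab documentation linked above for the exact versions installed on the cluster):

showq -u username        ### list your queued and running jobs
canceljob jobid          ### cancel a job that is no longer needed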

STORAGE

CHPC offers 450TB of shared Lustre storage for users to store their output files. This is temporary storage; users are advised to remove their data after each successful run. Data will be purged when the storage approaches 80% of capacity. Read the CHPC Storage Policy: http://www.chpc.ac.za/use-policy

The home directory /export/home/ is backed up daily. Only files that need to be backed up should be stored here, to a maximum of 5GB per user. Applications installed in this directory will not be supported.

Any requests for additional storage should be sent to helpdesk@chpc.ac.za
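
As a hedged example of the housekeeping described above (the paths follow the /lustre/SCRATCH2/users/username pattern used earlier; the run directory name "myrun" is an assumption):

cd /lustre/SCRATCH2/users/username               ### your scratch area; username is a placeholder
du -sh myrun/                                    ### check how much space the run used; "myrun" is an assumed name
cp -r myrun/results /export/home/username/       ### keep only what must be backed up (stay under 5GB)
rm -rf myrun/                                    ### clean up scratch after a successful run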

Code Name    | Version | Module | Directory                       | Notes
gcc          | 4.5.1   | gcc    | /opt/gridware/compilers         | with GMP
zlib         | 1.2.7   | ???    | /opt/gridware/compilers         | with gcc
ImageMagick  | 6.7.9   | ???    | /opt/gridware/compilers         | with Intel 2012
NCO          | 4.2.1   | ???    | /opt/gridware/compilers         | with gcc 4.5.1, Intel 11 and openmpi-1.4.2-intel
netcdf-gnu   | 4.1.2   | ???    | /opt/gridware/libraries         | with gcc
netcdf-intel | 4.1.2   | ???    | /opt/gridware/libraries         | with Intel 2012
mvapich2     | 1.8     | ???    | /opt/gridware/libraries         | with Intel 2012
mvapich      | 2.1.8   | ???    | /opt/gridware/libraries         | with gcc
HDF5         | 1.8.9   | ???    | /opt/gridware/compilers         | with Intel 11.1
OpenMPI      | 1.6.1   | ???    | /opt/gridware/compilers/OpenMPI | with Intel 2012
OpenMPI      | 1.6.1   | ???    | /opt/gridware/compilers/OpenMPI | with gcc
FFTW         | 3.3.2   | ???    | /opt/gridware/libraries         | with Intel 2012, using mvapich2 (r5668) MPI lib
FFTW         | 2.1.5   | ???    | /opt/gridware/libraries         | with Intel 2012, using mvapich2 (r5668) MPI lib

XXX WHAT ARE THE MODULE NAMES FOR THESE???

XXX WHERE IS BLAS, ATLAS, LAPACK ???
