File name: USING CHPC SUN FUSION
File release date: 29 January 2013
File version: 6.00
Author: Technical Team
Email: helpdesk@chpc.ac.za
Tel: +27 21 658 2740/58/60
The SUN Fusion System Linux cluster, named Tsessebe, was made possible by the Department of Science and Technology. Tsessebe entered formal production in September 2009 and supports high-end computational science research throughout South Africa.
The Tsessebe system comprises several architectures: Harpertown, Nehalem, Westmere and SPARC (M9000-64). It has 672 compute nodes giving a total of 6720 cores, excluding the SPARC M9000, which has 256 cores with 2 threads per core for a total of 512 threads. All these architectures are interconnected using InfiniBand and share 480 TB of storage running the Lustre filesystem. The cluster has a measured LinPack performance of 61.5 TFlops. An overview of the Tsessebe system is given in the table below.
|Cluster|Nodes|CPU|Speed|ppn*|Performance|RAM|OS|cnodes|
|HARPERTOWN|48|2x Xeon|3.0 GHz|8|3 TFlops|16 GB|Redhat 5.8|1|
|NEHALEM|288|2x Xeon|2.93 GHz|8|24 TFlops|12 GB|Centos 5.8|2,3,4|
|WESTMERE|96|2x Xeon|2.93 GHz|12|13.5 TFlops|24 GB|Centos 5.8|5|
|DELL WESTMERE|240|2x Xeon|2.93 GHz|12|37.1 TFlops|36 GB|Centos 5.8|6,7,8,9|
|SUN SPARC M9000|1|Sparcv9|2.5 GHz|512|2 TFlops|2 TB|Solaris 10|m9000|
|VISUALISATION SERVER|1|4x AMD Opteron|2.3 GHz|16|-|64 GB|Redhat 5.1|viz01|
*ppn refers to the number of cores per node
Tsessebe's shared storage is hosted on 10 Sun x4540 disk servers, each containing 48 integrated SATA drives, giving a total of 480 TB of shared storage with Lustre as the filesystem.
Tsessebe has several different filesystems with distinct storage characteristics. There are predefined directories in these filesystems for you to store your data. Since these filesystems are shared with others, they are managed either by a quota limit or a purge policy.
The HOME directory sits on an NFS filesystem and the SCRATCH directory on Lustre. HOME has a 5 GB quota and is backed up. SCRATCH is periodically purged and not backed up, and has a very large 450 TB quota. All filesystems also impose an inode limit, which restricts the number of files allowed.
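To check your usage against these limits, a minimal sketch (assuming the standard quota and Lustre lfs utilities are available, and that your scratch directory resides on the Lustre filesystem):
login01$ quota -s                       # human-readable report of your NFS home quota
login01$ lfs quota -u $USER ~/scratch   # Lustre usage and limits for your user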
Your HOME directory is /export/home/username (where username is replaced by your login name).
Use cd /export/home/username/scratch to change to SCRATCH.
For storage or quota queries, contact helpdesk@chpc.ac.za.
NOTE: CHPC staff may delete files from scratch if the scratch filesystem becomes full, even if files are less than 90 days old. Users are advised to remove their data after a run completes successfully. A full filesystem inhibits use of the filesystem for everyone. The use of programs or scripts to actively circumvent the file purge policy will not be tolerated.
To determine the amount of disk space used in a filesystem, cd to the directory of interest and execute the following command:
df -h .
It is important to include the dot which represents the current directory. Without the dot, all filesystems are reported.
In the example command output below, the filesystem name appears on the left, and the used and available space (the -h, or --human-readable, option prints sizes in the more useful units of G for gigabytes, etc.) appear in the middle columns, followed by the percentage used and the mount point:
login01$ df -h .
Filesystem               Size  Used Avail Use% Mounted on
172.17.203.15:/mnt/home  1.9T  1.4T  373G  79% /export/home
To determine the amount of space occupied in a user-owned directory, cd to the directory and execute the following command:
du -h .
To find the largest subdirectory in the current directory, you can use a command-line pipe like this:
du -s * | sort -n
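Illustrative output (the names and sizes below are examples only); sizes are reported in 1K blocks, and the largest entry appears last:
login01$ du -s * | sort -n
16      notes.txt
2048    scripts
5242880 simulation_output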
Many of the Linux commands and tools, such as the compilers, debuggers, profilers and editors, look in the environment for variables that specify information they may need to access. To see the variables in your environment, execute the env (or printenv) command. The variables are listed as keyword/value pairs separated by an equals sign (=), as illustrated below by the HOME and PATH variables.
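An illustrative excerpt of the output (the values will differ for your account):
login01$ env
HOME=/export/home/username
PATH=/usr/local/bin:/usr/bin:/bin
...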
Notice that the PATH environment variable consists of a colon-separated (:) list of directories. Variables are "carried over" to the environment of shell scripts and new shell invocations provided they are exported with the export command:
YEAR=1970
export YEAR
Normal shell variables (created with just an assignment) are useful only in the present shell.
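A small demonstration of the difference (FOO is a hypothetical variable name; the single quotes stop the current shell from expanding $FOO itself):
login01$ FOO=bar
login01$ bash -c 'echo $FOO'    # prints an empty line: FOO is a shell variable only
login01$ export FOO
login01$ bash -c 'echo $FOO'    # prints "bar": FOO is now in the environment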
Only environment variables are displayed by the env (or printenv) command. Execute set to see the (normal) shell variables as well.
To access the value (i.e. contents) of an environment variable, prefix its name with a dollar symbol ($). For example, echo $HOME will simply display the value of the HOME variable:
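login01$ echo $HOME
/export/home/username
(The output is illustrative; username will be your own login name.)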
A collection of program libraries and software packages are supported on Tsessebe. These software products for the supercomputing environment have been selected on the basis of quality, history of performance, system compatibility, and benefit to the scientific community. An organized and customizable listing of all packages (name, version, etc.) as well as execution/loading information is available in the Software and Tools Table. The same information for individual packages can be found in the module files on the machine through the execution of the module help command (module help <module_name>).
CHPC continually updates application packages, compilers, communications libraries, tools, and math libraries. To facilitate this task and to provide a uniform mechanism for accessing different revisions of software, CHPC uses the modules utility, which manipulates your environment.
At login, modules commands set up a basic environment for the default compilers, tools, and libraries: for example, the $PATH, $MANPATH and $LIBPATH environment variables, directory locations and license paths. There is therefore no need for you to set or update them when updates are made to system and application software. Each of the major CHPC applications has a modulefile that sets, unsets, appends to, or prepends to environment variables such as $PATH, $LD_LIBRARY_PATH, $INCLUDE and $MANPATH for the specific application. Each modulefile also sets functions or aliases for use with the application. You need only invoke a single command to configure the application/programming environment properly. The general format of this command is:
login01$ module load <module_name>
where <module_name> is the name of the module to load. You can add to or change the module path in your .profile or by setting the environment variable directly, e.g. export MODULEPATH=/opt/gridware/bioinformatics/modules:$MODULEPATH. Most of the package directories are in /opt/gridware/application, /opt/gridware/compilers and /opt/gridware/libraries, and are named after the package. In each package directory there are subdirectories that contain the specific versions of the package. As an example, the Intel 12.1 package requires several environment variables that point to its home, libraries, include files, and documentation. These can be set up by loading the intel2012 module:
login01$ module load intel2012
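You can then verify that the module has configured your environment, for example (icc is the Intel C compiler; exact paths depend on the installation):
login01$ which icc                 # should now resolve to the Intel compiler's location
login01$ echo $LD_LIBRARY_PATH     # now includes the Intel library directories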
To see a list of available modules, a synopsis of a particular modulefile's operations (in this case, intel2012), and a list of currently loaded modules, execute the following commands:
login01$ module avail
login01$ module help intel2012
login01$ module list
During upgrades, new modulefiles are created to reflect the changes made to the environment variables. CHPC will always announce upgrades and module changes in advance.
Compilers and libraries that CHPC supports are available from the folders listed in the table below.
To use our libraries, compilers and some applications you will have to load them as modules.
Here is an example of how to use Amber at CHPC:
module list         ### shows the modules currently loaded in your environment
module avail        ### shows the modules available on the cluster
module add intel    ### loads the Intel 2011 compiler module
module add amber    ### loads the Amber 12 (with AmberTools) module
You can also use the Intel compilers, or the Sun Studio and GNU compilers, by adding the corresponding modules.
Each package in the table below was compiled with either the GNU compiler (version 4.x) or the Intel compiler (version 12.1), as noted in the last column:
|Package|Version|Location|Compiler|
|ImageMagick|6.7.9|/opt/gridware/compilers|with intel 2012|
|NCO|4.2.1|/opt/gridware/compilers|with gcc-4.5.1, intel 11 and openmpi-1.4.2-intel|
|netcdf-intel|4.1.2|/opt/gridware/libraries|with intel 2012|
|Mvapich2 (r5668)|1.8|/opt/gridware/libraries|with intel 2012|
|HDF5|1.8.9|/opt/gridware/compilers|with intel 11.1|
|OpenMPI|1.6.1|/opt/gridware/compilers/OpenMPI|with intel 2012|
|FFTW|3.3.2|/opt/gridware/libraries|with intel 2012, using mvapich2 (r5668) MPI lib|
|FFTW|2.1.5|/opt/gridware/libraries|with intel 2012, using mvapich2 (r5668) MPI lib|
Applications like Gaussian and Amber have their own modules.
Note that CHPC only supports applications that are installed in /opt/gridware. We discourage users from installing applications in their home directories. Users can install their own applications in /opt/gridware/users/, although these will also not be supported by CHPC.
To connect to the Sun systems (e.g. the M9000 or the Nehalem cluster), ssh to sun.chpc.ac.za and log in using the username and password sent to you by the CHPC.
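For example (replace username with the login name you were given):
ssh username@sun.chpc.ac.za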
If you wish to use the IBM system (e.g. the Blue Gene/P), use ssh to connect to its login node instead.
CHPC offers 450 TB of shared temporary Lustre storage for users to store their output files. This is temporary storage: users are advised to remove their data after each successful run. Data will be purged when the storage usage approaches 80%. Read the CHPC Storage Policy: http://www.chpc.ac.za/use-policy
The home directory, /export/home/, is backed up daily. Only files that need to be backed up should be stored here, up to a maximum of 5 GB of data per user. Applications installed in this directory will not be supported.
If you have any requests for storage, please send them to helpdesk@chpc.ac.za.
The Bash shell executes the commands in the ~/.bashrc file every time a new bash instance is launched. This file is often used to set up environment variables to better suit a specific user's needs. For example, if your codes are going to use python 2.6.5, openmpi-1.4.2 and gcc-4.3, you can save yourself some typing by adding the following lines to your ~/.bashrc file:
# Python 2.6.5
export PATH=/opt/gridware/python2.6.5/bin:$PATH
export LD_LIBRARY_PATH=/opt/gridware/python2.6.5/lib/:$LD_LIBRARY_PATH

# gridware general libraries
export LD_LIBRARY_PATH=/opt/gridware/lib/:$LD_LIBRARY_PATH

# openMPI 1.4.2
export PATH=/opt/gridware/openmpi-1.4.2-gnu/bin:$PATH
export LD_LIBRARY_PATH=/opt/gridware/openmpi-1.4.2-gnu/lib:$LD_LIBRARY_PATH

# Your home directory
export LD_LIBRARY_PATH=~/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=~/lib64:$LD_LIBRARY_PATH
export PATH=~/bin:$PATH

# GCC 4.3.4
export PATH=/opt/gridware/gcc-4.3/bin:$PATH
export LD_LIBRARY_PATH=/opt/gridware/gcc-4.3/lib/:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/gridware/gcc-4.3/lib64/:$LD_LIBRARY_PATH
The next time you log in, the Bash environment variables will be updated. In the case of the above example, instead of having to type /opt/gridware/python2.6.5/bin/python, the user can simply type python:
$ python --version
Python 2.6.5
NB: After the August 2012 system upgrades, .bashrc is not executed for interactive shells; only .profile is. As such, the following needs to be added to the end of your .profile file.
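A minimal sketch that sources ~/.bashrc if it exists (any equivalent line will do):
# run the commands in ~/.bashrc for login shells too
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi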