Singularity in a Nutshell

Singularity is a container solution created for scientific and application-driven workloads. It gives users full control of their environment for developing software, libraries, and scientific workflows. It also allows users already invested in Docker containers to import their images without needing to install and configure Docker on HPC clusters. These benefits can be summarised in four points:

  • Mobility of Compute: Singularity uses a distributable image format that encapsulates the entire container and software stack in a single file. This file can be copied, shared, and archived, and standard UNIX file permissions apply.
  • Reproducibility: A Singularity image can be archived and locked down to ensure that the code within the container remains the same when it is used later.
  • User Freedom: Singularity can give the user the freedom they need to install the applications, versions, and dependencies for their workflows without impacting the system in any way.
  • Support on Existing Traditional HPC: Singularity natively supports InfiniBand, Lustre, and works seamlessly with all resource managers (e.g. SLURM, Torque, SGE, etc.) because it works like running any other command on the system. It also has built-in support for MPI and for containers that need to leverage GPU resources.

The Singularity workflow can be categorised into two main environments:

  • Build environment: for testing and building containers (the user needs sudo privileges). This would typically be the user's workstation or a virtual machine.
  • Production environment: where users run containers, e.g. the Lengau cluster.

For more details on how to use Singularity please read the user guide: https://www.sylabs.io/guides/2.6/user-guide/index.html
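As a quick orientation, the full workflow can be sketched as a command sequence. All file names, paths, and the username below are placeholders, not fixed values:

# 1. Build environment (workstation/VM where you have sudo):
sudo singularity build my_app.simg my_app.def

# 2. Transfer the image to the production environment:
rsync -azP my_app.simg user@scp.chpc.ac.za:/path/to/containers/

# 3. Production environment (Lengau, inside a job obtained via qsub):
module load chpc/singularity
singularity exec /path/to/containers/my_app.simg my_app_command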

Using Singularity on Lengau

The CHPC Lengau cluster provides only the production (container execution) environment of the Singularity workflow. Therefore, users may only run already-built container images on this platform.

NB: Please note that the compute nodes on Lengau do not have access to the internet, which means that users will not be able to use the singularity pull command there. However, users can use the scp.chpc.ac.za node to transfer images to and from the platform.
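For example, to fetch an image from Docker Hub, log in to scp.chpc.ac.za (which does have internet access) and use singularity pull. The image chosen here is only an illustration:

ssh $user@scp.chpc.ac.za
singularity pull docker://ubuntu:18.04   # writes the image (e.g. ubuntu-18.04.simg) to the current directory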

Usage Overview

Using Singularity on Lengau can be summarised as a two-stage process:

  • Transferring images to the Lengau cluster:
    • Users can use cp, scp, rsync, or SFTP to copy their images to the cluster via the node scp.chpc.ac.za.
    • They can also use the singularity pull command on the same node, as in the example above.
  • Container Execution
    • Users can only interact with the compute nodes through the scheduler, using the qsub command.
    • Users also need to load the chpc/singularity module to enable the Singularity commands (exec, run and shell), as illustrated in the sketch after this list.
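As a quick illustration of the three execution commands, once inside a job (the image name here is a placeholder):

module load chpc/singularity
singularity exec my_app.simg cat /etc/os-release   # run a single command inside the container
singularity run my_app.simg                        # run the image's %runscript
singularity shell my_app.simg                      # open an interactive shell inside the container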

Example: MPI Hello World

Building an image

Below is an example of a recipe file for building an Ubuntu 18.04 image. Inside this image, we install MPICH-3.2.1 and compile the code for an mpi_hello_world program.

BootStrap: docker
From: ubuntu:18.04

%post
    apt-get -y update && apt-get -y upgrade
    apt-get -y install wget vim build-essential gfortran unzip lsb-core
    echo "Compiling mpich-3.2.1"
    cd /tmp
    if [ ! -f /tmp/mpich-3.2.1.tar.gz ]; then
        wget http://www.mpich.org/static/downloads/3.2.1/mpich-3.2.1.tar.gz
    fi
    tar -xf mpich-3.2.1.tar.gz
    cd mpich-3.2.1
    ./configure --prefix=/usr/local
    make -j 4
    make install
    # Install the MPICH example binary
    mpicc examples/hellow.c -o /usr/local/bin/hellow
    cd /tmp
    rm -rf /tmp/mpich-3.2.1
    echo "Installing another Hello World"
    cd /tmp
    if [ ! -f /tmp/gh-pages.zip ]; then
        wget https://github.com/wesleykendall/mpitutorial/archive/gh-pages.zip
    fi
    unzip gh-pages.zip
    cd mpitutorial-gh-pages/tutorials/mpi-hello-world/code/
    make clean
    make
    cp mpi_hello_world /usr/local/bin/
    cd /tmp/
    rm -rf /tmp/mpitutorial-gh-pages

%runscript
    echo "This is a container image running"
    lsb_release -a
    # This runs the mpi_hello_world code
    echo "Running mpi_hello_world code"
    exec mpi_hello_world

Using the definition file above, we can build a Singularity image on a system where Singularity is installed and where we have sudo privileges (e.g. a personal workstation or VM). Use the command below to build the image:

sudo singularity build $image_name $definition_file

where $image_name is the name of the image you are building and $definition_file is the name of the file containing the recipe above.
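For example, if the recipe above is saved as ubuntu-mpich.def (both file names here are chosen for illustration):

sudo singularity build ubuntu-mpich-3.2.1.simg ubuntu-mpich.def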

Image transfer to Lengau

Once we've built our image, we can transfer it to the Lengau cluster via the scp.chpc.ac.za node:

rsync -azP $image_name $user@scp.chpc.ac.za:$DIR_PATH

where $DIR_PATH is the directory on the cluster where you want to store the image.
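For example, with placeholder values filled in (the username and target directory are illustrative, not fixed paths):

rsync -azP ubuntu-mpich-3.2.1.simg jdoe@scp.chpc.ac.za:/mnt/lustre/users/jdoe/containers/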

Running Images on Lengau

Log in to the cluster

ssh $user@lengau.chpc.ac.za

Submit your job using the qsub command and the resources you need

qsub -I -P $PROJECT -q $QUEUE -l select=2:ncpus=24:mpiprocs=4:nodetype=haswell_reg

where $PROJECT is the project name to draw your allocation from and $QUEUE is the name of the queue you want to run on.

Then load the Singularity and MPI modules available on the cluster

module load chpc/singularity chpc/mvapich....
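The exact module names available on the cluster can be listed with:

module avail chpc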

After loading the modules, we are able to execute code from the container using the Singularity commands. Below is output showing that the container is running Ubuntu 18.04 and that it executed a single process of the mpi_hello_world code.

 $ singularity run ubuntu-mpich-3.2.1.simg-v4
 This is a container image running
 LSB Version: core-9.20170808ubuntu1-noarch:security-9.20170808ubuntu1-noarch
 Distributor ID:      Ubuntu
 Description: Ubuntu 18.04.2 LTS
 Release:     18.04
 Codename:    bionic
 Running mpi_hello_world code
 Hello world from process 0 of 1
 

The following shows how to launch the mpi_hello_world code from the container across the cluster using mpirun.

$ mpirun -np 24 singularity exec mpich-3.2.1.simg-v2 mpi_hello_world
  Hello world from processor cnode0241, rank 4 out of 24 processors
  Hello world from processor cnode0241, rank 15 out of 24 processors
  Hello world from processor cnode0241, rank 5 out of 24 processors
  Hello world from processor cnode0241, rank 6 out of 24 processors
  Hello world from processor cnode0241, rank 20 out of 24 processors
  Hello world from processor cnode0241, rank 21 out of 24 processors
  Hello world from processor cnode0241, rank 22 out of 24 processors
  Hello world from processor cnode0241, rank 13 out of 24 processors
  Hello world from processor cnode0241, rank 7 out of 24 processors
  Hello world from processor cnode0241, rank 14 out of 24 processors
  Hello world from processor cnode0241, rank 12 out of 24 processors
  Hello world from processor cnode0241, rank 23 out of 24 processors
  Hello world from processor cnode0215, rank 9 out of 24 processors
  Hello world from processor cnode0215, rank 19 out of 24 processors
  Hello world from processor cnode0215, rank 8 out of 24 processors
  Hello world from processor cnode0215, rank 0 out of 24 processors
  Hello world from processor cnode0215, rank 16 out of 24 processors
  Hello world from processor cnode0215, rank 3 out of 24 processors
  Hello world from processor cnode0215, rank 11 out of 24 processors
  Hello world from processor cnode0215, rank 18 out of 24 processors
  Hello world from processor cnode0215, rank 2 out of 24 processors
  Hello world from processor cnode0215, rank 10 out of 24 processors
  Hello world from processor cnode0215, rank 1 out of 24 processors
  Hello world from processor cnode0215, rank 17 out of 24 processors
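
For production runs, the same steps can be wrapped in a PBS batch script instead of an interactive session. Below is a minimal sketch; the project name, queue, walltime, and image path are assumptions to be replaced with your own values:

#!/bin/bash
#PBS -P MYPROJ
#PBS -q normal
#PBS -l select=2:ncpus=24:mpiprocs=12:nodetype=haswell_reg
#PBS -l walltime=00:10:00

# Load Singularity and the matching MPI module (see module avail chpc)
module load chpc/singularity

cd $PBS_O_WORKDIR
mpirun -np 24 singularity exec mpich-3.2.1.simg-v2 mpi_hello_world

Save the script (e.g. as run_hello.pbs) and submit it with qsub run_hello.pbs.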