
Running Paraview

Video tutorial - Parallel ParaView in client / server mode

Watch this video tutorial

Alternative method - ParaView using GPUs in the cluster

Introduction

Paraview is an extremely capable open-source programme for visualising simulation results. It supports a wide variety of file formats and is used in many different scientific and engineering disciplines. The code is designed to make effective use of parallel processing in order to handle extremely large datasets. Several recent versions are installed on Lengau, and they can be used in a number of different ways. The recommended method involves running a Paraview server in parallel on one or more compute nodes and using the Paraview graphics client on your own workstation to connect to that server by way of an ssh tunnel. The method is described below.

There are several Paraview installations:

  1. A standard binary distribution Paraview-4.3.1 in /apps/chpc/compmech/ParaView-4.3.1-Linux-64bit/bin
  2. A custom compiled Paraview-5.0.1 in /apps/chpc/compmech/CFD/Paraview/bin
  3. A standard binary distribution Paraview-5.2.0 in /apps/chpc/compmech/ParaView-5.2.0-Qt4-OpenGL2-MPI-Linux-64bit
  4. A standard binary distribution Paraview-5.3.0 in /apps/chpc/compmech/ParaView-5.3.0_bininstall
  5. A standard binary distribution Paraview-5.4.1 in /apps/chpc/compmech/ParaView-5.4.1-Qt5-OpenGL-MPI-Linux-64bit
  6. A standard binary distribution Paraview-5.5.0 in /apps/chpc/compmech/ParaView-5.5.0-Qt5-MPI-Linux-64bit
  7. A standard binary distribution Paraview-5.8.0 in /apps/chpc/compmech/ParaView-5.8.0-MPI-Linux-Python3.7-64bit
  8. A custom-compiled server version of Paraview-5.8.1, accessible by way of the module chpc/compmech/Paraview/5.8.1-osmesa
  9. A standard binary server version of ParaView-5.12.1, accessible by way of the module chpc/compmech/Paraview/5.12.1-osmesa

These are also available as modules; use module avail to list them. There are two modules for each of Paraview-5.3.0, 5.4.1, 5.5.0, and 5.8.1, the second one being appended with -VNC: use this one for running Paraview with the Nvidia graphics card on a viz node. Please refer to the Paraview web site for instructions on using Paraview. For interactive graphics on a virtual desktop, an OpenGL-enabled environment is required, so refer to the page on Remote Visualization for instructions on setting up a connection to the visualization node and running Paraview with the VirtualGL wrapper. Paraview is an extremely versatile post-processor, and can be used as the primary visualization tool for a very wide range of applications and file formats, including OpenFOAM, MFix, SU2 and many others.
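
For example, assuming you want version 5.4.1, a typical way to find and load the corresponding module is:

    module avail chpc/compmech/Paraview
    module load chpc/compmech/Paraview/5.4.1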

Paraview client in a VNC session

For many small to moderate visualization tasks, the user may wish to run the Paraview client directly on the cluster, in a VNC session. This is easily done on one of the visualization nodes chpcviz1 or chpclic1. However, due to high levels of use on these nodes, or a demanding data set, the user may prefer to use a VNC session on a compute node. This process is documented and straightforward. However, it relies on persuading the Paraview client to work properly with the Mesa software rendering libraries, and this procedure fails with the later Paraview versions. The problem is an incompatibility between Mesa, VNC and the Qt-5 library. Older versions that rely on Qt-4 work extremely well on a compute node. The following versions have been tested.

Version | Paraview Module               | GALLIUM_DRIVER  | Command line
4.3.1   | chpc/compmech/Paraview/4.3.1  | llvmpipe or swr | paraview
5.2.0   | chpc/compmech/Paraview/5.2.0  | llvmpipe or swr | paraview
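
For example, a minimal session with the 5.2.0 client inside a VNC desktop on a compute node would be along these lines (setting GALLIUM_DRIVER as an environment variable is assumed here; llvmpipe and swr are the two options from the table):

    module load chpc/compmech/Paraview/5.2.0
    export GALLIUM_DRIVER=llvmpipe    # or: export GALLIUM_DRIVER=swr
    paraview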

Parallel Paraview

Paraview really comes into its own when it can make use of parallel processing (hence Paraview rather than Serialview, presumably). Limited testing has been done with the standard binary distributions of Paraview-5.3.0, 5.4.1 and 5.5.0. There are modules for these; module load chpc/compmech/Paraview/5.3.0 (and similarly for the other versions) will set up the appropriate environment. There are several ways in which Paraview can be used:

  1. Run pvserver in parallel on a compute node, with the front end somewhere else. This is the preferred method when dealing with large datasets.
  2. Run on a viz node (chpcviz1 or chpclic1) in single-processor mode.
  3. Run in parallel on a viz node.
  4. Run the data server (pvdataserver) in parallel on a compute node, with the front end and render server on a viz node.
  5. Use X-forwarding to run Paraview on a compute node with Mesa, with or without a parallel data server.

All these approaches need the module loaded in each shell from which the process will be started.

Single processor mode

Load the module with module load chpc/compmech/Paraview/5.4.1-VNC, and type vglrun paraview at the command prompt.
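
In other words:

    module load chpc/compmech/Paraview/5.4.1-VNC
    vglrun paraview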

Preferred method - Parallel using compute nodes, and remote front end

This method makes use of Mesa and avoids the use of the viz node and VNC (a consolidated example is given after this list):

  • Log in to the cluster, and get an exclusive X-enabled PBS session on a compute node: qsub -I -X -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00
  • In this terminal, load the module: module load chpc/compmech/Paraview/5.4.1
  • In this terminal, start up the paraview server with the command
    mpiexec -np 24 pvserver --mpi --mesa-llvm --use-offscreen-rendering

    Although you will not launch any graphics from this terminal, the software needs the X-capability to be available.

  • On your workstation, build an ssh-tunnel through lengau.chpc.ac.za to port 11111 on your interactive compute node, let's call it cnode1234 for argument's sake. Forward this port to port 11111 on your workstation. The command in Linux is ssh -f jblogs@lengau.chpc.ac.za -L 11111:cnode1234:11111 -N
  • Launch Paraview (exactly the same version) on your workstation, which can be running any operating system that supports Paraview. It does not need to be Linux.
  • Use the “Connect” menu in Paraview to set up port 11111@localhost as a Paraview server. Connect to this server.
  • Load the data into Paraview in the usual fashion.
  • Data processing and rendering will happen on the compute node, display and interaction on your local workstation.
  • If you want to use Paraview-5.8.1 for this, load the OSmesa module of Paraview, as well as an appropriate MPI module on the compute node before starting the Paraview server:
    module load chpc/compmech/Paraview/5.8.1-osmesa
    module load chpc/compmech/mpich/4.0/gcc-9.2.0-ssh
    mpiexec -np 24 pvserver --mpi --force-offscreen-rendering
  • For the latest version 5.12.1, the process is somewhat simpler. There is no need to load the mpich module, and the --mpi command line option is not required:
    module load chpc/compmech/Paraview/5.12.1-osmesa
    mpiexec -np 24 pvserver --force-offscreen-rendering
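
Putting the steps above together, a complete server-side and tunnel sequence for Paraview-5.4.1 looks like this (MECH1234, jblogs and cnode1234 are placeholders for your own project, username and assigned compute node):

    # On the login node: get an exclusive, X-enabled interactive session
    qsub -I -X -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00
    # On the compute node (note down its hostname, e.g. cnode1234)
    module load chpc/compmech/Paraview/5.4.1
    mpiexec -np 24 pvserver --mpi --mesa-llvm --use-offscreen-rendering
    # On your own workstation: tunnel port 11111, then connect the matching client to localhost:11111
    ssh -f jblogs@lengau.chpc.ac.za -L 11111:cnode1234:11111 -N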

Parallel on the viz node

  • Open two terminals. Load the 5.4.1-VNC module in the one from which you will be launching the GUI, and the 5.4.1 module in the terminal where you will start the server.
  • In the server terminal, enter the command:
    vglrun mpiexec -np 6 pvserver --mpi --use-offscreen-rendering 
  • In the GUI terminal, start paraview with vglrun paraview, then connect to the server (you have to set it up first in the Connect menu) and open your parallel data set. If your dataset is not inherently parallel, there is no point in using parallel visualisation. For OpenFOAM, leave your case decomposed, create an empty file such as bananas.foam (touch bananas.foam) and open that to load the case as decomposed. There appears to be a substantial performance advantage to decomposing (or re-decomposing) to the same number of processors as used for the Paraview data server.
  • Please do not use more than 6 parallel processes on the current viz nodes, which are interim stop-gap servers.
  • Do monitor (with top or htop) the load on the servers, and avoid overloading them. In due course, a more formal queue for visualisation and better servers will be established.
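
For reference, the two terminals on the viz node amount to the following (6 server processes, as recommended above):

    # Server terminal
    module load chpc/compmech/Paraview/5.4.1
    vglrun mpiexec -np 6 pvserver --mpi --use-offscreen-rendering
    # GUI terminal
    module load chpc/compmech/Paraview/5.4.1-VNC
    vglrun paraview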

Parallel using compute nodes

The data server does not need any GPU hardware, which makes it possible to run it on compute nodes. The process is as follows (a combined example is given after the list):

  • Get one or more compute nodes, either by means of an interactive PBS session, or a PBS batch job.
  • On the compute node, load the chpc/compmech/Paraview/5.4.1 module.
  • From the master compute node (note down its hostname), start the Paraview server with the following command:
    mpiexec -np 24 pvserver --mpi --use-offscreen-rendering 

    If you need more than one compute node, use a machinefile; if you are at the point where you need more than one node, you should already know how to do this.

  • On the viz node, start paraview with vglrun, as before, after loading the chpc/compmech/Paraview/5.4.1-VNC module.
  • Set up the selected compute node as a server that you can connect to, and do so.
  • This method gives reasonably good performance, especially if there is a lot of back end data processing involved.
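
In summary, on the master compute node and the viz node respectively (the machinefile line is only needed for more than one node; $PBS_NODEFILE is assumed to be available in a PBS session, and the exact machinefile option depends on the MPI implementation):

    # On the master compute node
    module load chpc/compmech/Paraview/5.4.1
    mpiexec -np 24 pvserver --mpi --use-offscreen-rendering
    # Multi-node variant, for example:
    # mpiexec -np 48 -machinefile $PBS_NODEFILE pvserver --mpi --use-offscreen-rendering
    # On the viz node
    module load chpc/compmech/Paraview/5.4.1-VNC
    vglrun paraview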

Parallel using compute nodes and render server on the viz node

  • Get a compute node as before, load the chpc/compmech/Paraview/5.4.1 module, and start up just the data server with the following command:
    mpiexec -np 24 pvdataserver --mpi
  • On the viz node, open two terminals, load the chpc/compmech/Paraview/5.4.1 module in the server terminal and chpc/compmech/Paraview/5.4.1-VNC in the terminal from which you will be launching the GUI.
  • In the server terminal, start the render server with
    vglrun mpiexec -np 6 pvrenderserver --mpi --use-offscreen-rendering

    Running the render server in parallel seems to provide a small improvement in performance.

  • In the other window, start paraview with vglrun paraview.
  • With the connection menu, set up a client / data server / render server connection with the appropriate hostnames and port numbers that you have meticulously noted down.
  • Establish the connection.
  • Load the parallel data set.
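
Collected together, the three pieces described above are:

    # On the compute node: data server only
    module load chpc/compmech/Paraview/5.4.1
    mpiexec -np 24 pvdataserver --mpi
    # On the viz node, server terminal: render server
    module load chpc/compmech/Paraview/5.4.1
    vglrun mpiexec -np 6 pvrenderserver --mpi --use-offscreen-rendering
    # On the viz node, GUI terminal: client
    module load chpc/compmech/Paraview/5.4.1-VNC
    vglrun paraview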

Use X-forwarding and Mesa to run Paraview entirely on a compute node

Paraview-5.3 and 5.4 are distributed with the Mesa software rendering libraries. Mesa can make very good use of the parallel and SIMD capabilities of the compute nodes, which means that 3D OpenGL graphics rendering without a dedicated GPU is surprisingly viable. The constraint on this method is the poor performance of X-forwarding, therefore this method is only viable if you have a high bandwidth connection to the cluster. If you want to run serial Paraview on a compute node, take the following steps:

  1. Log into lengau
  2. Get an X-enabled interactive PBS session, with a command like qsub -I -X -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00
  3. In this session, load the appropriate module: module load chpc/compmech/Paraview/5.4.1.
  4. In this session, you can launch Paraview with the command
    paraview --mesa-swr

    It should work as normal, except for poor performance.
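
In one block, the serial case is simply (MECH1234 being a placeholder for your own project):

    qsub -I -X -l select=1:ncpus=24:mpiprocs=24 -q smp -P MECH1234 -l walltime=4:00:00
    module load chpc/compmech/Paraview/5.4.1
    paraview --mesa-swr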

If you want to run parallel Paraview on a compute node, use exactly the same process, but in the initial interactive PBS session, start the Paraview server:

  1. module load chpc/compmech/Paraview/5.4.1
    mpiexec -np 24 pvserver --mpi --use-offscreen-rendering --mesa-swr
  2. If you are using a more recent version such as 5.8.1 (which has a custom-compiled “osmesa” version, accessible through a module), use slightly different syntax:
    module load chpc/compmech/Paraview/5.8.1-osmesa
    mpiexec -np 24 pvserver --mpi --force-offscreen-rendering 
  3. Start paraview on the X-forwarding enabled session, exactly as described above.
  4. Once in Paraview, connect to the Paraview server running on port 11111 on the same node.
  5. You may need to trade off the number of MPI processes against the number of processes available for the Mesa software rendering.
  6. You can also run the Paraview server on an entirely separate compute node. This way you can have a node (or multiple nodes) dedicated to data serving, and another to render serving and display.
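
For the parallel case with Paraview-5.4.1, the compute-node session therefore looks something like this (a second shell on the same node is assumed for the client, or run the server in the background, and connect the client to localhost:11111):

    # Terminal 1 (interactive PBS session on the compute node): the parallel server
    module load chpc/compmech/Paraview/5.4.1
    mpiexec -np 24 pvserver --mpi --use-offscreen-rendering --mesa-swr
    # Terminal 2 (same node, X-forwarded): the client, then connect to localhost:11111
    module load chpc/compmech/Paraview/5.4.1
    paraview --mesa-swr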

Any user asking for help when using the username jblogs, project name MECH1234 or the node cnode1234 will be banned from further cluster use and issued with a shovel for starting a new career.
