

Visualization servers

The visualization servers are intended only for single-process pre- and post-processing, and for GUI monitoring of running jobs. These servers are NOT intended for parallel processing or running compute tasks. The system administrators will terminate processes that do not fit the above description without warning. However, it is now possible to get a Virtual Desktop on a compute node, where you can use the full capabilities of the compute node without restriction.

Remote Visualization

There is a dedicated visualization server called chpcviz1, and an identical backup server called chpclic1. Please note that these servers should be used for pre- and post-processing only. If the servers are being overloaded, we reserve the right to kill your processes without warning. These servers mount the Lustre and NFS file systems, have 64 GB of RAM and two Intel Xeon E5-2640 processors with 6 cores each, as well as NVIDIA Tesla K20m GPU cards.

For security reasons, these servers do not have IP addresses visible from outside the CHPC. However, they can be accessed via the login node by means of SSH tunneling. Although it is possible to perform remote visualization by means of X-forwarding (log in with “ssh -X”), this approach is generally too slow to be of use unless the user is on the internal network. It is therefore preferable to use a remote desktop. However, standard VNC is too slow for this and does not work properly with OpenGL, the library most commonly used for 3D graphics. The visualization server has been set up with TurboVNC and VirtualGL, which together get around both of these problems. Use the following process for remote visualization:

  • Log in to the system by normal means (command line ssh or PuTTY)
  • From the cluster login node, log in to chpcviz1 by means of ssh (command: ssh chpcviz1)
  • On chpcviz1, start up a TurboVNC server for your use. You can either leave this login session open or disconnect after starting the TurboVNC server, which will continue running until it is manually shut down.
  • From your own system, set up an ssh tunnel to forward a port on your system to the appropriate port on chpcviz1
  • Use the TurboVNC client to connect to the VNC server on chpcviz1
  • Run your graphics program with vglrun, or run your entire virtual desktop session with the vglrun wrapper, as per the instructions given below.
  • When finished, close down the TurboVNC client, ssh into chpcviz1 again and kill the TurboVNC server session
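Condensed into commands, a typical session looks like the sketch below; each step is explained in detail in the sections that follow. This assumes display :3, local port 5903 and the hypothetical user-id jblogs:

# On lengau, after logging in from your own machine:
ssh chpcviz1
/opt/TurboVNC/bin/vncserver :3 -geometry 1920x1080 -depth 24
exit
# On your own machine, in a second terminal, set up the tunnel:
ssh -f jblogs@lengau.chpc.ac.za -L 5903:chpcviz1:5903 -N
# ... connect the TurboVNC viewer to localhost::5903 ...
# When finished, from the lengau login node:
ssh chpcviz1 /opt/TurboVNC/bin/vncserver -kill :3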

Starting and Stopping the VNC server on chpcviz1

There is a default VNC server installed on chpcviz1. Do not use it. Use TurboVNC instead, which can be started with a command like this:

/opt/TurboVNC/bin/vncserver :3 -geometry 1920x1080 -depth 24

First time instructions

The default startup script that TurboVNC supplies for your X Windows session is faulty. We are working on providing a better default, but in the meantime, please use the following interim work-around:

  • When starting up a VNC session for the first time on this system, start the default VNC server rather than TurboVNC.
  • As with TurboVNC, the vncserver will ask you to specify a password if this is a first usage.
  • You can kill the vncserver again immediately; you only need its X Windows startup script as a template.
  • Change directory into $HOME/.vnc
  • cp xstartup xstartup.turbovnc
  • Open xstartup.turbovnc in your favourite editor, and change the file so that it looks like the example below.
  • The reason for going through the above process is to make sure that xstartup.turbovnc has the right permissions. Alternatively, you can simply create the file in a text editor and correct the permissions afterwards: chmod 0700 xstartup.turbovnc. These steps are condensed into a command sketch after this list.
  • A word on window managers: Mate and Xfce4 provide full desktops with menus, etc., whereas Fluxbox is a minimalist window manager. Fluxbox starts with a featureless black screen; click the right mouse button to get a menu from which you can start an Xterm, and resize windows with Alt and the right mouse button.
  • It is not strictly speaking necessary to start your desktop session with the vglrun wrapper, but doing so makes it unnecessary to use it when running OpenGL applications in the desktop session.
  • Now start a TurboVNC session in the usual manner.
  • If you have trouble with your configuration, or you have forgotten your VNC password, simply delete the $HOME/.vnc directory and start over.
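Condensed into commands, the first-time work-around looks like this (a sketch, assuming the stock vncserver is on your PATH and display :3 is free):

vncserver :3                  # stock server; sets your VNC password on first use
vncserver -kill :3            # kill it again; we only need its startup script
cd $HOME/.vnc
cp xstartup xstartup.turbovnc
chmod 0700 xstartup.turbovnc  # ensure the correct permissions
# now edit xstartup.turbovnc to match the example below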

Your xstartup.turbovnc file should look like this:

#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
# The next line (active by default) starts the Mate desktop
exec /bin/mate-session
# Uncomment the next line if you want to use the xfce4 desktop
#exec /bin/xfce4-session
# Uncomment the next two lines if you want to use the very lightweight window manager Fluxbox.  
#export PATH=/apps/chpc/compmech/fluxbox-1.3.7/bin:$PATH
#exec /apps/chpc/compmech/fluxbox-1.3.7/bin/startfluxbox

More than one VNC session can be run simultaneously. In this example, two other sessions are already running, so :3 is used to specify that this will be a session on virtual display 3. If this display is unavailable, try :4, and so on. Optionally specify an appropriate display resolution and colour depth, as in the above example. If you get a warning about an unsupported resolution, try one of the standard resolutions. The port number used for a VNC session is 5900+n, where n is the number of the display. In this example, the VNC session will be served on port 5903.

The vncserver will continue running even after logging out. If you are no longer going to use it, please kill the server as follows:

/opt/TurboVNC/bin/vncserver -kill :3

where :3 should be changed to whichever display the server has been running on.
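If you are unsure which display your server is running on, TurboVNC can list your sessions (the -list option is available in recent TurboVNC releases):

/opt/TurboVNC/bin/vncserver -list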

Setting up an ssh Tunnel

You cannot log into chpcviz1 directly from outside CHPC's network. However, it is easy to set up an ssh tunnel to it. There are a number of ways to set up such a tunnel:

Setting up an ssh Tunnel from a Linux or Cygwin client

This is an example of the command:

ssh -f user@lengau.chpc.ac.za -L 5903:chpcviz1:5903 -N

  • Obviously change “user” to your own user-id.
  • The -f option puts the ssh session in the background. If you want to log in anyway (to start the VNC server on chpcviz1, for example), omit this option.
  • The -L option is essential for tunneling.
  • The -N option prevents ssh from executing a remote command. This should also be omitted if you want an interactive session.
  • 5903:chpcviz1:5903 means that port 5903 on the localhost will be forwarded to port 5903 on chpcviz1. The port number on the local host is arbitrary; you can use any number greater than 1024. However, there is some merit in using consistent values. chpcviz1:5903 is the destination port, with the port number given by 5900+n, as described above.
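For example, to get an interactive login session on the login node that carries the tunnel at the same time, drop both -f and -N (jblogs being a hypothetical user-id):

ssh jblogs@lengau.chpc.ac.za -L 5903:chpcviz1:5903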

Setting up an ssh Tunnel with PuTTY under Windows

Any VNC client can be used to connect to the TurboVNC server on chpcviz1. However, to take full advantage of the higher speed and configuration options of TurboVNC, use the TurboVNC client as well. It can be downloaded from http://sourceforge.net/projects/virtualgl/files/TurboVNC/ . The Windows installer includes a customized version of PuTTY. Once TurboVNC has been installed, run the PuTTY executable in the TurboVNC installation directory. In the left pane, expand the SSH option and click on Tunnels. Enter your chosen local port number in the Source port box, and chpcviz1:5903 (using the port number that your VNC server is using) in the Destination box.

Now click on “Add”.

Log in with your usual user-id and password.
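If you prefer the command line under Windows, the plink.exe utility that ships with PuTTY accepts the same -L tunnel syntax as OpenSSH (a sketch; adjust the path to wherever plink.exe is installed):

plink.exe -L 5903:chpcviz1:5903 user@lengau.chpc.ac.za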

Using the TurboVNC client

If you haven't done so already, install the TurboVNC client, either from http://sourceforge.net/projects/virtualgl/files/TurboVNC/ or from a repository for your version of Linux. Start the TurboVNC Viewer client and specify localhost::5903 (or whichever local port number you have selected) as the VNC server. The documentation recommends using double colons and the full 590n port number, but the single-colon display form (for example, localhost:3 when the local port is 5903) also works.
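On Linux, the viewer can also be started from the command line (path as for a default TurboVNC installation):

/opt/TurboVNC/bin/vncviewer localhost::5903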

Click on connect and log in with the VNC password you provided when starting the VNC server for the first time.

You should now get a remote desktop.

Clicking on the top left corner will open an “Options” menu.

Experiment with the various settings. You can trade off quality for speed. On a slow connection, use fast low quality settings to set up the scene, then request a “Lossless refresh” to get a high quality image.

Using VirtualGL

3D programs (Paraview, for example) mostly use OpenGL. In a normal VNC session, OpenGL is most likely to throw an error, or at best run with software rendering. In order to take advantage of the graphics processing hardware on the server, it is necessary to run OpenGL programs with the VirtualGL “wrapper”. For example:

/opt/VirtualGL/bin/vglrun /apps/chpc/compmech/CFD/Paraview-4.3.1-Linux-64bit/bin/paraview &

will run a recent version of Paraview. If you have started your X Windows session with the vglrun wrapper as per the first-time instructions given above, it is not necessary to use it when starting the OpenGL application. The following $HOME/.vnc/xstartup.turbovnc file makes life easier:

#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
# The next line (active by default) starts the Mate desktop under the vglrun wrapper
exec /opt/VirtualGL/bin/vglrun /bin/mate-session
# Uncomment the next line if you want to use the xfce4 desktop
#exec  /opt/VirtualGL/bin/vglrun /bin/xfce4-session
# Uncomment the next two lines if you want to use the very lightweight window manager Fluxbox.  
#export PATH=/apps/chpc/compmech/fluxbox-1.3.7/bin:$PATH
#exec /opt/VirtualGL/bin/vglrun /apps/chpc/compmech/fluxbox-1.3.7/bin/startfluxbox
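Whichever approach you use, you can verify that hardware rendering is active by inspecting the OpenGL renderer string (a quick check, assuming the standard glxinfo utility is installed):

/opt/VirtualGL/bin/vglrun glxinfo | grep "OpenGL renderer"

If VirtualGL is working, this should name the Tesla GPU rather than a software renderer.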

Shut down the TurboVNC session when done

/opt/TurboVNC/bin/vncserver -kill :3

will shut down the VNC server for display 3, freeing up resources for other users.

Getting a Virtual Desktop on a compute node

It is now possible to run a VNC session directly on a compute node. The advantage of this is that far more compute power is available, as the standard compute nodes have 24 compute cores and either 128 GB or 64 GB of RAM. Even a 56-core, 1 TB fat node can be used this way, if your use case justifies it. The compute cores support AVX instructions, and can consequently render 3D graphics very effectively. The process is very similar to what is described above, with some important differences.

Get an interactive compute node

The command

qsub -X -I -l select=1:ncpus=24:mpiprocs=24:mem=120gb -l walltime=8:00:00 -q smp -P MECH1234

will get you a full compute node for 8 hours.
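The interactive job opens a shell directly on the allocated node; confirm the node's name, which you will need later for the tunnel:

hostname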

Launch the VNC server

Make sure that you have a suitable $HOME/.vnc/xstartup.turbovnc file:

#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
export PATH=/apps/chpc/compmech/fluxbox-1.3.7/bin:$PATH
exec /apps/chpc/compmech/fluxbox-1.3.7/bin/startfluxbox

This file loads up the minimalist window manager Fluxbox, rather than a full desktop. Fluxbox also works well on the standard visualization nodes, but the full desktop is not available on compute nodes, hence the requirement for a very lightweight window manager.

Now launch the VNC server:

/apps/chpc/compmech/TurboVNC-2.1.2/bin/vncserver :1 -depth 24 -geometry 1800x1000 

Use a resolution that fits your display. Make note of the compute node hostname.

Build an ssh tunnel to the compute node

This is identical to the process described above for a visualization node, except that you now tunnel to the compute node that you have been allocated. If you are using Windows, you can set up the tunnel with PuTTY or MobaXterm; from Linux, use the command line as in this example:

ssh -f jblogs@lengau.chpc.ac.za -L 5901:cnode1234:5901 -N

As usual, use your own username and the name of your allocated compute node, not jblogs and cnode1234!

Connect your VNC client

Now, as before, fire up your VNC client, preferably TurboVNC, and connect to port 5901 (or :1 in VNC shorthand) on localhost. Once connected, you will be confronted with a blank black screen. Click your right mouse button to pop up a small menu and select xterm, which will give you a small terminal. You can resize a window by holding down the Alt key and the right mouse button and dragging the mouse. Refer to the Fluxbox homepage if you need help with other operations.

Running Interactive Software

Up to this point, the process has been essentially identical to using a visualization node. However, the compute nodes do not support VirtualGL, as they do not have dedicated graphics cards. All is not lost, however:

  1. Software GUIs with menus, etc. will generally just work.
  2. Software linking dynamically to OpenGL libraries will probably work well with the chpc/compmech/mesa/19.0.4_swr module. The currently installed Mesa-19.0.4 supports only up to OpenGL 2.1; we are working on improving this installation to support at least OpenGL 3.2. Older programs, such as Paraview-4.3 (/apps/chpc/compmech/CFD/ParaView-4.3.1-Linux-64bit/bin/paraview), work very well with the Mesa-19.0.4 implementation (see the sketch after this list).
  3. Some software vendors provide binaries that are statically linked to a Mesa library. These will generally work, but may be a bit slow.
  4. STAR-CCM+ has its own version of Mesa-SWR. Use the command line options -mesa -rr -rrthreads N, where N is the number of cores that should be used for graphics rendering. In this implementation, you can use up to 16 threads.
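As a minimal sketch of item 2, using the module name and ParaView path given above, run the following in an xterm inside your VNC desktop:

module load chpc/compmech/mesa/19.0.4_swr
/apps/chpc/compmech/CFD/ParaView-4.3.1-Linux-64bit/bin/paraview &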

Notes on OpenSWR

Historically, Mesa software rendering has been a way of getting OpenGL-enabled software to work, albeit very slowly, on hardware without dedicated graphics-processing capabilities. However, the OpenSWR framework makes full use of the SSE and AVX capabilities of modern CPUs to produce excellent rendering performance.
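To check which of these vector instruction sets a given node supports, you can inspect /proc/cpuinfo (a quick check that works on any Linux node):

grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u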
