====== Visualisation servers ======
The visualization servers are intended for single-process pre- and post-processing only, as well as GUI monitoring of running jobs. These servers are NOT intended for parallel processing or running compute tasks. **The system administrators will terminate processes that do not fit the above description without warning.** However, it is now possible to get a [[http://wiki.chpc.ac.za/howto:remote_viz#getting_a_virtual_desktop_on_a_compute_node|Virtual Desktop on a compute node]], where you can use the full capabilities of the compute node without restriction.

====== How to Videos ======
  - Basic ssh login : [[https://youtu.be/XEPZLtDf1Bw|ssh login to Lengau]]
  - Setting up ssh keys : [[https://youtu.be/AfXtM7t4MGs|ssh keys]]
  - Getting a VNC session on a visualization node : [[https://youtu.be/t3-Vc7zjxfQ|VizNode VNC session]]
  - Getting a VNC session on a compute node : [[https://youtu.be/EDpB-pB08Vo|Compute Node VNC session]]

====== Remote Visualization ======
The vncserver will continue running even after logging out. If you are no longer going to use it, please kill the server as follows:
  /apps/chpc/compmech/TurboVNC-2.2.3/bin/vncserver -kill :3
where :3 should be changed to whichever display the server has been running on.
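If you have lost track of which display number your session is using, the same vncserver script can list your own active sessions. A minimal sketch, assuming the ''-list'' option is available in the installed TurboVNC version:
<code>
# List your running TurboVNC sessions and their display numbers (assumes -list is supported)
/apps/chpc/compmech/TurboVNC-2.2.3/bin/vncserver -list
</code>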
===== Shut down the TurboVNC session when done =====
  /apps/chpc/compmech/TurboVNC-2.2.3/bin/vncserver -kill :3
will shut down the VNC server for display 3, freeing up resources for other users.
=== Launch the VNC server ===
Make sure that you have a suitable $HOME/.vnc/xstartup.turbovnc file. The TurboVNC installation has been customized to generate a suitable version of this file by default when it is run for the first time, but if you have difficulties, ensure that it looks like this:
<code>
Now launch the VNC server:
<code>
/apps/chpc/compmech/TurboVNC-2.2.3/bin/vncserver :1 -depth 24 -geometry 1800x1000
</code>
Use a resolution that fits your display. Make note of the compute node hostname.
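You will need that hostname to build the ssh tunnel from your own workstation. A minimal sketch, assuming the compute node is cnode0123 and that you tunnel through the Lengau login node (substitute your own username and node name):
<code>
# Run on your own workstation: forward local port 5901 to display :1 (port 5901)
# on the compute node, using the login node as the intermediary.
# "cnode0123" and "justsomeuser" are placeholders.
ssh -L 5901:cnode0123:5901 justsomeuser@lengau.chpc.ac.za
</code>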
#PBS -m abe
#PBS -M justsomeuser@gmail.com
### The following line writes the hostname of your compute node to the file hostname.txt
hostname > hostname.txt
/apps/chpc/compmech/TurboVNC-2.2.5/bin/vncserver :1 -depth 24 -geometry 1600x900
sleep 2h
/apps/chpc/compmech/TurboVNC-2.2.5/bin/vncserver -kill :1
</file>
Obviously customise this script to suit your own circumstances. The above script will provide 4 cores for two hours. DO NOT use more than 4 cores in the VNC session. If you intend to use a full compute node, use the smp queue and request 24 cores. If you need more than 2 hours, specify the walltime as well as the sleep time accordingly. Get the compute node hostname either by looking in the file hostname.txt or by querying PBS: ''qstat -awu username'' will give you the jobnumbers of all your jobs (please substitute "username" with your OWN username). ''qstat -n1 jobnumber'' will give you the hostname of the compute node(s) that you are using for that job.
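As an example of adjusting the times, an eight-hour desktop session would need both the walltime request and the sleep duration in the script above to change (eight hours is purely an illustration):
<code>
### In the PBS header of the script:
#PBS -l walltime=8:00:00
### And replace the "sleep 2h" line with:
sleep 8h
</code>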
=== Connect your VNC client ===
Now, as before, fire up your VNC client, preferably TurboVNC, and connect to port 5901 (or 1 in VNC shorthand) on localhost. Once connected, you will be confronted with a blank grey screen. Click your right mouse button to pop up a small menu, and select xterm, which will give you a small terminal. You can resize the window by holding down the alt key and the right mouse button, and dragging the mouse. Refer to the [[http://fluxbox.org|Fluxbox homepage]] if you need help with other operations.
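On a Linux workstation this can also be done from the command line. A sketch, assuming TurboVNC is installed in its default location on your own machine:
<code>
# Connect to display :1 on localhost, i.e. the tunnelled port 5901
# (the path is TurboVNC's default Linux install location and may differ on your machine)
/opt/TurboVNC/bin/vncviewer localhost:1
</code>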
=== Running Interactive Software ===
Up to this point, the process has been basically identical to using a visualization node. However, **the compute nodes do not support VirtualGL**, as they do not have dedicated graphics cards. All is not lost however:
  - Software GUIs with menus, etc. will generally just work.
  - Software linking dynamically to OpenGL libraries will probably work well with the chpc/compmech/mesa/20.2.2_swr or chpc/compmech/mesa/19.1.2_swr modules. The currently installed Mesa-20.2.2 supports software rendering using either LLVMpipe or Intel's SWR. By default the module activates the LLVMpipe method, but this is easily changed: ''export GALLIUM_DRIVER=swr'' or ''export GALLIUM_DRIVER=llvmpipe'' (see the short example after this list).
  - Older programs, such as ParaView-4.3 (''/apps/chpc/compmech/CFD/ParaView-4.3.1-Linux-64bit/bin/paraview''), work very well with the Mesa-20.2.2 implementation. More recent ParaView versions are built with --mesa-swr and --mesa-llvm support. Although these work well with X-forwarding, they don't work in the VNC environment. We are not sure why, but we are working on it.
  - Some software vendors provide binaries that are statically linked to a Mesa library. These will generally work, but may be a bit slow.
  - STARCCM+ has its own version of Mesa-SWR. Use the command line options ''-mesa -rr -rrthreads N'', where N is the number of cores that should be used for graphics rendering. In this implementation, you can use up to 16 threads.
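To illustrate the second point in the list above, a typical sequence in the xterm of your VNC session might look like this (the application name is a placeholder for your own OpenGL-based program):
<code>
module load chpc/compmech/mesa/20.2.2_swr   # software-rendered OpenGL, defaults to LLVMpipe
export GALLIUM_DRIVER=swr                   # optional: switch to the OpenSWR renderer instead
./my_opengl_application                     # placeholder: launch your own GUI application
</code>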
=== Notes on Mesa Software Rendering ===
Historically, Mesa software rendering has been a way of getting OpenGL-enabled software to work, albeit very slowly, on hardware without dedicated graphics-processing capabilities. However, the [[http://www.openswr.org|OpenSWR]] framework makes full use of the SSE and AVX capabilities of modern CPUs to produce good rendering performance. Recent versions of the alternative [[https://www.mesa3d.org/llvmpipe.html|LLVMpipe]] implementation have similar performance, and the ''chpc/compmech/mesa/20.2.2_swr'' module defaults to LLVMpipe.
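To check which renderer your session is actually using, query the OpenGL renderer string from within the VNC session. A sketch, assuming the ''glxinfo'' utility is available on the node:
<code>
# Reports a renderer string containing "llvmpipe" or "SWR", depending on GALLIUM_DRIVER
glxinfo | grep "OpenGL renderer"
</code>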