howto:remote_viz : revisions of 2020/07/06 14:42 and 2020/11/11 15:03 (current), both by ccrosby
====== How to Videos ======
  - Basic ssh login : [[https://youtu.be/XEPZLtDf1Bw|ssh login to Lengau]]
  - Setting up ssh keys : [[https://youtu.be/AfXtM7t4MGs|ssh keys]]
  - Getting a VNC session on a visualization node : [[https://youtu.be/t3-Vc7zjxfQ|VizNode VNC session]]
  - Getting a VNC session on a compute node : [[https://youtu.be/EDpB-pB08Vo|Compute Node VNC session]]
  
  
  
=== Launch the VNC server ===
Make sure that you have a suitable $HOME/.vnc/xstartup.turbovnc file. The TurboVNC installation has been customized to generate a suitable version of this file by default when it is run for the first time, but if you have difficulties, ensure that it looks like this:
  
<code>
#PBS -m abe
#PBS -M justsomeuser@gmail.com
###  The following line writes the hostname of your compute node to the file hostname.txt
hostname > hostname.txt
/apps/chpc/compmech/TurboVNC-2.2.5/bin/vncserver :1 -depth 24 -geometry 1600x900
sleep 2h
/apps/chpc/compmech/TurboVNC-2.2.5/bin/vncserver -kill :1
</file>
  
Obviously customise this script to suit your own circumstances. The above script will provide 4 cores for two hours. DO NOT use more than 4 cores in the VNC session. If you intend using a full compute node, use the smp queue and request 24 cores. If you need more than 2 hours, specify the walltime as well as the sleep time accordingly. Get the compute node hostname either by looking in the file hostname.txt or by querying PBS: ''qstat -awu username'' will give you the job numbers of all your jobs (please substitute "username" with your OWN username). ''qstat -n1 jobnumber'' will give you the hostname of the compute node(s) that you are using for that job.
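As a sketch of that lookup, assuming the job script above has already run on the node (''cnode0123'' is a placeholder standing in for the real node name that ''hostname > hostname.txt'' would have written):

```bash
# Simulate what the job script does on the compute node; "cnode0123"
# is a placeholder node name, not a real Lengau hostname.
echo "cnode0123" > hostname.txt

# Read the node name back, as you would in your submit directory:
node="$(cat hostname.txt)"
echo "VNC server is running on: $node"

# Alternatively, query PBS (substitute your own username / job number):
#   qstat -awu username     # lists the job numbers of all your jobs
#   qstat -n1 jobnumber     # shows the node(s) assigned to that job
```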
  
  
  
=== Connect your VNC client ===
Now, as before, fire up your VNC client, preferably TurboVNC, and connect to port 5901 (or 1 in VNC shorthand) on localhost. Once connected, you will be confronted with a blank grey screen. Click your right mouse button to pop up a small menu, and select xterm, which will give you a small terminal. You can resize the window by holding down the alt key and the right mouse button, and dragging the mouse. Refer to the [[http://fluxbox.org|Fluxbox homepage]] if you need help with other operations.
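The connection step can be sketched as follows, assuming the usual ssh port-forward through the login node. The node name, username, and login host here are placeholders to substitute with your own values:

```bash
# Placeholders -- substitute the node from hostname.txt/qstat and your username:
node="cnode0123"
user="username"

# Forward local port 5901 to the VNC server on the compute node
# (run on your own workstation; -f -N backgrounds the tunnel):
tunnel="ssh -f -N -L 5901:${node}:5901 ${user}@lengau.chpc.ac.za"
echo "$tunnel"

# Then point TurboVNC at the forwarded port, e.g.:
#   vncviewer localhost:5901
```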
  
=== Running Interactive Software ===
Up to this point, the process has been basically identical to using a visualization node. However, **the compute nodes do not support VirtualGL**, as they do not have dedicated graphics cards. All is not lost however:
  - Software GUIs with menus, etc. will generally just work.
  - Software linking dynamically to OpenGL libraries will probably work well with the chpc/compmech/mesa/20.2.2_swr or chpc/compmech/mesa/19.1.2_swr modules. The currently installed Mesa-20.2.2 supports software rendering using either LLVMpipe or Intel's swr. By default the module activates the LLVMpipe method, but this is easily changed: ''export GALLIUM_DRIVER=swr'' or ''export GALLIUM_DRIVER=llvmpipe''.
  - Older programs, such as Paraview-4.3 (''/apps/chpc/compmech/CFD/ParaView-4.3.1-Linux-64bit/bin/paraview'') work very well with the Mesa-20.2.2 implementation. More recent ParaView versions are built with --mesa-swr and --mesa-llvm support. Although these work well with X-forwarding, they don't work in the VNC environment. We are not sure why, but we are working on it.
  - Some software vendors provide binaries that are statically linked to a Mesa library. These will generally work, but may be a bit slow.
  - STARCCM+ has its own version of Mesa-SWR. Use the command line options ''-mesa -rr -rrthreads N'', where N is the number of cores that should be used for graphics rendering. In this implementation, you can use up to 16 threads.
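Putting the renderer selection together, a minimal sketch (the module name is the one given above; the ''module load'' is guarded so the snippet also runs on a machine without the module system):

```bash
# Load the Mesa module named in this guide; suppress and ignore the
# error when the module command is not available (e.g. off-cluster):
module load chpc/compmech/mesa/20.2.2_swr 2>/dev/null || true

# Select the software renderer: swr (OpenSWR) or llvmpipe (the default):
export GALLIUM_DRIVER=swr
echo "Using Gallium driver: $GALLIUM_DRIVER"

# Inside the VNC session you can confirm which renderer is active with:
#   glxinfo | grep "OpenGL renderer"
```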
  
=== Notes on Mesa Software Rendering ===
Historically, Mesa software rendering has been a way of getting OpenGL-enabled software to work, albeit very slowly, on hardware without dedicated graphics-processing capabilities. However, the [[http://www.openswr.org|OpenSWR]] framework makes full use of the SSE and AVX capabilities of modern CPUs to produce good rendering performance. Recent versions of the alternative [[https://www.mesa3d.org/llvmpipe.html|LLVMpipe]] implementation have similar performance, and the ''chpc/compmech/mesa/20.2.2_swr'' module defaults to LLVMpipe.
  
/var/www/wiki/data/attic/howto/remote_viz.1594039358.txt.gz · Last modified: 2020/07/06 14:42 by ccrosby