The Lengau cluster at the CHPC includes 9 GPU compute nodes with a total of 30 Nvidia V100 GPU devices. There are 6 gpu200n nodes with 3 GPUs each, and 3 gpu400n nodes with 4 GPUs each.
GPU Node | CPU Cores | GPU Devices | Interface |
---|---|---|---|
gpu2001 | 36 | 3× Nvidia V100 16GB | PCIe |
gpu2002 | 36 | 3× Nvidia V100 16GB | PCIe |
gpu2003 | 36 | 3× Nvidia V100 16GB | PCIe |
gpu2004 | 36 | 3× Nvidia V100 16GB | PCIe |
gpu2005 | 36 | 3× Nvidia V100 32GB | PCIe |
gpu2006 | 36 | 3× Nvidia V100 32GB | PCIe |
gpu4001 | 40 | 4× Nvidia V100 16GB | NVLink |
gpu4002 | 40 | 4× Nvidia V100 16GB | NVLink |
gpu4003 | 40 | 4× Nvidia V100 16GB | NVLink |
Jobs that require 1, 2 or 3 GPUs can be allocated to any node, and will share the node if the job does not use all the GPU devices on that node. Jobs that require 4 GPUs can only be allocated to gpu4* nodes and will have exclusive use of the node.
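A minimal PBS Pro job script for a single-GPU job might look like the sketch below. The queue name `gpu_1`, the `ngpus` resource and the project code are assumptions for illustration; check the scheduler configuration or the CHPC documentation for the exact names used on Lengau.

```bash
#!/bin/bash
#PBS -N v100-test
#PBS -q gpu_1                       # assumed queue name for single-GPU jobs
#PBS -l select=1:ncpus=10:ngpus=1   # 1 GPU plus a proportional share of the CPU cores
#PBS -l walltime=04:00:00
#PBS -P PROJ0000                    # hypothetical project code; use your own

cd "$PBS_O_WORKDIR"

# Confirm which GPU(s) the scheduler made visible to this job
nvidia-smi

# Launch the GPU application (placeholder name)
./my_gpu_app
```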
The GPU nodes each have 192 GiB of RAM, so a job script that requests an entire node can access approximately 188 GiB (the operating system reserves about 4 GiB).
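A whole-node request on a gpu4* node would then ask for all 40 cores, all 4 GPUs and no more than the usable memory. Again, the queue name `gpu_4` and the `ngpus` resource are assumptions:

```bash
#PBS -q gpu_4                                 # assumed queue name for 4-GPU jobs
#PBS -l select=1:ncpus=40:ngpus=4:mem=188gb   # entire gpu4* node; stay under the ~188 GiB usable RAM
```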
Access to the GPU nodes is by PI application only through the CHPC Helpdesk.