
Using AMR-Wind at the CHPC

AMR-Wind is a massively parallel, block-structured adaptive-mesh, incompressible flow solver for wind turbine and wind farm simulations. It is built on the AMReX framework for massively parallel, block-structured adaptive mesh refinement applications.

CHPC installations

There are two versions of the solver installed in the directory /home/apps/chpc/compmech/amr-wind:

  1. 23.09-7-nogpu for use on CPUs only.
  2. 23.09-7-gpu for use with NVIDIA GPUs.

Both versions have been compiled with gcc-8.3.0 and mpich-3.3. Access the appropriate version by loading the corresponding module:

  1. module load chpc/compmech/amr-wind/23.09-7-nogpu
  2. module load chpc/compmech/amr-wind/23.09-7-gpu

Refer to the AMR-Wind input manual for the appropriate format of the input file.
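As a rough illustration of the expected key = value layout (AMReX ParmParse style), a minimal hypothetical input file might look like the fragment below. The parameter values are placeholders only; consult the input manual for the full list of sections and valid settings.

```
# inputs.txt -- illustrative fragment, values are placeholders
time.stop_time      = 20.0          # final simulation time [s]
time.max_step       = -1            # no step limit (stop on time)
time.fixed_dt       = 0.05          # fixed time step [s]

incflo.physics      = FreeStream    # physics module to run
incflo.density      = 1.0           # fluid density

geometry.prob_lo    = 0.0 0.0 0.0   # low corner of the domain
geometry.prob_hi    = 1000. 1000. 1000.   # high corner of the domain
amr.n_cell          = 64 64 64      # base-level cells in x y z
amr.max_level       = 1             # number of refinement levels
```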

CPU-only example job script

runAMR-Wind-CPU.pbs
#!/bin/bash
### Use 6 24-core compute nodes for this calculation
#PBS -l select=6:ncpus=24:mpiprocs=24
#PBS -l walltime=2:00:00
#PBS -q normal
#PBS -P MECH1234
#PBS -o /mnt/lustre/users/jblogs/amr-wind_test_cpu/stdout.txt
#PBS -e /mnt/lustre/users/jblogs/amr-wind_test_cpu/stderr.txt
cd /mnt/lustre/users/jblogs/amr-wind_test_cpu
module load chpc/compmech/amr-wind/23.09-7-nogpu
nproc=$(wc -l < "$PBS_NODEFILE")
mpirun -iface ib0 -np $nproc amr_wind inputs.txt > amr-wind-nogpu.out

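The `nproc` line works because PBS writes one line to `$PBS_NODEFILE` for each MPI rank, so counting its lines gives the total rank count. The sketch below demonstrates this with a mock node file (on the cluster, PBS creates the real file automatically; the node names here are invented):

```shell
#!/bin/sh
# Build a mock PBS_NODEFILE for a 2-node, 24-ranks-per-node request.
# PBS lists each node once per MPI rank, so 2 nodes x 24 ranks = 48 lines.
PBS_NODEFILE=$(mktemp)
for node in node01 node02; do
  i=1
  while [ "$i" -le 24 ]; do
    echo "$node"
    i=$((i + 1))
  done
done > "$PBS_NODEFILE"

# Counting the lines recovers the total MPI rank count for mpirun -np.
nproc=$(wc -l < "$PBS_NODEFILE")
echo "total MPI ranks: $nproc"
rm -f "$PBS_NODEFILE"
```

For the 6-node, 24-ranks-per-node request in the script above, the same count yields 144 ranks.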
GPU-accelerated example job script

runAMR-Wind-GPU.pbs
#!/bin/bash
### Use 3 V-100 GPUs for this calculation
#PBS -l select=1:ncpus=6:mpiprocs=6:ngpus=3
#PBS -l walltime=1:00:00
#PBS -q gpu_3
#PBS -P MECH1234
#PBS -o /mnt/lustre/users/jblogs/amr-wind_test_gpu/stdout.txt
#PBS -e /mnt/lustre/users/jblogs/amr-wind_test_gpu/stderr.txt
cd /mnt/lustre/users/jblogs/amr-wind_test_gpu
module load chpc/compmech/amr-wind/23.09-7-gpu
mpirun -iface ib0 -np 3 amr_wind inputs.txt > amr-wind-gpu.out  
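Note that `-np 3` matches the `ngpus=3` request: the usual AMReX/AMR-Wind convention is one MPI rank per GPU. The sketch below checks that pairing for the select line used in this script (the variable names are illustrative, not PBS-provided):

```shell
#!/bin/sh
# One-rank-per-GPU sanity check for the job script above.
# select_line is copied from the #PBS -l line; np is the mpirun -np value.
select_line="select=1:ncpus=6:mpiprocs=6:ngpus=3"
ngpus=${select_line##*ngpus=}   # strip everything up to "ngpus=" -> "3"
np=3

if [ "$np" -eq "$ngpus" ]; then
  echo "OK: $np MPI ranks for $ngpus GPUs"
else
  echo "Mismatch: adjust mpirun -np to match the ngpus request" >&2
fi
```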
/app/dokuwiki/data/pages/howto/amr-wind.txt · Last modified: 2023/09/18 08:49 by ccrosby