Execute qgromacs_2018, qgromacs_2018-2, qgromacs_2019-4 or qgromacs_2020-1 on the login node and follow the prompts. The script handles PBS setup and submission, and sets up the MPI environment variables so that you can run over multiple processors and nodes. Examples of script execution are provided below.
Note: please load this module first (better still, add this line to your .bashrc):
module load chpc/easy_scripts
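For example, to make the module load automatically at every login, you can append the line to your .bashrc:

echo 'module load chpc/easy_scripts' >> ~/.bashrc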
EXAMPLE1
Enter project name/shortname
CHEM0100
Enter default filename for all file options (without any extension to the file)
test
Enter number of nodes on which to run job
1
Enter total walltime (hour:minute)
2:00
Enter email address
testing@gmail.com
Generated pbs file for test
Do you wish to submit job to cluster (y/n)
y
EXAMPLE2 [PLEASE TAKE NOTE OF THE EMPTY RESPONSES LEFT AT SOME PROMPTS]
Enter project name/shortname
CHEM0100
Enter default filename for all file options (without any extension to the file)
test
Enter number of nodes on which to run job

Enter total walltime (hour:minute)

Enter email address
testing@gmail.com
Generated pbs file for test
Do you wish to submit job to cluster (y/n)
y
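To make the prompts concrete, the PBS file generated for EXAMPLE1 might look roughly like the sketch below. This is illustrative only: the queue name, core counts, GROMACS module name and mdrun invocation are assumptions, since the exact contents depend on which qgromacs script version you ran.

#!/bin/bash
#PBS -P CHEM0100                       # project name/shortname from the prompt
#PBS -N test                           # job name from the default filename
#PBS -l select=1:ncpus=24:mpiprocs=24  # 1 node as entered; core count is an assumption
#PBS -l walltime=2:00:00               # walltime from the prompt
#PBS -q normal                         # queue name is an assumption
#PBS -m abe
#PBS -M testing@gmail.com              # email address from the prompt

module load chpc/gromacs/2020.1        # module name is an assumption
cd $PBS_O_WORKDIR

# MPI launch line is illustrative; the script sets up the MPI environment for you
mpirun -np 24 gmx_mpi mdrun -deffnm test

If you answer n at the final prompt, the generated file (for instance test.pbs; the exact filename is an assumption) can be inspected and submitted later by hand with qsub test.pbs, and monitored with qstat -u $USER.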
Please note that you first need to be added to the gpu_1 queue before you can submit jobs to the GPU cluster.
Execute qgromacs_2018-2_gpu, qgromacs_2018-6_gpu, qgromacs_2019-4_gpu, qgromacs_2020-1_gpu or qgromacs_2021-1 on the login node and follow the prompts. Examples of script execution are provided below.
EXAMPLE1
Enter research programme name
CHEM0100
Enter default filename for all file options (without any extension to the file)
test
Enter total walltime (hour:minute)
2:00
Enter email address
testing@gmail.com
Generated pbs file for test
Do you wish to submit job to cluster (y/n)
y
EXAMPLE2 [PLEASE TAKE NOTE OF THE EMPTY RESPONSES LEFT AT SOME PROMPTS]
Enter research programme name
CHEM0100
Enter default filename for all file options (without any extension to the file)
test
Enter total walltime (hour:minute)

Enter email address
testing@gmail.com
Generated pbs file for test
Do you wish to submit job to cluster (y/n)
y
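A generated GPU job file would target the gpu_1 queue mentioned above. The sketch below is again illustrative only: the resource request, GPU count, GROMACS module name and mdrun invocation are assumptions.

#!/bin/bash
#PBS -P CHEM0100                      # research programme name from the prompt
#PBS -N test                          # job name from the default filename
#PBS -q gpu_1                         # the GPU queue you must first be added to
#PBS -l select=1:ncpus=10:ngpus=1     # resource request is an assumption
#PBS -l walltime=2:00:00              # walltime from the prompt
#PBS -m abe
#PBS -M testing@gmail.com             # email address from the prompt

module load chpc/gromacs/2021.1_gpu   # module name is an assumption
cd $PBS_O_WORKDIR

# single-GPU mdrun invocation is illustrative
gmx mdrun -deffnm test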