====== Gromacs ======
  
Execute **qgromacs_2018**, **qgromacs_2018-2**, **qgromacs_2019-4** or **qgromacs_2020-1** on the login node and follow the prompts.
The script handles PBS setup and submission for you, and it sets up the MPI environment variables that allow the job to run over multiple processors and nodes.
Examples of script execution are provided below.
  
                    EXAMPLE1
  Enter project name/shortname
  CHEM0100
  Enter default filename for all file options (without any extension to the file)
  test
  Enter number of nodes on which to run job
  1
  Enter total walltime (hour:minute)
  2:00
  Enter email address
  testing@gmail.com
  Generated pbs file for test
  Do you wish to submit job to cluster (y/n)
  y
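
For orientation, the PBS file generated from the EXAMPLE1 answers above would look roughly like the sketch below. This is an illustration only: the resource request, core count and module name shown here are assumptions, and the qgromacs script writes the real file for you, so you normally never edit it by hand.

<code bash>
#!/bin/bash
# Hypothetical sketch of the PBS file generated for the EXAMPLE1 inputs.
# Resource lines and module names are assumptions, not site-confirmed values.
#PBS -P CHEM0100                       # project name entered at the prompt
#PBS -l select=1:ncpus=24:mpiprocs=24  # one node was requested (core count assumed)
#PBS -l walltime=2:00:00               # walltime entered at the prompt
#PBS -M testing@gmail.com              # email address entered at the prompt
#PBS -m abe                            # mail on abort, begin and end

cd $PBS_O_WORKDIR

# "test" was entered as the default filename, so mdrun reads test.tpr and
# writes test.trr, test.edr, test.log, etc. via -deffnm.
module load gromacs/2018               # assumed module name for the chosen version
mpirun gmx_mpi mdrun -deffnm test
</code>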
  
    EXAMPLE2 [PLEASE TAKE NOTE OF EMPTY SPACES]
  Enter project name/shortname
  CHEM0100
  Enter default filename for all file options (without any extension to the file)
  test
  Enter number of nodes on which to run job
  
  Enter total walltime (hour:minute)
  
  Enter email address
  testing@gmail.com
  Generated pbs file for test
  Do you wish to submit job to cluster (y/n)
  y
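
Once the script reports that the job has been submitted, you can track it with the standard PBS commands, for example:

<code bash>
# List your queued and running jobs.
qstat -u $USER

# Show full details of a specific job by its ID.
qstat -f <job_id>
</code>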
  
====== Gromacs on GPU ======
  
** //Please note that you must first be added to the gpu_1 queue before you can submit jobs to the GPU cluster.// **
  
Execute **qgromacs_2018-2_gpu**, **qgromacs_2018-6_gpu**, **qgromacs_2019-4_gpu** or **qgromacs_2020-1_gpu** on the login node and follow the prompts.
Examples of script execution are provided below.
  
                    EXAMPLE1
  Enter research programme name
  CHEM0100
  Enter default filename for all file options (without any extension to the file)
  test
  Enter total walltime (hour:minute)
  2:00
  Enter email address
  testing@gmail.com
  Generated pbs file for test
  Do you wish to submit job to cluster (y/n)
  y
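
As with the CPU version, the GPU script writes the PBS file for you. A rough sketch of what it might contain for the EXAMPLE1 answers is shown below; the gpu_1 queue is the one named on this page, while the resource request and module name are placeholders, not site-confirmed values.

<code bash>
#!/bin/bash
# Hypothetical sketch of a generated GPU PBS file; do not treat the
# resource lines or module name as site-confirmed values.
#PBS -P CHEM0100                    # research programme entered at the prompt
#PBS -q gpu_1                       # GPU queue you must be added to first
#PBS -l select=1:ncpus=10:ngpus=1   # assumed CPU/GPU resource request
#PBS -l walltime=2:00:00            # walltime entered at the prompt
#PBS -M testing@gmail.com
#PBS -m abe

cd $PBS_O_WORKDIR

module load gromacs/2018-2          # assumed module name
# -nb gpu runs the non-bonded interactions on the GPU; recent GROMACS
# versions will normally auto-detect and use an available GPU anyway.
gmx mdrun -deffnm test -nb gpu
</code>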
  
    EXAMPLE2 [PLEASE TAKE NOTE OF EMPTY SPACES]
  Enter research programme name
  CHEM0100
  Enter default filename for all file options (without any extension to the file)
  test
  Enter total walltime (hour:minute)
  
  Enter email address
  testing@gmail.com
  Generated pbs file for test
  Do you wish to submit job to cluster (y/n)
  y
  