Co-array Fortran allows the programmer to develop distributed memory parallel Fortran programs without having to use MPI explicitly. Parallelisation is entirely contained within the Fortran language, using a PGAS (partitioned global address space) approach. In practice, Co-array Fortran is conveniently implemented by using OpenCoarrays, which is an interface built on top of MPI.
OpenCoarrays-1.8.4 has been installed in
/apps/chpc/compmech/OpenCoarrays-1.8.4 and was compiled with mpich-3.2 and gcc-6.2.0. A module has been created, so simply adding:
module add chpc/compmech/OpenCoarrays/1.8.4
to your scripts will set up the appropriate paths to OpenCoarrays.
Co-array programs are compiled with the
caf wrapper, much as
mpif90 is used to compile MPI programs. The compiled binary is then executed with the
cafrun wrapper, similar to mpirun.
cafrun in this case accepts the same command-line options as the
mpirun wrapper supplied by mpich. Please bear in mind that mpich will not use the InfiniBand network by default. To use the InfiniBand network, use the following syntax:
caf -o hello helloWorld.f90
cafrun -np 48 -machinefile $PBS_NODEFILE -iface ib0 ./hello
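The helloWorld.f90 referenced above could be a minimal co-array program along the following lines; the program contents are a sketch for illustration, not part of the installed software. It uses only the standard Fortran 2008 intrinsics this_image() and num_images():

```fortran
! Minimal Co-array Fortran example: each image reports itself.
program hello_world
  implicit none
  integer :: me, total

  me    = this_image()   ! index of this image, from 1 to num_images()
  total = num_images()   ! total number of images started by cafrun

  write(*,'(a,i0,a,i0)') 'Hello from image ', me, ' of ', total
end program hello_world
```

Launched with cafrun -np 48 as above, each of the 48 images prints its own line, in no guaranteed order.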
For what it's worth, mpich is aware of PBS: by default it will start as many images (Co-array speak for MPI's ranks) as the number of MPI processes requested by qsub, and will also distribute them across nodes according to the PBS allocation contained in $PBS_NODEFILE. The following syntax therefore produces exactly the same output, if 2 full nodes were requested by qsub:
caf -o hello helloWorld.f90
cafrun -iface ib0 ./hello
Please note that this will not necessarily work when using fewer than the full number of cores per node. It is good practice to state the number of images explicitly and to provide a machinefile.
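Putting the pieces together, a complete PBS job script could look like the sketch below. The queue name, resource selection, and job name are illustrative assumptions and should be adapted to your own allocation; the module name and commands are those given above:

```shell
#!/bin/bash
#PBS -l select=2:ncpus=24:mpiprocs=24   # assumption: 2 full 24-core nodes
#PBS -q normal                          # assumption: queue name
#PBS -N caf_hello

cd $PBS_O_WORKDIR

# Set up paths to OpenCoarrays (module name from this guide)
module add chpc/compmech/OpenCoarrays/1.8.4

# Compile, then run with an explicit image count and machinefile,
# forcing the InfiniBand interface
caf -o hello helloWorld.f90
cafrun -np 48 -machinefile $PBS_NODEFILE -iface ib0 ./hello
```

This is a cluster-specific job-script fragment and is not runnable outside the PBS environment.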