Magma is accessed by first loading the module
module load chpc/math/magma/2.26
and then simply executing the command
magma
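If you are not sure which Magma versions are installed, you can first list the matching modules. This is a quick sketch using the standard module command; the path pattern is assumed from the version loaded above and may need adjusting:
# List installed Magma modules (path pattern assumed from the version above).
module avail chpc/math/magma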
However, Magma is licensed software and is only licensed to run on the large-memory nodes (usually called the “fat” nodes). You will therefore need to request a fat node using qsub, either for an interactive session or by submitting a job script.
To request a login shell on a fat node use
qsub -I -P PROJECTID -q bigmem -W group_list=bigmemq -l select=1:ncpus=4:mpiprocs=4:nodetype=haswell_fat
where you must replace PROJECTID with your project name.
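As a sketch, if your project name were MATH0123 (a made-up name used only for illustration) the full command would read:
# MATH0123 is a hypothetical project name; substitute your own.
qsub -I -P MATH0123 -q bigmem -W group_list=bigmemq -l select=1:ncpus=4:mpiprocs=4:nodetype=haswell_fat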
If you receive the error message:
qsub: Bad GID for job execution
then you do not currently have access to the bigmem
queue and must ask your PI to request that access through our Helpdesk.
Otherwise you should see a message similar to
qsub: waiting for job 4068905.sched01 to start
Note: the job ID number (“4068905” in the above example) will be different for you.
Depending on how busy the bigmem queue is, you could wait several minutes (or hours) for the session to start, which is why it is preferable to use a job script.
When the interactive session starts you will see that your command line prompt changes from login1 to fat01 (or another fat node, fat0N). Now you can load the module as above:
module load chpc/math/magma/2.26
and start Magma with the command
magma
After you have finished running Magma and exited back to the command shell (the fat01 prompt), simply use the Ctrl-D key combination to end the interactive session.
fat04:~$ module load chpc/math/magma/2.26
fat04:~$ magma
Magma V2.26-10    Fri Mar 18 2022 09:59:31 on fat04
[Seed = 3853761563]
Type ? for help.  Type <Ctrl>-D to quit.
>
Total time: 0.399 seconds, Total memory usage: 32.09MB
Please see the CHPC Quick Start Guide and PBSPro Scheduler Guide for information on job scripts and the scheduler used on the CHPC cluster.
A video demonstrating job scripts for the bigmem queue and fat nodes is also available.
This example job script requests 8 cores on a fat node, and a maximum wall time (run time) of 1 hour and 30 minutes.
#!/bin/bash
#PBS -N magma-example
#PBS -l select=1:ncpus=8:mpiprocs=8:nodetype=haswell_fat
#PBS -l walltime=1:30:00
#PBS -P PROJECTID
#PBS -q bigmem
#PBS -W group_list=bigmemq
CWD=/mnt/lustre/users/$USER/workingdirectory
cd $CWD
module load chpc/math/magma/2.26
nproc=`cat $PBS_NODEFILE | wc -l`
magma MAGMAPROGTORUN
Note: in this script, replace PROJECTID with your project name, workingdirectory with the path of your working directory on Lustre, and MAGMAPROGTORUN with the appropriate options and arguments to the magma application.
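As a minimal sketch, assuming your Magma program is saved as myprog.m in a Lustre directory called magma-work (both names are hypothetical and used only for illustration), the tail of the job script would become:
# myprog.m and magma-work are assumed names used only for illustration.
CWD=/mnt/lustre/users/$USER/magma-work
cd $CWD
module load chpc/math/magma/2.26
magma myprog.m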
Save the file, for example as magma1.pbs, and then submit it to the scheduler with
qsub magma1.pbs
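If the submission is accepted, qsub prints the identifier of the new job and returns to the prompt; for example (the prompt and job ID number shown here are illustrative and will differ for you):
login1:~$ qsub magma1.pbs
4088494.sched01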
You can check the status of the queue with
qstat -u $USER
which should display something like:
Job id            Name             User              Time Use S Queue
----------------  ---------------- ----------------  -------- - -----
4088494.sched01   magma-example    username          0        Q bigmem
The Q
under S
(for State) indicates that this job is queued and waiting to run. A running job would look like:
Job id            Name             User              Time Use S Queue
----------------  ---------------- ----------------  -------- - -----
4088497.sched01   magma-example    username          00:38:21 R bigmem
with R indicating the running state. In both examples, username will be replaced by your cluster account user name.
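When the job completes, PBS Pro writes the job's standard output and standard error to files in the directory from which the job was submitted, named after the job name and the numeric job ID. A quick way to check the results, using file names that follow the example job above (your job ID will differ):
# Output/error file names follow the PBS defaults <jobname>.o<jobid> and
# <jobname>.e<jobid>; the job ID below matches the example above only.
ls magma-example.o* magma-example.e*
cat magma-example.o4088497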