====== File Systems ======
:!: There is a new Lustre parallel file system mounted on ''/mnt/lustre3p''.  The old Lustre on ''/mnt/lustre'' will be decommissioned after 1 June 2021, so **//you must copy over the files you will need by// 31 May 2021.** See the [[howto:datamigration|data migration how to]] for more information and guidance on how to copy your data.
  
The primary file systems available to users are:
  
^ Name  ^ Mount point  ^ File System  ^ Size  ^ Quota  ^ Backup  ^ Access  ^
| "Home"  | ''/home''  | NFS  | 80 TB  | 15 GB  | No*  | Yes  |
| **"Lustre"**  | **''/mnt/lustre3p/users''**  | Lustre  | **3 PB**  | none  | No  | Yes  |
| "//Old// Lustre"  | ''/mnt/lustre''  | Lustre  | 4 PB  | none  | No  | **To be //decommissioned// after 31 May 2021**  |
| "Apps"  | ''/apps''  | NFS  | 20 TB  | none  | Yes  | On request  |
| "Groups"  | ''/mnt/lustre3p/groups''  | Lustre  | **1 PB**  | yes*  | No  | On request only  |
  
  
=====Lustre=====
The Lustre file system is a high performance parallel file system which is to be used for all running jobs on the Lengau cluster.
  
This is a work space, or "scratch", file system and is **not** intended for long-term storage.  Any files you store on Lustre must be required for your jobs.  Files that have not been //accessed// for more than 90 days will be considered stale and will be deleted.  The data deletion process only removes files, not directories; the empty directories are removed 7 days later by a separate process.

If your workflow relies on directories that may have been deleted by this clean-up, make it more robust by including the necessary ''mkdir -p'' command in your job script: if the directory has been removed it is re-created, and if it already exists the command does no harm.
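A minimal job-script fragment illustrating this pattern (the directory names are examples only — substitute your own layout):

```shell
#!/bin/bash
# Re-create the working directories at the start of every job, so the
# job still runs even after the stale-file clean-up has removed them.
# "output" and "logs" are example names, not a required layout.
mkdir -p output logs
```

''mkdir -p'' also creates any missing parent directories, and exits silently (and successfully) when the directory already exists, so it is safe to run on every job submission.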
  
Note that Lustre has a complex structure and is best suited for large files and parallel access.  See the separate [[guide:lustre|Lustre guide]] for more information and important tips on use.
The //working directory// for all your job scripts must be on Lustre.  Each user is allocated a directory on the Lustre file system under
  
  /mnt/lustre3p/users/username
  
where ''username'' is replaced by //your// user name.
Lustre is designed for //performance// and capacity.
  
To upload or download data to/from Lustre, use the ''scp'' login node for small files (up to ~10 GB) or the Globus system for **large** files (>10 GB).  See the [[guide:connect|Connection Guide]] for more details and important tips.
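As a rough rule of thumb, the choice can be expressed as a simple size check. The sketch below only prints a suggestion using the 10 GB threshold quoted above; the actual host names, ''scp'' syntax, and Globus endpoints are described in the Connection Guide.

```shell
#!/bin/sh
# Suggest a transfer tool for a file of a given size in bytes.
# Purely illustrative -- see the Connection Guide for real commands.
transfer_hint() {
    limit=$((10 * 1024 * 1024 * 1024))   # 10 GB in bytes
    if [ "$1" -le "$limit" ]; then
        echo "scp"      # small file: copy via the scp login node
    else
        echo "globus"   # large file: use the Globus system
    fi
}

transfer_hint 524288000   # a 500 MB file -> prints "scp"
```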
  
  
=====Apps=====
The Apps file system contains application codes, libraries, development tools, compilers, etc., installed by the CHPC for our users.
  
This system is accessed through the environment modules tool ''module''.  Examples are provided in the [[quick:start|Quick Start Guide]].
  
=====Groups=====
  
By special arrangement, research groups may store files and data shared by members of that group for a longer term (up to 365 days).  This sub-directory of Lustre is limited to 1 PB (leaving 2 PB for the main Lustre work space) and is subject to a strict quota of 1 TB per group.  Where a strong motivation is made by a large research group, this quota may be increased to a maximum of 10 TB.
  
/var/www/wiki/data/attic/guide/filesystems.1589811014.txt.gz · Last modified: 2020/05/18 16:10 by kevin