
File Systems

caution

It is your responsibility as a user of our facilities to back up all your critical data. We only guarantee a daily backup of user data under /gpfs/home. Any other backup will only be done exceptionally, at the request of the interested user.

Each user has several areas of disk space for storing files. These areas may have size or time limits, so please read this section carefully to understand the usage policy of each filesystem. There are 5 different types of storage available in the cluster:

  • All-flash: mounted on /local, a shared filesystem which offers the best performance and is accessible from all nodes
  • GPFS filesystem: GPFS is a distributed networked filesystem with two partitions on this machine, gpfs_home and gpfs_projects. Both can be accessed from the login nodes and the Data Transfer Machine, but only gpfs_home is also mounted on the compute nodes.
  • S3 Storage: an extra filesystem accessible from the login nodes to store data objects
  • Local drive: every node has an internal drive
  • Root filesystem: the filesystem where the operating system resides

Shared Filesystems

On the Huawei cluster there are 2 filesystems shared between all nodes: All-flash and GPFS. The All-flash filesystem consists of an OceanStore Dorado 5000 V6 with 39TB of total capacity over NVMe disks and is mounted on /local. In addition, the IBM General Parallel File System (GPFS) is a high-performance shared-disk file system providing fast, reliable data access from all nodes of the cluster to a global filesystem.

The following mounting points are used in the cluster:

  • /local: mounts this Huawei-exclusive filesystem over a 100G link, which offers the best performance for your jobs on the cluster. Every user has their own space under /local/<unixgroup>/<username>. It is suggested to move/copy your data files here and use it as a working directory for your executions (see the sketch after this list).

  • /apps: this filesystem holds the applications and libraries that have already been installed on the machine. Take a look at its directories to know which applications are available for general use.

  • /home: this filesystem contains the home directories of all users, and when you log in you start in your home directory by default. Every user has their own home directory to store their own developed sources and personal data. A default quota is enforced on all users to limit the amount of data stored there. Running jobs from this filesystem is highly discouraged; please run your jobs from your group's /local space instead.

  • /gpfs/projects: in addition to the home directory, there is a directory in /gpfs/projects for each group of users. For instance, the group bsc01 will have a /gpfs/projects/bsc01 directory ready to use. This space is intended to store data that needs to be shared between the users of the same group or project. A quota per group is enforced depending on the space assigned by the Access Committee. It is the project manager's responsibility to determine and coordinate the best use of this space, and how it is distributed or shared between its users. This filesystem is mounted on the compute nodes, but over a 10G link shared with the rest of the GPFS filesystems, so it is highly recommended to transfer your data to /local/<unixgroup>/<username> and launch your jobs from there to get better performance.

  • /gpfs/scratch: in addition to the home directory and projects, there is a directory in /gpfs/scratch for each user. A quota per group is enforced depending on the space assigned. This filesystem is mounted on the compute nodes, but over a 10G link shared with the rest of the GPFS filesystems, so it is highly recommended to transfer your data to /local/<unixgroup>/<username> and launch your jobs from there to get better performance.
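
As referenced above, the recommended workflow is to stage your input data into your /local space and run from there. A minimal sketch, assuming a hypothetical user bsc01234 in group bsc01 with an input directory my_dataset in their home (all names are placeholders):

  # Stage input data from home to the All-flash working space (paths are placeholders)
  cd /local/bsc01/bsc01234
  mkdir -p my_run
  cp -r ~/my_dataset my_run/

  # Run from the All-flash filesystem to get the best I/O performance
  cd my_run
  ./my_application my_dataset > results.out

  # Copy relevant results back to a backed-up or shared area when done
  cp results.out /gpfs/projects/bsc01/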

S3 Storage

This storage node is exclusive to the Huawei cluster and consists of an OceanStore 100D P110 with 154TB of total capacity. Under the hood, it has 36 SATA disks of 7.277TB for data storage and 6 NVMe drives of 1.455TB for metadata. Here you can create your own space (aka bucket) and upload/download files (aka objects). For the first access you need to request the credentials to support@bsc.es and configure them as follows:

  $ aws configure
  AWS Access Key ID [None]: BE900FAC20E29D230B #Access Certificate Provided
  AWS Secret Access Key [None]: kock7ORh4d4gt/R127Tx1afWbuYGAjeSywJca1 #Security Certificate Provided
  Default region name [None]:
  Default output format [None]: json
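
If you want to double-check the result, the standard AWS CLI commands below can be used; they are not specific to this cluster and are shown here only as a convenience:

  # Show the credentials and settings the AWS CLI will use (keys are partially masked)
  $ aws configure list

  # The values are stored in plain text under your home directory
  $ cat ~/.aws/credentials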

Once configured, you can load the following module, which provides pre-defined functions that ease its usage:

  module load s3

These are the functions for using this storage node:

  • s3_bucket-create <bucket-name>: creates a new bucket with name <bucket-name> for storing files (aka objects).
  • s3_bucket-delete <bucket-name>: deletes the bucket <bucket-name> and all objects it may contain.
  • s3_bucket-list: displays the list of buckets created.
  • s3_file-put <bucket-name> <file-name>: stores the file <file-name> into the bucket <bucket-name>.
  • s3_file-get <bucket-name> <file-name>: retrieves the file <file-name> from the bucket <bucket-name>.
  • s3_file-list <bucket-name>: displays the list of files stored on the bucket <bucket-name>.
  • s3_file-delete <bucket-name> <file-name>: deletes the file <file-name> from the bucket <bucket-name>.
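
A minimal end-to-end sketch using the functions above; the bucket name my-bucket and the file results.tar.gz are placeholders:

  # Load the helper functions
  module load s3

  # Create a bucket and upload a file into it (names are placeholders)
  s3_bucket-create my-bucket
  s3_file-put my-bucket results.tar.gz

  # Check what is stored
  s3_bucket-list
  s3_file-list my-bucket

  # Retrieve the file later and clean up
  s3_file-get my-bucket results.tar.gz
  s3_file-delete my-bucket results.tar.gz
  s3_bucket-delete my-bucket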

Local Drive

Every node has one (or several) local drives that can be used as local scratch space to store temporary files during the execution of your jobs. This space is mounted on the /scratch/tmp/$JOBID directory and pointed to by the $TMPDIR environment variable. If the node also has NVMe drives available, that extra space can be accessed through the path pointed to by the $NVMEDIR variable (/nvme).

The amount of space within the /scratch filesystem depends on which node you are using. Here are the specifications for each type of node:

  • General purpose node: SAS 10K with 225GB available.
  • AI Training node: SSD with 840GB and 3 NVMe drives with 11TB in total.
  • AI Inference node: SSD with 840GB.

Data stored on these local drives at the compute nodes is not available from the login nodes. You should use the directory referred to by $TMPDIR (or $NVMEDIR, if it applies) to save your temporary files during job executions. This directory is automatically cleaned after the job finishes.
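
A minimal sketch of how a job could use this scratch space; the application and file names are placeholders, and any batch directives are omitted:

  # Inside a job script: work on the node-local scratch pointed to by $TMPDIR
  cp ~/input.dat "$TMPDIR"/
  cd "$TMPDIR"

  # Run the application against the local copy (my_application is a placeholder)
  ~/bin/my_application input.dat > output.dat

  # $TMPDIR is wiped when the job ends, so copy results back to a shared filesystem
  cp output.dat /gpfs/projects/<unixgroup>/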

Root Filesystem

The root filesystem, where the operating system is stored, has its own partition.

There is a separate partition of the local drive mounted on /tmp that can be used for storing user data, as described in the Local Drive section.