
File systems


The cluster has three storage systems. Each provides one of the file systems that are important for your work: $HOME, $BIGWORK and $PROJECT.

The first two of these – $HOME and $BIGWORK – are mounted on all nodes in the cluster. $PROJECT is only available on the login nodes and the transfer node. Some relevant properties of the file systems are summarised in the figure below.

Fig. 1: Cluster file systems with specifications

Usage of the three main file systems

$HOME Your home directory is the directory in which you are normally placed directly after a login from the command line. You should only place small and particularly important files here that would be especially time-consuming to create anew, like templates for job setups, important scripts, configuration files for software etc. $HOME is provided by only one server that is connected via Gigabit Ethernet to the remainder of the system. In comparison to $BIGWORK, this file system is very slow, and in particular it is unsuitable for data-intensive tasks. If you overload $HOME, all other users will notice.

Please note: If you overstep your quota (see the following section) in this directory – which can happen very easily and quickly if one of your compute jobs writes large amounts of data and you accidentally or even (don’t do that!) intentionally write to $HOME – you will make yourself unable to log in graphically with tools like X2Go, and you will then need to delete files before you can continue to work.

So you should only place important files here that would be particularly difficult or laborious to re-create after a data loss. You should not access data in $HOME at all from within your compute jobs: due to the technical properties described above, this would quite probably both make your job take much longer to complete and impede other users’ work. In addition, carefully redirect all directories that an application may have set automatically – quite often these are temporary directories – to $BIGWORK. Also, do NOT try to use a symbolic link between $BIGWORK and $HOME in a compute job to creatively cheat around these restrictions. That does not help either; the impact is almost the same as writing to $HOME directly. Use the environment variable $BIGWORK for convenient access in a computation, and otherwise avoid anything that is in your $HOME. Also have a look at the exercise.
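The advice above can be sketched as a fragment of a batch job script. The variable name SCRATCH and the application options are hypothetical, and $SLURM_JOB_ID assumes a Slurm-like scheduler – adapt both to your environment:

```shell
#!/bin/bash
# Sketch: keep all job I/O on $BIGWORK, never on $HOME.
# $SLURM_JOB_ID and the application flags are assumptions;
# adapt them to your scheduler and your software.

# create a per-job scratch directory on $BIGWORK
SCRATCH="$BIGWORK/job-${SLURM_JOB_ID:-manual}"
mkdir -p "$SCRATCH"
cd "$SCRATCH"

# point the application's temporary and output directories here,
# instead of letting them default to somewhere under $HOME, e.g.:
# my_app --tmpdir "$SCRATCH" --output "$SCRATCH/results"
```

Using a per-job subdirectory also keeps output from concurrent jobs separated, which makes cleaning up afterwards easier.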

Please note: Memory aid: Never WORK at HOME. Just don’t do it. Do not try to abuse $HOME as a backup directory for your compute results; it simply is not built for that. DO NOT copy files created on $BIGWORK to your $HOME merely so that you can transfer them graphically via the weblogin service. Use the proper tools to transfer files; do not shoot yourself in the foot and find out that it hurts. Almost every week, we get two new cases of users who created problems for themselves by not respecting this rule – try not to be one of them.

Due to the limits in the amount of data and the implicit restriction to slowly changing data, the system can provide a daily backup of data in $HOME.

$HOME is mounted on all nodes in the cluster (all nodes see the same $HOME file system).

$BIGWORK Your bigwork directory. This provides a comparatively high amount of storage space. Because of the amount of data, and because the data changes relatively fast, it is not possible to provide a backup for it. The major part of your work takes place in this directory; as a rule, all computations should be done here. $BIGWORK is connected via InfiniBand and thus inherently much faster than $HOME, and it is provided by a whole group of servers to increase performance. It is also referred to as ‘scratch’.

$BIGWORK is mounted on all nodes in the cluster (all nodes see the same $BIGWORK file system).

$PROJECT Your project directory. Each project gets assigned a project directory to which all members of the project have read and write access at /project/<your-cluster-groupname> (the environment variable $PROJECT points to this location). In order to store individual user accounts’ data in this directory in such a manner that everyone can keep track of what belongs to whom, we suggest each account create its own subdirectory named $PROJECT/$USER and set suitable access rights (like mkdir -m 0700). The group’s quota for the project directory is usually set to 10 TB. The directory is only available on the login nodes and the transfer node, which means you cannot use it from within jobs on the compute nodes (please use $BIGWORK for that purpose). Consequently, copying between $BIGWORK and $PROJECT is only possible on the login nodes and – preferably, because it is designated for such tasks – the transfer node. On these nodes, a fast connection between both file systems is available. The project storage is a separate storage system apart from $BIGWORK with a high-bandwidth connection (Lustre, InfiniBand). It is intended for long-term retention of both input data and results, and it is configured in such a way that all data are physically written in two copies. Due to the amount of data, however, no additional backup is provided.

Please note: Backing up your data regularly from $BIGWORK to $PROJECT storage or to your institute’s server is essential, since $BIGWORK is designed as scratch file system.

Quota and grace time

On the storage systems, only a fraction of the whole disk space is made available to you or your account, respectively. This amount is designated as quota. There is a soft quota and a hard quota. The hard quota is an upper bound which cannot be exceeded. The soft quota, on the other hand, may be exceeded for some time – the so-called grace time. Exceeding your soft quota starts the grace time. During this grace time, you are allowed to exceed your soft quota, up to your hard quota. After this period, you will not be able to store any more data unless you reduce disk space usage below the soft quota. As soon as your disk space consumption falls below the soft quota, your grace time counter for that file system and that parameter is reset.

The quota mechanism protects users and system against possible errors of others, limits the maximal disk space available to an individual user, and keeps the system performance as high as possible. In general, we ask you to please delete files which are no longer needed. Low disk space consumption is especially helpful on $BIGWORK in order to optimise system performance. You can query your disk space usage and quota with the command checkquota – see also exercise.

Please note: If your quota is exhausted on $HOME, you will not be able to login graphically using X2Go any more. Connecting using ssh (without -X) will still be possible.

Bigwork’s file system Lustre and stripe count

Please note: All statements made in this section also apply to $PROJECT storage

On the technical level, $BIGWORK consists of multiple components which make up the storage system. Generally speaking, it is possible to use $BIGWORK without changing any default values. However, it may be useful under certain circumstances to change the so-called stripe count. For larger files and parallel computations that access different parts of the same file, this may result in higher performance and a better-balanced use of the overall system, which in turn is beneficial for all users.

Data on $BIGWORK is saved on OSTs, Object Storage Targets. Each OST in turn consists of a number of hard disks. By default, files are written to a single OST each, regardless of their size. This corresponds to a stripe count of one. The stripe count determines how many OSTs will be used to store data, figuratively speaking: in how many stripes a file is being split. Splitting data over multiple OSTs can increase access speeds, since the read and write speeds of several OSTs and thus a higher number of hard drives are used in parallel. At the same time, one should only distribute large files in this way, because access times can also increase if you have too many small requests to the file system. Depending on your personal use case you may need to experiment somewhat to find out the best setting.

Please note: If you are working with files larger than 1 GB, and for which access times, e.g. from within parallel computations, could significantly contribute to the total duration of a compute job, please consider setting the stripe count manually as described in the following sections.

Stripe count is set as an integer value representing the number of OSTs to use, with -1 indicating all available OSTs. It is advised to create a directory below $BIGWORK and set a stripe count of -1 for it. This directory can then be used e.g. to store all files that are larger than 100 MB. For files significantly smaller than 100 MB, the default stripe count of one is both better and sufficient.

Please note: In order to alter the stripe count of existing files, they need to be copied, see the advanced exercise below. Simply moving files with mv is not sufficient in this case.

Environment variable $TMPDIR

Within jobs, $TMPDIR points to local storage available directly on each node. Whenever local storage is needed, $TMPDIR should be used.

Please note: As soon as a job finishes, all data stored under $TMPDIR will be deleted automatically.

Do not simply assume $TMPDIR to be faster than $BIGWORK – test it. $TMPDIR can be used for temporary files for applications that imperatively require a dedicated temporary directory.
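Inside a job, $TMPDIR is typically used with a stage-in/stage-out pattern: copy input data in, work locally, and copy the results back before the job ends. The following sketch illustrates this; "input.dat" and the sort command stand in for your real data and application:

```shell
# Sketch: stage data into node-local $TMPDIR, work there,
# and copy results back before the job finishes.
cp "$BIGWORK/input.dat" "$TMPDIR/"
cd "$TMPDIR"

# placeholder for your actual computation
sort input.dat > output.dat

# copy results back -- $TMPDIR is deleted when the job ends
cp output.dat "$BIGWORK/"
```

Remember that the copy back must happen within the job's time limit; results left in $TMPDIR are lost.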

Exercise: Using file systems

# where are you? lost? print working directory!
pwd

# change directory to your bigwork/project/home directory
cd $BIGWORK
cd $PROJECT
cd $HOME

# display your home, bigwork & project quota
checkquota

# make personal directory in your group's project storage
# set permissions (-m) so only your account can access
# the files in it (0700)
mkdir -m 0700 $PROJECT/$USER

# copy the directory mydir from bigwork to project
cp -r $BIGWORK/mydir $PROJECT/$USER

Advanced Exercise: setting stripe count

# get overall bigwork usage, note different fill levels
lfs df -h

# get current stripe settings for your bigwork
lfs getstripe $BIGWORK

# change directory to your bigwork
cd $BIGWORK

# create a directory for large files (anything over 100 MB)
mkdir LargeFiles

# get current stripe settings for that directory
lfs getstripe LargeFiles

# set stripe count to -1 (all available OSTs)
lfs setstripe -c -1 LargeFiles

# check current stripe settings for LargeFiles directory
lfs getstripe LargeFiles

# create a directory for small files
mkdir SmallFiles

# check stripe information for SmallFiles directory
lfs getstripe SmallFiles

Use the newly created LargeFiles directory to store large files.

Advanced Exercise: altering stripe count

Sometimes you might not know beforehand how large the files created by your simulations will turn out to be. In this case you can set the stripe count after a file has been created, in two ways. Let us create a 100 MB file first.

# enter the directory for small files
cd SmallFiles

# create a 100 MB file
dd if=/dev/zero of=100mb.file bs=10M count=10

# check filesize by listing directory contents
ls -lh

# check stripe information on 100mb.file
lfs getstripe 100mb.file

# move the file into the large files directory
mv 100mb.file ../LargeFiles/

# check if stripe information of 100mb.file changed
lfs getstripe ../LargeFiles/100mb.file

# remove the file
rm ../LargeFiles/100mb.file

In order to change the stripe count, the file has to be copied (cp). Simply moving (mv) the file will not affect the stripe count.

First method:

# from within the small files directory
cd $BIGWORK/SmallFiles

# create a 100 MB file
dd if=/dev/zero of=100mb.file bs=10M count=10

# copy file into the LargeFiles directory
cp 100mb.file ../LargeFiles/

# check stripe in the new location
lfs getstripe ../LargeFiles/100mb.file

Second method:

# create empty file with appropriate stripe count
lfs setstripe -c -1 empty.file

# check stripe information of empty file
lfs getstripe empty.file

# copy file "in place"
cp 100mb.file empty.file

# check that empty.file now has a size of 100 MB
ls -lh

# remove the original 100mb.file and work with empty.file
rm 100mb.file
guide/file_systems.txt · Last modified: 2021/07/01 09:08 by zzzzgaus