Cluster System Cheat Sheet
Introduction
This document is a quick reference for the cluster system. We have assembled some (hopefully) useful information and the most frequently used commands on a single page that you can keep at hand when working with the cluster system.
Accessing the cluster system
Connect to login.cluster.uni-hannover.de, e.g. via ssh, for interactive work like editing files. Do NOT try to start computations on the login nodes; the system terminates anything using more than 30 minutes of CPU time (see ulimit -a) to keep these nodes responsive for all users.
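For example, assuming your cluster account name is <username>:
$ ssh <username>@login.cluster.uni-hannover.de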
For computations, use the batch system to submit scripts: qsub <options> <jobscript_name>.
For file transfers, use transfer.cluster.uni-hannover.de (typical commands: scp, sftp, rsync). See the cluster documentation for further information.
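For example, to copy data to and from the cluster (file and directory names are placeholders):
$ scp input.dat <username>@transfer.cluster.uni-hannover.de:mydir/
$ rsync -avz results/ <username>@transfer.cluster.uni-hannover.de:mydir/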
Batch system
An example script that can be submitted via qsub <job-script>:
#!/bin/bash -l
#PBS -N mysimulation
#PBS -M ich@mail.adresse
#PBS -l nodes=1:ppn=4
#PBS -l walltime=00:10:00
#PBS -l mem=3gb

# node the job ran on
echo "Job ran on:" $(hostname)

# load the relevant modules
module load icc

# change to working directory
cd $BIGWORK/mydir

# run the simulation
./my_simulation
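Assuming the script above is saved as myjob.sh (a placeholder name), it can be submitted and then monitored like this:
$ qsub myjob.sh
$ qstat -a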
Batch system commands
Put job into the queue
$ qsub <options> <jobscript>
Interactive batch job with GUI
$ qsub -I -X
Show all jobs
$ qstat -a
$ showq
Show all jobs with node information
$ qstat -n1
Show full information for a given job
$ qstat -f <jobid>
Delete a job from the queue
$ qdel <jobid>
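A typical sequence combining these commands (myjob.sh is a placeholder name; <jobid> is printed by qsub on submission):
$ qsub myjob.sh
$ qstat -f <jobid>
$ qdel <jobid>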
Queues & Partitions
The default queue is named all and should be sufficient for most jobs. For jobs requiring a GPU, add the line #PBS -q gpu to your job script. To select a specific partition, use #PBS -W x=PARTITION:<partition>, where <partition> is the name of a partition such as lena, taurus, haku, smp, or dumbo. Partitions provided by institutes of the LUH are usually reserved Mon-Fri 08:00-20:00, but are available to all accounts outside these times. Thus, jobs requesting less than 12 hours of walltime can use these additional resources on weekday nights; on weekends, 60 hours outside the reservations are available. lcpuarchs -vv lists the partitions, including the number of nodes and the CPU architecture.
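For example, to send a job to the gpu queue, or to pin it to the lena partition, add one of these lines to the job script:
#PBS -q gpu
#PBS -W x=PARTITION:lena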
The Modules Environment
Show all modules the cluster provides
$ module spider
Show currently loadable modules (this depends on the toolchain already loaded)
$ module avail
Load one or more modules
$ module load <modulename> <...>
Unload a module
$ module unload <modulename>
Show all currently loaded modules
$ module list
Show information about a given module
$ module show <modulename>
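For example, to load the icc compiler used in the job script above and confirm it is loaded:
$ module load icc
$ module list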
More information about the installed software is available in the cluster documentation.
Linux Commands
man <command>: Show help for <command>
ls: List directory contents
cd <directory>: Change directory
rm <file>: Delete a file
mkdir <directory>: Create directory
passwd: Change password
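For example (directory name is a placeholder):
$ mkdir mydir
$ cd mydir
$ man rsync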
Contact Information
cluster-help@luis.uni-hannover.de