As part of the upcoming upgrade to Rocky Linux 9 on our main compute cluster, we have set up a small test cluster to facilitate a smooth transition. It allows you to check the compatibility of your environments and applications and to adapt your workflows in time before we make the switch. We strongly encourage you to start testing as soon as possible.
The test cluster consists of the head node login04.cluster.uni-hannover.de and compute nodes reachable through two SLURM partitions, taurus_rl and gpu_rl.
To ensure fair access to the resources, restrictions are in place for both the taurus_rl and the gpu_rl partition. Please plan your job submissions accordingly to adhere to these limits and, if possible, stay low-profile to give everybody a chance to test.
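You can check the limits currently configured for the two partitions directly via SLURM, for example (standard SLURM commands; the exact fields shown depend on the configuration):

sinfo --partition=taurus_rl,gpu_rl
scontrol show partition taurus_rl
scontrol show partition gpu_rl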
You can access the head node login04.cluster.uni-hannover.de using the same credentials as usual. There are several methods to connect:

- SSH from a terminal:

  ssh [your_username]@login04.cluster.uni-hannover.de

  Replace [your_username] with your actual cluster username.

- Web browser: https://login04.cluster.uni-hannover.de
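Once logged in, you can confirm that you are on the Rocky Linux 9 test system, for example:

cat /etc/os-release   # should report Rocky Linux 9.x
uname -r              # kernel version of the test node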
The list of software provided via modules (Lmod) has been updated. Older versions of some software have been removed, and newer versions have been installed.
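To check whether the versions your workflow depends on are still available, you can use the usual Lmod commands (replace <name> and <version> with the package you need):

module avail                  # list modules available on the test cluster
module spider <name>          # search for a package and list its versions
module load <name>/<version>  # load a specific version for testing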
Please note the following important change to environment variables: $ARCH has been replaced by $LUIS_CPU_ARCH.

The $LUIS_CPU_ARCH variable identifies the CPU type of compute and head nodes; the rename avoids conflicts with the old $ARCH variable. The new possible values are:

- sse (replaces nehalem)
- avx (replaces sandybridge)
- avx2 (replaces haswell)
- avx512 (replaces skylake)

Ensure that scripts or environment setups using the $ARCH variable are updated to use $LUIS_CPU_ARCH and the corresponding values.
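As an illustration, a script fragment like the following could pick an architecture-specific build; the $HOME/builds/<arch> layout is only a hypothetical example, so adapt it to your own setup:

# Select a build matching the CPU architecture of the current node.
case "$LUIS_CPU_ARCH" in
  sse|avx|avx2|avx512)
    export PATH="$HOME/builds/$LUIS_CPU_ARCH/bin:$PATH"
    ;;
  *)
    echo "Unexpected value of LUIS_CPU_ARCH: '$LUIS_CPU_ARCH'" >&2
    ;;
esac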
You can submit jobs to the compute nodes using the SLURM workload manager. There are two partitions available:
An example job script for the taurus_rl partition:

#!/bin/bash -l
#SBATCH --job-name=test_job
#SBATCH --partition=taurus_rl
#SBATCH --ntasks=8
#SBATCH --nodes=2
#SBATCH --time=01:00:00

srun ./my_application

Submit it with:

sbatch my_job_script.sh
An example job script for the gpu_rl partition:

#!/bin/bash
#SBATCH --job-name=gpu_test
#SBATCH --partition=gpu_rl
#SBATCH --ntasks=8
#SBATCH --gres=gpu:1
#SBATCH --time=02:00:00

srun ./my_gpu_application

Submit it with:

sbatch my_gpu_job_script.sh
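After submitting, you can monitor your test jobs with the standard SLURM commands, for example (replace <jobid> with the ID reported by sbatch):

squeue -u $USER    # jobs currently queued or running
sacct -j <jobid>   # accounting information for a completed job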
We recommend that you first run a few interactive jobs using salloc, including jobs with different resource requirements, to ensure compatibility of your environment with Rocky Linux 9. After successfully testing your setup interactively, proceed to submitting batch jobs as shown above.
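For example, a short interactive test session could look like this (the resource values are placeholders; vary them to match your workloads):

salloc --partition=taurus_rl --ntasks=4 --time=00:30:00
# once the allocation is granted, run your application under srun
srun ./my_application
# release the allocation when you are done
exit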