
Instructions to Use the Test Rocky Linux 9 Cluster

As part of the upcoming upgrade of our main compute cluster to Rocky Linux 9, we have set up a small test cluster. It allows you to check the compatibility of your environments and applications and to adapt your workflows in time before we make the switch. We strongly encourage you to start testing as soon as possible.

Overview of the Test Cluster

Slurm Partition Restrictions

To ensure fair access to the resources, the following restrictions are in place:

Please plan your job submissions to adhere to these limits and, where possible, keep your usage modest so that everybody gets a chance to test.
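
If you want to check the current limits yourself, the standard Slurm query commands also work on the test cluster. A minimal sketch (the partition names depend on the actual configuration and are not filled in here):

  # Show each partition with its time limit, node count and node list
  sinfo -o "%P %l %D %N"

  # Show the full configuration of one partition (replace <partition> with a real name)
  scontrol show partition <partition>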

Accessing the Login Node

You can access the head node login04.cluster.uni-hannover.de using the same credentials as usual.

There are three methods to connect:

  1. SSH (Secure Shell)
    • Open a terminal on your local machine
    • Connect to the head node using the following command: ssh [your_username]@login04.cluster.uni-hannover.de
    • Replace [your_username] with your actual cluster username (an optional SSH configuration sketch is shown after this list)

  2. X2Go
    • Ensure X2Go is installed properly on your local machine (see our hints in the cluster documentation)
    • Set up a new session in X2Go using the following settings:
      • Host: login04.cluster.uni-hannover.de
      • Login: Your username
      • Session Type: XFCE
      • Connect using your normal credentials

  3. OOD (Open OnDemand) Web Platform
    • Open your web browser and navigate to the OOD portal: https://login04.cluster.uni-hannover.de
    • Log in with your cluster credentials
    • Use the web-based interface to access the head node, submit jobs, manage files and start configured applications
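
If you connect via SSH frequently, a host alias can save typing. A minimal sketch of an ~/.ssh/config entry; the alias name rl9test is a placeholder and you need to fill in your own username:

  # ~/.ssh/config
  Host rl9test                                   # placeholder alias name
      HostName login04.cluster.uni-hannover.de
      User your_username                         # replace with your cluster username

  # Afterwards you can connect with:
  ssh rl9test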

Software Modules

The list of software provided via modules (Lmod) has been updated. Older versions of some software have been removed, and newer versions have been installed.
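
You can inspect and load the updated module tree with the usual Lmod commands. The package name GCC below is only an example; available names and versions will differ:

  # List all modules available on the test cluster
  module avail

  # Search for a specific package, including all its versions
  module spider GCC

  # Load a module and check what is currently loaded
  module load GCC
  module list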

Environment Variable Change

Please note the following important change to environment variables:

The $LUIS_CPU_ARCH variable is used to identify the CPU type of compute and head nodes, replacing the old $ARCH variable to avoid conflicts. The new possible values are:

Ensure that scripts or environment setups using the $ARCH variable are updated to use $LUIS_CPU_ARCH and the corresponding values.
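
A minimal sketch of how a job or profile script might branch on the new variable. The architecture values haswell and skylake and the module name are hypothetical placeholders; use the values actually documented for the cluster:

  # Select architecture-specific settings based on $LUIS_CPU_ARCH
  case "$LUIS_CPU_ARCH" in
      haswell)                                    # hypothetical value
          module load example-app/haswell         # hypothetical module name
          ;;
      skylake)                                    # hypothetical value
          module load example-app/skylake         # hypothetical module name
          ;;
      *)
          echo "Unknown CPU architecture: $LUIS_CPU_ARCH" >&2
          ;;
  esac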

Submitting Jobs to Compute Nodes

You can submit jobs to the compute nodes using the Slurm workload manager. There are two partitions available:
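
A minimal batch script sketch for the test cluster. The partition name test, the resource requests and the module name are placeholders; adjust them to the partitions and limits listed above:

  #!/bin/bash
  #SBATCH --job-name=rl9-compat-test
  #SBATCH --partition=test          # placeholder, use an actual test-cluster partition
  #SBATCH --nodes=1
  #SBATCH --ntasks=4
  #SBATCH --time=00:30:00

  # Load the software stack you want to test
  module load GCC                   # example module, adjust as needed

  # Run your application
  srun ./my_application

Submit the script and monitor its status with:

  sbatch test_job.sh
  squeue -u $USER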

Best Practices for Testing