Hardware specifications of cluster compute nodes
This page provides an overview of the compute cluster's hardware specifications, including node details, partitions, and configuration settings. We strive to keep this information up to date, but changes may occur. For the most accurate and current cluster details, please run the clusterinfo command on a login node.
The LUIS computing cluster is a heterogeneous, general-purpose system designed for a variety of workloads. All nodes in a sub-cluster (“partition”) are interconnected by a non-blocking fat-tree Mellanox InfiniBand network (at least QDR). We use SLURM as the job scheduler.
By policy, the compute nodes cannot access the internet outside the computing cluster. Exceptions can only be granted for destinations within the LUH network. If you need such an exception, contact cluster support and state the IP address, port number(s), protocol(s), and account name(s) that should be allowed to use the exception, as well as a contact person and the reason for and duration of the exception. However, the compute nodes do have access to the cloud storage systems provided by LUIS. For detailed information, please refer to the Rclone usage instructions.
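As an illustration only (the remote name "luis-cloud" below is a placeholder; an actual remote must first be set up with rclone config as described in the Rclone usage instructions), transferring data between a compute node and the cloud storage could look like this:

    rclone lsd luis-cloud:                                # list top-level directories on the remote
    rclone copy ./results luis-cloud:myproject/results    # upload a local results directory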
You will notice that the columns “(useable) Memory/Node (MB)” and “Memory Total (GB)” differ slightly: the former is the memory configured in the batch scheduler SLURM as available to jobs, while the latter is the total physical memory per node. The SLURM value is smaller because the operating system needs memory, too. To authoritatively find out the maximum allocatable memory per node in SLURM, use the clusterinfo -n command on a login node.
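If you prefer to query SLURM directly, the following standard SLURM commands should report the memory configured per node (the exact output format depends on your SLURM version; clusterinfo -n remains the authoritative source):

    sinfo -N -o "%N %P %m"                              # node name, partition, SLURM-configured memory (MB)
    scontrol show node <nodename> | grep RealMemory     # memory of a specific node (<nodename> is a placeholder)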
Nodes running in the “FCH” service (“Forschungscluster-Housing”, nodes owned by institutes and integrated into the cluster) are too varied to be listed in these tables. They contribute significant additional capacity to the cluster, mostly at night and over the weekend, but are usually reserved exclusively for institute accounts on weekdays. Your jobs have a chance of running on these nodes overnight if they request less than 12 hours of walltime, or on weekends if they request less than 60 hours. You can find out more about the nodes in this part of the cluster using the clusterinfo command on a login node.
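For example, a job script that stays below the 12-hour limit, and is therefore also eligible for idle FCH nodes overnight, might request its resources along these lines (the resource values and program name are placeholders, not recommendations):

    #!/bin/bash
    #SBATCH --time=11:00:00    # walltime below 12 hours
    #SBATCH --ntasks=1
    #SBATCH --mem=4G
    srun ./my_program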
Parallel Clusters (MPP)
Large Memory Servers (SMP)
GPU Servers