Hardware specifications of cluster compute nodes


  • The LUIS computing cluster is a heterogeneous general purpose system designed for a variety of workloads.
  • We currently run two separate scheduling/batch systems, Maui/PBS/Torque and SLURM. We will gradually move partitions running under Maui/PBS to SLURM.
  • By policy, compute nodes cannot access the internet. If you need an exception to this rule, contact cluster support with the IP address(es), port number(s) and protocol(s) needed, as well as the duration and a contact person.
  • Jobs can have a maximum duration of 200 hours. A user may not have more than 64 jobs (or 768 CPU cores) running simultaneously, nor more than 2000 jobs in the queue at any given moment. An example batch script is shown after this list.
  • All nodes in a sub-cluster are interconnected using a non-blocking fat-tree Mellanox InfiniBand network (at least QDR).
  • The NFS-based HOME storage system (for home directories) and the Lustre-based BIGWORK storage system (for temporary files during computations) are available on all compute nodes.
  • The “FCH” line in the second table below aggregates all the nodes we run for various institutes of the LUH under the conditions of the “Forschungsclusterhousing” service. They contribute significant additional computing power to the cluster, usually during the night and over the weekend, but on weekdays they are usually reserved exclusively for institute accounts. This model also means that jobs requesting less than 12 hours of walltime have a good chance of running during the night, and jobs requesting less than 60 hours of running over the weekend.
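
As a quick orientation, a minimal SLURM batch script that stays within the limits above might look like the following sketch. The partition name amo is taken from the table below; the job name, the module and the executable are placeholders for your own application.

  #!/bin/bash -l
  #SBATCH --job-name=example          # placeholder job name
  #SBATCH --partition=amo             # Amo partition, see the table below
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=40        # Amo nodes have 40 cores
  #SBATCH --mem=180G                  # stay below the 192 GB available per node
  #SBATCH --time=48:00:00             # well below the 200 hour limit

  module load MyApplication           # placeholder module name
  srun ./my_program                   # placeholder executable; srun starts the parallel tasks

The script is submitted with "sbatch jobscript.sh"; "squeue" shows its state in the queue.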
Partitions running under SLURM (sbatch) 1)

Cluster | Nodes | CPUs | Cores/Node | Cores Total | Memory/Node (GB) | Memory Total (GB) | Gflops/Core (theoretical) | Partition
Amo | 80 | 2x Intel Cascade Lake Xeon Gold 6230N (20-core, 2.3GHz, 30MB Cache, 125W) | 40 | 3200 | 192 | 15360 | 75 | amo
GPU | 4 | 2x Intel Xeon Gold 6230 CPU, 2x NVIDIA Tesla V100 GPU | CPU: 40, GPU: 2×5120 | CPU: 160, GPU: 40960 | CPU: 128, GPU: 2×16 | CPU: 64, GPU: 128 | | gpu
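
For the gpu partition listed above, GPUs are usually requested as generic resources in addition to CPU cores. The following is only a sketch: the GRES name gpu and the count are assumptions, and the exact GRES configuration on the cluster may differ.

  #!/bin/bash -l
  #SBATCH --job-name=gpu-example      # placeholder job name
  #SBATCH --partition=gpu             # GPU partition from the table above
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=8
  #SBATCH --gres=gpu:1                # request one of the two V100 cards (GRES name assumed)
  #SBATCH --time=24:00:00

  module load CUDA                    # placeholder module name
  srun ./my_gpu_program               # placeholder executable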
Partitions running under Maui/PBS/Torque (qsub)

Cluster | Nodes | CPUs | Cores/Node | Cores Total | Memory/Node (GB) | Memory Total (GB) | Gflops/Core (theoretical) | Partition | Queue
Dumbo | 18 | 4x Intel Ivy Bridge Xeon E5-4650 v2 (10-core, 2.40GHz, 25MB Cache, 95W) | 40 | 720 | 512 | 9216 | 19 | dumbo | all
Haku | 20 | 2x Intel Broadwell Xeon E5-2620 v4 (8-core, 2.10GHz, 20MB Cache, 85W) | 16 | 320 | 64 | 1280 | 34 | haku | all
Lena | 80 | 2x Intel Haswell Xeon E5-2630 v3 (8-core, 2.40GHz, 20MB Cache, 85W) | 16 | 1280 | 64 | 5120 | 38 | lena | all
Taurus | 24 | 2x Intel Skylake Xeon Gold 6130 (16-core, 2.10GHz, 22MB Cache, 125W) | 32 | 768 | 128 | 3072 | 67 | taurus | all
SMP | 4 | 4x Intel Broadwell-EP Xeon E5-4655 v4 (8-core, 2.5GHz, 30MB Cache, 135W) | 32 | 128 | 256 | 1024 | 40 | smp | all
SMP | 9 | 4x Intel Westmere-EX Xeon E7-4830 (8-core, 2.13GHz, 24MB Cache, 105W) | 32 | 288 | 256 | 2304 | 8.4 | smp | all
SMP | 9 | 4x Intel Beckton Xeon E7540 (6-core, 2.00GHz, 18MB Cache, 105W) | 24 | 216 | 256 | 2304 | 8.0 | puresmp | all
SMP | 3 | 4x Intel Westmere-EX Xeon E7-4830 (8-core, 2.13GHz, 24MB Cache, 105W) | 32 | 96 | 1024 | 3072 | 8.5 | helena | 
FCH | 89 | | | 2464 | | 20000 2) | | | 
GPU | 1 | 2x Intel Xeon Silver 4116 CPU, 2x NVIDIA Tesla P100 GPU | CPU: 24, GPU: 2×7168 | CPU: 24, GPU: 7168 | CPU: 96, GPU: 2×16 | CPU: 96, GPU: 32 | | gpu | 
1) See the section about SLURM usage.
2) This line aggregates all the partitions of the institutes participating in the FCH service; there is no partition called FCH.
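
Jobs for the Maui/PBS/Torque partitions are submitted with qsub. The following is a minimal sketch using the queue all from the table above; the module name and the executable are placeholders, and the resource values only illustrate the syntax.

  #!/bin/bash -l
  #PBS -N example                     # placeholder job name
  #PBS -q all                         # queue from the table above
  #PBS -l nodes=1:ppn=16              # e.g. one 16-core Lena or Haku node
  #PBS -l walltime=24:00:00           # must not exceed the 200 hour limit
  #PBS -l mem=60gb

  cd $PBS_O_WORKDIR                   # Torque starts the job in the home directory
  module load MyApplication           # placeholder module name
  ./my_program                        # placeholder executable

The script is submitted with "qsub jobscript.sh"; "qstat" shows its state in the queue.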