Hardware specifications of cluster compute nodes


| Cluster | Nodes | CPUs | Cores/Node | Cores Total (usable) | Memory/Node (MB) | Memory Total (GB) | Gflops/Core 1) | Local Disk/Node (GB) | Partition 2) |
|---------|-------|------|------------|----------------------|------------------|-------------------|----------------|----------------------|--------------|
| Amo | 80 | 2x Intel Cascade Lake Xeon Gold 6230N (20-core, 2.30 GHz, 30 MB cache, 125 W) | 40 | 3200 | 180,000 | 15360 | 75 | 400 (SATA SSD) | amo |
| Dumbo | 18 | 4x Intel Ivy Bridge Xeon E5-4650 v2 (10-core, 2.40 GHz, 25 MB cache, 95 W) | 40 | 720 | 500,000 | 9216 | 19 | 17000 (SAS HDD) | dumbo |
| Haku | 20 | 2x Intel Broadwell Xeon E5-2620 v4 (8-core, 2.10 GHz, 20 MB cache, 85 W) | 16 | 320 | 60,000 | 1280 | 34 | 80 (SATA SSD) | haku |
| Lena | 80 | 2x Intel Haswell Xeon E5-2630 v3 (8-core, 2.40 GHz, 20 MB cache, 85 W) | 16 | 1280 | 60,000 | 5120 | 38 | 180 (SATA SSD) | lena |
| Taurus | 24 | 2x Intel Skylake Xeon Gold 6130 (16-core, 2.10 GHz, 22 MB cache, 125 W) | 32 | 768 | 120,000 | 3072 | 67 | 500 (SAS HDD) | taurus |
| SMP | 9 | 2x AMD EPYC 9534 (64-core, 2.45 GHz, 256 MB cache, 280 W) | 128 | 1152 | 1,024,000 | 9216 | 40 | 800 (NVMe) | smp |
| SMP | 2 | 2x AMD EPYC 9354 (32-core, 3.25 GHz, 256 MB cache, 280 W) | 64 | 128 | 1,020,000 | 2048 | 52 | 3600 (NVMe) | helena |
| GPU | 4 | 2x Intel Xeon Gold 6230 CPU, 2x NVIDIA Tesla V100 16 GB GPU | CPU: 40, GPU: 2×5120 | CPU: 160, GPU: 40960 | CPU: 125,000 | CPU: 512, GPU: 128 | | 300 (SATA SSD) | gpu |
| GPU | 3 | 2x Intel Xeon Gold 6342 CPU, 2x NVIDIA A100 80 GB GPU | CPU: 48 | CPU: 288 | CPU: 1,025,000 | CPU: 3072 | | 3500 (NVMe) | gpu |
| GPU | 4 | 2x AMD EPYC 9555 CPU, 4x NVIDIA H200 141 GB GPU | CPU: 128 | CPU: 512 | CPU: 1,150,000 | CPU: 4490 | | 5900 (NVMe) | gpu |
| FCH | | various partitions | 12-128 | ~9000 3) | | | | | |
1) Performance values are theoretical.
2) See the section about SLURM usage.
3) This row aggregates all partitions of institutes participating in the FCH service; there is no partition called FCH. For details, run the command clusterinfo.
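The names in the Partition column are the values passed to SLURM when submitting a job (see footnote 2). As a minimal sketch, assuming a standard SLURM setup: the partition name gpu and the node resources come from the table above, while the task count, memory request, wall time, and the binary my_gpu_app are hypothetical placeholders, and the site may require further options such as an account name.

```bash
#!/bin/bash
#SBATCH --partition=gpu        # partition name from the table above
#SBATCH --nodes=1              # a single GPU node
#SBATCH --ntasks-per-node=8    # hypothetical: 8 of the node's CPU cores
#SBATCH --gres=gpu:1           # request one of the node's GPUs
#SBATCH --mem=60G              # hypothetical; keep below the Memory/Node limit
#SBATCH --time=02:00:00        # hypothetical wall-time limit

srun ./my_gpu_app              # hypothetical application binary
```

The standard SLURM command sinfo (for example, sinfo -p gpu) lists the current state of a partition's nodes; for the partitions aggregated in the FCH row, footnote 3) points to the clusterinfo command instead.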