| | Noctua 2 | Noctua 1 |
|---|---|---|
System | Atos BullSequana XH2000 | Cray CS500 |
Processor Cores | 143,872 | 10,880 |
Total Main Memory | 347.5 TiB | 51 TiB |
Floating-Point Performance | CPU: 5.4 PFLOPS DP Peak (4.19 PFLOPS Linpack); GPU: 2.49 PFLOPS DP Tensor Core Peak (approx. 1.7 PFLOPS Linpack); see the peak estimate below the table | CPU: 841 TFLOPS DP Peak (535 TFLOPS Linpack) |
Cabinets | 12 racks with direct liquid cooling, 7 racks with air cooling (four of them with active backdoor cooling) | 5 racks with active backdoor cooling, 1 rack with air cooling |
Communication Network CPUs | Mellanox InfiniBand HDR 100/200 Gbps, 1:2 blocking factor | Intel Omni-Path, 100 Gbps, 1:1.4 blocking factor |
Storage System | DDN ExaScaler 7990X with NVMe accelerator; Lustre file system with 6 PB capacity | Cray ClusterStor L300N with NXD flash accelerator; Lustre file system with 720 TB capacity |
Compute Nodes | | |
Number of Nodes | 990 | 256 |
CPUs per Node | 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz | 2x Intel Xeon Gold "Skylake" 6148, 2.4 GHz |
Cores per Node | 128 | 40 |
Main Memory | 256 GiB | 192 GiB |
Large Memory Nodes | | |
Number of Nodes | 66 | - |
CPUs per Node | 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz | - |
Cores per Node | 128 | - |
Main Memory | 1024 GiB | - |
Huge Memory Nodes | | |
Number of Nodes | 5 | - |
CPUs per Node | 2x AMD Milan 7713, 2.0 GHz, up to 3.675 GHz | - |
Cores per Node | 128 | - |
Main Memory | 2048 GiB | - |
Local Storage | 34 TiB of local SSD storage (12x 3.2 TB NVMe SSDs, ~70 GB/s) | - |
GPU Nodes | | |
Number of Nodes | 32 | 18 |
CPUs per Node | 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz | 2x Intel Xeon Gold "Skylake" 6148(F), 2.4 GHz |
Cores per Node | 128 | 40 |
Main Memory | 512 GiB | 192 GiB |
Accelerators per Node | 4x NVIDIA A100 with NVLink and 40 GB HBM2 | 2x NVIDIA A40, each 48 GB GDDR6, 10,752 CUDA cores, 336 Tensor cores |
GPU-Development Nodes | | |
Number of Nodes | 1 | - |
CPUs per Node | 2x AMD EPYC Rome 7742, 2.25 GHz, up to 3.4 GHz | - |
Cores per Node | 128 | - |
Main Memory | 1024 GiB | - |
Accelerators per Node | 8x NVIDIA A100 with NVLink and 40 GB HBM2 | - |
FPGA Nodes | | |
Number of Nodes | 36 | - |
CPUs per Node | 2x AMD Milan 7713, 2.0 GHz, up to 3.675 GHz | - |
Cores per Node | 128 | - |
Main Memory | 512 GiB | - |
Accelerators (total) | 48x Xilinx Alveo U280 FPGAs with 8 GiB HBM2 and 32 GiB DDR memory; 32x Intel Stratix 10 GX 2800 FPGAs with 32 GiB DDR memory (Bittware 520N cards) | - |
FPGA-to-FPGA Communication Networks | | |
Optical Switch | CALIENT S320 Optical Circuit Switch (OCS), 320 ports | - |
Ethernet Switch | Huawei CloudEngine CE9860, 128-port Ethernet switch | - |
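
The theoretical CPU peak quoted in the table can be roughly reproduced from the node counts and clock rates listed above. The sketch below is a minimal back-of-the-envelope estimate, assuming 16 double-precision FLOPs per cycle per Zen 3 core (two 256-bit FMA units), a figure that is not stated in the table itself; it covers only the 990 standard compute nodes and therefore lands somewhat below the 5.4 PFLOPS total for all CPU partitions.

```python
# Back-of-the-envelope double-precision peak for the Noctua 2 standard
# compute partition. Node counts and clocks are taken from the table above;
# the 16 DP FLOPs/cycle/core figure for AMD Milan (Zen 3) is an assumption.

NODES = 990                  # standard compute nodes
CORES_PER_NODE = 128         # 2x AMD Milan 7763
BASE_CLOCK_HZ = 2.45e9       # 7763 base clock
DP_FLOPS_PER_CYCLE = 16      # assumed: two 256-bit FMA units per core

peak = NODES * CORES_PER_NODE * BASE_CLOCK_HZ * DP_FLOPS_PER_CYCLE
print(f"Compute-node DP peak: {peak / 1e15:.2f} PFLOPS")  # ~4.97 PFLOPS
```

The large-memory, huge-memory, GPU, and FPGA nodes contribute the remaining share of the quoted 5.4 PFLOPS CPU peak; the measured Linpack figures are, as usual, below the theoretical peak.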