Noctua 2 - Technical description
The overall setup of the Noctua 2 supercomputer, together with insights into the operation of the cluster from a hardware, software, and facility perspective, is described in the Noctua 2 paper.
| System | Atos BullSequana XH2000 |
| --- | --- |
| Processor Cores | 143,872 |
| Total Main Memory | 347.5 TiB |
| Floating-Point Performance | CPU: 5.4 PFLOPS DP peak (4.19 PFLOPS Linpack) • GPU: 2.49 PFLOPS DP Tensor Core peak (approx. 1.7 PFLOPS Linpack) |
| Cabinets | 12 racks with direct liquid cooling, 7 racks with air cooling, four of them with active rear-door cooling |
| Compute Nodes | 990 nodes, each with • 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz • 2x 64 cores • 256 GiB main memory |
| Compute Nodes (Large Memory) | 66 nodes, each with • 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz • 2x 64 cores • 1024 GiB main memory |
| Compute Nodes (Huge Memory) | 5 nodes, each with • 2x AMD Milan 7713, 2.0 GHz, up to 3.675 GHz • 2x 64 cores • 2 TiB main memory • 34 TiB SSD-based memory • 12x 3.2 TB NVMe SSDs, ~70 GB/s |
| GPU Nodes | 32 nodes, each with • 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz • 2x 64 cores • 512 GiB main memory • 4x NVIDIA A100 with NVLink and 40 GB HBM2 |
| NVIDIA DGX-A100 | 1 node with • 2x AMD EPYC Rome 7742, 2.25 GHz, up to 3.4 GHz • 2x 64 cores • 1024 GiB main memory • 8x NVIDIA A100 with NVLink and 40 GB HBM2 |
| FPGA Nodes | 36 nodes, each with • 2x AMD Milan 7713, 2.0 GHz, up to 3.675 GHz • 2x 64 cores • 512 GiB main memory • 80 FPGA cards in total: 48x Xilinx Alveo U280 FPGA with 8 GiB HBM2 and 32 GiB DDR memory, 32x Intel Stratix 10 GX 2800 FPGA with 32 GiB DDR memory (Bittware 520N cards) |
| Communication Network (CPUs) | Mellanox InfiniBand HDR, 100/200 Gb/s, 1:2 blocking factor |
| Communication Network (FPGAs) | Application-specific interconnect • 4x 40 Gbps links per FPGA • connected via CALIENT S320 Optical Circuit Switch (OCS) • configurable point-to-point connections to any other FPGA • accessible as serial channels from the OpenCL BSP |
| Storage System | DDN EXAScaler 7990X with NVMe accelerator |
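
The peak figures in the table can be cross-checked with a rough back-of-envelope calculation from the node counts listed above. The sketch below is illustrative only and relies on two assumptions not stated on this page: a Zen 3 (Milan) core retires 16 DP FLOPs per cycle (2x 256-bit FMA units) at its base clock, and one NVIDIA A100 delivers 19.5 TFLOPS FP64 Tensor Core peak. The GPU estimate (128 A100s in the GPU partition) reproduces the 2.49 PFLOPS figure; the CPU estimate lands slightly above the published 5.4 PFLOPS, which presumably counts a somewhat different subset of nodes or clock assumption. Linpack values are measured and cannot be derived this way.

```python
# Rough cross-check of the peak floating-point figures above (illustrative only).
# Assumptions not taken from this page: 16 DP FLOPs per Milan core and cycle,
# and 19.5 TFLOPS FP64 Tensor Core peak per NVIDIA A100 (datasheet value).

DP_FLOPS_PER_CYCLE = 16      # per Zen 3 core (assumption)
A100_FP64_TC_TFLOPS = 19.5   # per GPU (NVIDIA A100 datasheet)

# (nodes, cores per node, base clock in GHz) for the Milan-based node types
milan_nodes = [
    (990, 128, 2.45),  # compute nodes, 2x AMD Milan 7763
    (66,  128, 2.45),  # large-memory nodes, 2x AMD Milan 7763
    (5,   128, 2.00),  # huge-memory nodes, 2x AMD Milan 7713
    (32,  128, 2.45),  # GPU-node host CPUs, 2x AMD Milan 7763
    (36,  128, 2.00),  # FPGA-node host CPUs, 2x AMD Milan 7713
]

# GHz x FLOPs/cycle gives GFLOPS per core; 1e6 GFLOPS = 1 PFLOPS
cpu_peak_pflops = sum(
    nodes * cores * ghz * DP_FLOPS_PER_CYCLE for nodes, cores, ghz in milan_nodes
) / 1e6

gpus = 32 * 4  # A100s in the GPU partition; the DGX-A100 node adds 8 more
gpu_peak_pflops = gpus * A100_FP64_TC_TFLOPS / 1e3

print(f"CPU DP peak (all Milan nodes, base clock): ~{cpu_peak_pflops:.1f} PFLOPS")
print(f"GPU FP64 Tensor Core peak (128x A100):     ~{gpu_peak_pflops:.2f} PFLOPS")
```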