| | Noctua 2 | Noctua 1 |
---|---|---|
System | Atos BullSequana XH2000 | Cray CS500 |
Processor Cores | 143,872 | 10,880 |
Total Main Memory | 347.5 TiB | 51 TiB |
Floating-Point Performance | CPU: 5.4 PFLOPS DP peak (4.19 PFLOPS Linpack); GPU: 2.49 PFLOPS DP Tensor Core peak (ca. 1.7 PFLOPS Linpack) | CPU: 841 TFLOPS DP peak (535 TFLOPS Linpack) |
Cabinets | 12 racks with direct liquid cooling, 7 racks with air cooling (four of them with active backdoor cooling) | 5 racks with active backdoor cooling, 1 rack with air cooling |
Communication Network (CPUs) | Mellanox InfiniBand 100/200 HDR, 1:2 blocking factor | Intel Omni-Path 100 Gbps, 1:1.4 blocking factor |
Storage System | DDN EXAScaler 7990X with NVMe accelerator; Lustre file system with 6 PB capacity | Cray ClusterStor L300N with NXD flash accelerator; Lustre file system with 720 TB capacity |
Compute Nodes | ||
Number of Nodes | 990 | 256 |
CPUs per Node | 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz | 2x Intel Xeon Gold "Skylake" 6148, 2.4 GHz |
Cores per Node | 128 | 40 |
Main Memory | 256 GiB | 192 GiB |
Large Memory Nodes | ||
Number of Nodes | 66 | - |
CPUs per Node | 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz | - |
Cores per Node | 128 | - |
Main Memory | 1024 GiB | - |
Huge Memory Nodes | ||
Number of Nodes | 5 | - |
CPUs per Node | 2x AMD Milan 7713, 2.0 GHz, up to 3.675 GHz | - |
Cores per Node | 128 | - |
Main Memory | 2048 GiB | - |
Local Storage | 34 TiB SSD-based storage (12x 3.2 TB NVMe SSDs, ~70 GB/s) | - |
GPU Nodes | ||
Number of Nodes | 32 | 18 |
CPUs per Node | 2x AMD Milan 7763, 2.45 GHz, up to 3.5 GHz | 2x Intel Xeon Gold "Skylake" 6148(F), 2.4 GHz |
Cores per Node | 128 | 40 |
Main Memory | 512 GiB | 192 GiB |
Accelerators per Node | 4x NVIDIA A100 with NVLink and 40 GB HBM2 | 2x NVIDIA A40, each 48 GB GDDR6, 10,752 CUDA cores, 336 Tensor cores |
GPU-Development Nodes | ||
Number of Nodes | 1 | - |
CPUs per Node | 2x AMD EPYC Rome 7742, 2.25 GHz, up to 3.4 GHz | - |
Cores per Node | 128 | - |
Main Memory | 1024 GiB | - |
Accelerators per Node | 8x NVIDIA A100 with NVLink and 40 GB HBM2 | - |
FPGA Nodes | ||
Number of Nodes | 36 | - |
CPUs per Node | 2x AMD Milan 7713, 2.0 GHz, up to 3.675 GHz | - |
Cores per Node | 128 | - |
Main Memory | 512 GiB | - |
Accelerators (total) | 48x Xilinx Alveo U280 FPGA with 8 GiB HBM2 and 32 GiB DDR memory; 32x Intel Stratix 10 GX 2800 FPGA with 32 GiB DDR memory (Bittware 520N cards) | - |
FPGA-to-FPGA Communication Networks | ||
Optical Switch | CALIENT S320 Optical Circuit Switch (OCS), 320 ports | - |
Ethernet Switch | Huawei CloudEngine CE9860, 128-port Ethernet switch | - |
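
As a plausibility check, the peak CPU figures in the table above follow from the usual cores × clock × FLOPs-per-cycle product. Below is a minimal sketch, assuming 16 DP FLOPs/cycle per AMD Milan (Zen 3) core and 32 DP FLOPs/cycle per Intel Skylake-SP core at the listed base clocks; the small deviations from the quoted 5.4 PFLOPS and 841 TFLOPS presumably come down to the exact node counts and clocks the vendors used.

```python
# Theoretical peak DP performance: cores x clock x FLOPs/cycle.
# The FLOPs/cycle values are assumptions, not taken from the table:
#   Zen 3 (Milan): 2x 256-bit FMA units -> 16 DP FLOPs/cycle
#   Skylake-SP:    2x 512-bit FMA units -> 32 DP FLOPs/cycle

def peak_dp_tflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Peak double-precision performance in TFLOPS."""
    return cores * clock_ghz * flops_per_cycle / 1e3

# Noctua 2: 143,872 Milan cores at 2.45 GHz base clock.
print(f"Noctua 2 CPU peak: {peak_dp_tflops(143_872, 2.45, 16) / 1e3:.2f} PFLOPS")  # ~5.6

# Noctua 1: 10,880 Skylake cores at 2.4 GHz base clock.
print(f"Noctua 1 CPU peak: {peak_dp_tflops(10_880, 2.4, 32):.0f} TFLOPS")  # ~836
```
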
| | Noctua 2 | Noctua 1 |
---|---|---|
Main Compute | 990 nodes with 2x AMD Milan 7763 (128 cores) and 256 GiB main memory | 256 nodes with 2x Intel Xeon Gold "Skylake" 6148 (40 cores) and 192 GiB main memory |
Large Memory | 66 nodes with 1024 GiB main memory | - |
Huge Memory | 5 nodes with 2048 GiB main memory and 34 TiB SSD-based storage (12x 3.2 TB NVMe SSDs, ~70 GB/s) | - |
Accelerators | ||
GPU | 32 nodes with 4x NVIDIA A100 (NVLink, 40 GB HBM2); 1 node with 8x NVIDIA A100 (NVLink, 40 GB HBM2) | 18 nodes with 2x NVIDIA A40 (each 48 GB GDDR6, 10,752 CUDA cores, 336 Tensor cores) |
FPGA | 18 nodes with 3x Xilinx Alveo U280 FPGA (8 GiB HBM2, 32 GiB DDR memory); 18 nodes with 2x Intel Stratix 10 GX 2800 FPGA (32 GiB DDR memory, Bittware 520N cards); direct and packet-switched FPGA-to-FPGA networking | - |
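
The "34 TiB" and "12x 3.2 TB" figures in the Huge Memory row are consistent with each other: the apparent gap is just the conversion from decimal terabytes to binary tebibytes, as this quick check shows.

```python
# 12 NVMe SSDs of 3.2 TB (decimal) each, expressed in binary TiB.
raw_tb = 12 * 3.2                 # 38.4 TB as advertised by the vendor
tib = raw_tb * 1e12 / 2**40       # 1 TB = 10^12 bytes, 1 TiB = 2^40 bytes
print(f"{raw_tb:.1f} TB = {tib:.1f} TiB")  # -> 38.4 TB = 34.9 TiB, i.e. ~34 TiB
```
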
| | Noctua 2 | Noctua 1 |
---|---|---|
CPU | 2x AMD Milan 7763, 128 cores | 2x Intel Xeon Gold 6148, 40 cores |
Normal | 990 nodes: 256 GiB | 256 nodes: 192 GiB |
Large | 66 nodes: 1024 GiB | - |
Huge | 5 nodes: 2048 GiB + SSD | - |
Accelerators | ||
GPU | 128x NVIDIA A100 with 40 GB HBM2 | 36x NVIDIA A40 with 48 GB GDDR6 |
FPGA | 54x Xilinx Alveo U280; 36x Intel Stratix 10 GX 2800; FPGA-to-FPGA networking | - |
Network | 100 Gbps InfiniBand | 100 Gbps Omni-Path |
Storage | 6 PB Parallel Filesystem | 720 TB Parallel Filesystem |