# DTU Sophia hardware

## Compute nodes

The Sophia HPC cluster consists of 516 compute nodes, of which 484 have 128 GB of RAM and 32 have 256 GB. Each node is a powerful x86-64 computer equipped with 32 physical cores (2 x sixteen-core AMD EPYC 7351).

The parameters are summarized in the following table:

| Specs | |
| --- | --- |
| Primary purpose | High Performance Computing |
| Architecture of compute nodes | x86-64 |
| Operating system | CentOS 7 Linux |
| Compute nodes in total | 516 |
| Processor | 2 x AMD EPYC 7351, 2.9 GHz, 16 cores |
| RAM (484 nodes) | 128 GB (4 GB per core), DDR4 @ 2666 MHz |
| RAM (32 nodes) | 256 GB (8 GB per core), DDR4 @ 2666 MHz |
| Local disk drive | none |
| Compute network / topology | InfiniBand EDR / fat tree |
| **In total** | |
| Total theoretical peak performance (Rpeak) | ~384 TFLOPS (516 nodes x 32 cores x 2.9 GHz x 8 FLOP/cycle) |
| Total amount of RAM | 69 TB |
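
Both cluster-wide totals can be reproduced from the per-node figures. The following is a minimal sketch; it assumes the 8 FLOP/cycle per core in the Rpeak formula refers to double-precision AVX2 with FMA (an assumption, the table itself only states the factor):

```python
# Sanity check of the cluster-wide totals from the per-node specs.
nodes_128gb = 484                     # nodes with 128 GB RAM
nodes_256gb = 32                      # nodes with 256 GB RAM
nodes = nodes_128gb + nodes_256gb     # 516 compute nodes in total

cores_per_node = 2 * 16               # 2 x sixteen-core AMD EPYC 7351
clock_ghz = 2.9
flop_per_cycle = 8                    # per core, as in the Rpeak formula above

rpeak_tflops = nodes * cores_per_node * clock_ghz * flop_per_cycle / 1000
total_ram_tib = (nodes_128gb * 128 + nodes_256gb * 256) / 1024

print(f"Rpeak: {rpeak_tflops:.1f} TFLOPS")   # 383.1 TFLOPS, rounded to ~384 in the table
print(f"RAM:   {total_ram_tib:.1f} TiB")     # 68.5 TiB, listed as 69 TB in the table
```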

## High-speed interconnect

The nodes are interlinked by InfiniBand and 10 Gbps Ethernet networks.

Sophia's high-speed, low-latency interconnect is Mellanox EDR (100 Gb/s) InfiniBand. The frontend, compute, and burst buffer nodes each have a Mellanox ConnectX-5 adapter card installed.
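
A quick way to confirm that a node's adapter actually links at EDR speed is to inspect the port rate reported by `ibstat` (from the infiniband-diags package). This is a minimal sketch and assumes `ibstat` is installed and the HCA port is active:

```python
import subprocess

# Print the link rate of each local InfiniBand port; EDR links report 100 Gb/s.
# Assumes the `ibstat` utility (infiniband-diags) is available on the node.
out = subprocess.run(["ibstat"], capture_output=True, text=True, check=True)
for line in out.stdout.splitlines():
    line = line.strip()
    if line.startswith("Rate:"):
        rate = int(line.split(":", 1)[1])
        print(f"Port rate: {rate} Gb/s" + (" (EDR)" if rate == 100 else ""))
```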

| Switch system | Count |
| --- | --- |
| SB7700 | 2 |
| SB7790 | 47 |

## Burst buffer

| Hardware product | Count |
| --- | --- |
| Dell R7425 w/ NVMe front-bay | 2 |
| Dell Express Flash PM1725a 1.6 TB | 16 |
| Dell Express Flash PM1725b 1.6 TB | 4 |
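
For a rough sense of scale, the raw flash capacity implied by this parts list can be tallied as below. This is a sketch only; usable capacity after filesystem overhead or any redundancy will be lower:

```python
# Raw NVMe capacity of the burst buffer, from the parts list above.
drives = {"PM1725a": (16, 1.6), "PM1725b": (4, 1.6)}  # name: (count, size in TB)
raw_tb = sum(count * size_tb for count, size_tb in drives.values())
n_drives = sum(count for count, _ in drives.values())
print(f"Raw capacity: {raw_tb:.1f} TB across {n_drives} drives in 2 R7425 servers")
# -> Raw capacity: 32.0 TB across 20 drives in 2 R7425 servers
```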