VSC-4
Note
To compute on VSC-4, see VSC-4 partitions and QOS.
VSC-4 was installed in the summer of 2019 in the Arsenal building of TU Wien.
Hardware
Partition | Nodes | Architecture | CPU | Cores per CPU (physical/with HT) | GPU | RAM | Use |
---|---|---|---|---|---|---|---|
skylake_0096 | 702 | Intel | 2x Xeon Platinum 8174 | 24/48 | No | 96 GB | The default partition |
skylake_0384 | 78 | Intel | 2x Xeon Platinum 8174 | 24/48 | No | 384 GB | High Memory partition |
skylake_0768 | 12 | Intel | 2x Xeon Platinum 8174 | 24/48 | No | 768 GB | Higher Memory partition |
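As an illustration, a minimal Slurm batch script requesting the default partition from the table above might look like the sketch below. The partition name is taken from the table; the QOS, node counts, and executable are placeholders that depend on your project (see the partitions and QOS page referenced in the note above).

```bash
#!/bin/bash
#SBATCH --job-name=example          # job name shown in the queue
#SBATCH --partition=skylake_0096    # 96 GB default partition from the table above
#SBATCH --qos=<your_qos>            # placeholder: the QOS depends on your project
#SBATCH --nodes=2                   # request two full nodes
#SBATCH --ntasks-per-node=48        # one MPI rank per physical core (2 x 24 cores)
#SBATCH --time=01:00:00             # walltime limit

srun ./my_mpi_program               # placeholder: replace with your own executable
```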
CPU partition features
The VSC-4 system consists of 790 directly water-cooled nodes. Each node has two Intel Xeon Platinum 8174 (Skylake) processors with 24 cores each, i.e. 48 physical cores per node, for a total of 37,920 cores in the whole system.
The installed Intel Xeon Platinum 8174 processors are a variant of the Intel Xeon Platinum 8168 with a higher clock rate (a nominal base frequency of 3.1 GHz and a maximum turbo frequency of 3.9 GHz). The 700 standard nodes have 96 GByte of main memory, the 78 fat nodes have 384 GByte, and the 12 very fat nodes have 768 GByte, for a total of 106,368 GByte of main memory. Each node is also equipped with a 480 GByte SSD that is available as temporary storage during the runtime of a job.
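To illustrate using the node-local SSD, the sketch below assumes that the per-job scratch directory is exposed through the $TMPDIR environment variable; the actual variable or mount point on VSC-4 may differ, so check the site documentation before relying on it.

```bash
#!/bin/bash
#SBATCH --partition=skylake_0096
#SBATCH --nodes=1
#SBATCH --time=00:30:00

# Assumption: $TMPDIR points to the node-local SSD; adjust if the site uses another path.
SCRATCH="${TMPDIR:-/tmp}"

cp "$HOME/input.dat" "$SCRATCH/"        # stage input onto the fast local SSD
cd "$SCRATCH"
./my_program input.dat > output.dat     # placeholder executable working on local storage
cp output.dat "$HOME/results/"          # copy results back before the job ends
```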
Compute nodes, login nodes and file system nodes are interconnected with a high-speed 100 Gbit/s OmniPath network. The OmniPath network has a two-level fat-tree topology with a blocking factor of 2:1: of the 48 ports of an edge switch, 32 connect to compute nodes, giving non-blocking access to 1,536 cores, while the remaining 16 ports connect via optical fiber cables to the 16 core switches that form the second level of the fat-tree. In addition, a 10 Gbit/s Ethernet management network is available.
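A communication-intensive job may benefit from staying within one of the non-blocking islands described above. The sketch below uses Slurm's generic --switches option to ask for a single leaf (edge) switch; whether topology-aware scheduling is actually enabled on VSC-4 is an assumption, so treat this as illustrative only.

```bash
#!/bin/bash
#SBATCH --partition=skylake_0096
#SBATCH --nodes=32                  # at most 32 nodes are attached to one edge switch
#SBATCH --ntasks-per-node=48
#SBATCH --switches=1@02:00:00       # prefer a single leaf switch, wait up to 2 hours for it
#SBATCH --time=04:00:00

srun ./my_mpi_program               # placeholder executable
```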
Cooling
The compute nodes are directly water-cooled, which allows the use of primary cooling water with a temperature in excess of 43 ℃ and permits year-round free cooling. Up to 90% of the energy is removed by this high-temperature loop, with the remainder removed by air cooling. This results in a very reasonable energy footprint.
The system is complemented by 10 login nodes and parallel file systems.