MPI2007 license: | 13 | Test date: | Aug-2018 |
---|---|---|---|
Test sponsor: | Intel Corporation | Hardware Availability: | Aug-2018 |
Tested by: | Intel Corporation | Software Availability: | Nov-2018 |
Results appear in the order in which they were run. Bold text indicates the median measurement. Only base results were submitted; the peak columns of the original report are empty (no peak runs).

Benchmark | Base Ranks | Run 1 Seconds | Run 1 Ratio | Run 2 Seconds | Run 2 Ratio | Run 3 Seconds | Run 3 Ratio
---|---|---|---|---|---|---|---
121.pop2 | 2560 | 39.2 | 99.2 | **37.2** | **105** | 37.2 | 105
122.tachyon | 2560 | 39.6 | 49.1 | **39.6** | **49.1** | 39.6 | 49.1
125.RAxML | 2560 | 54.1 | 54.0 | **54.0** | **54.0** | 53.9 | 54.2
126.lammps | 2560 | **28.1** | **87.5** | 28.0 | 88.0 | 28.2 | 87.3
128.GAPgeofem | 2560 | 43.8 | 135 | **43.9** | **135** | 43.9 | 135
129.tera_tf | 2560 | 26.1 | 42.2 | **26.0** | **42.3** | 25.9 | 42.4
132.zeusmp2 | 2560 | 28.3 | 75.0 | **28.5** | **74.5** | 29.3 | 72.4
137.lu | 2560 | **20.1** | **209** | 21.2 | 199 | 20.1 | 210
142.dmilc | 2560 | 22.7 | 162 | 22.9 | 161 | **22.8** | **162**
143.dleslie | 2560 | **15.8** | **197** | 15.8 | 197 | 16.7 | 185
145.lGemsFDTD | 2560 | 43.3 | 102 | **43.5** | **102** | 43.5 | 101
147.l2wrf2 | 2560 | **58.0** | **142** | 58.0 | 142 | 58.2 | 141
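As a reading aid, a SPEC ratio is the benchmark's reference runtime divided by the measured runtime, so higher is better, and reference times can be recovered from the table itself. For example, for 137.lu:

ratio = reference_time / measured_time
run 1: reference ≈ 209 × 20.1 s ≈ 4,200 s

(Runs 1 and 3 of 137.lu both show 20.1 s but ratios of 209 and 210, presumably because the displayed seconds are rounded.)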
Hardware Summary | |
---|---|
Type of System: | Homogeneous |
Compute Node: | Intel Server System R2208WFTZS |
Interconnect: | Intel Omni-Path 100 series |
File Server Node: | Lustre FS |
Total Compute Nodes: | 64 |
Total Chips: | 128 |
Total Cores: | 2560 |
Total Threads: | 5120 |
Total Memory: | 12 TB |
Base Ranks Run: | 2560 |
Minimum Peak Ranks: | -- |
Maximum Peak Ranks: | -- |
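The summary totals are internally consistent: 64 nodes × 2 chips = 128 chips; 128 chips × 20 cores = 2,560 cores; 2,560 cores × 2 threads/core = 5,120 threads; and 64 nodes × 192 GB per node = 12,288 GB, reported as 12 TB.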
Software Summary | |
---|---|
C Compiler: | Intel C++ Composer XE 2018 for Linux Version 18.0.0 Build 20170811 |
C++ Compiler: | Intel C++ Composer XE 2018 for Linux Version 18.0.0 Build 20170811 |
Fortran Compiler: | Intel Fortran Composer XE 2018 for Linux Version 18.0.0 Build 20170811 |
Base Pointers: | 64-bit |
Peak Pointers: | 64-bit |
MPI Library: | Intel MPI Library 2019 Build 20180829 |
Other MPI Info: | libfabric-1.6.1 |
Pre-processors: | No |
Other Software: | None |
Hardware | |
---|---|
Number of nodes: | 64 |
Uses of the node: | Compute |
Vendor: | Intel |
Model: | Intel Server System R2208WFTZS |
CPU Name: | Intel Xeon Gold 6148 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 40 |
Cores per chip: | 20 |
Threads per core: | 2 |
CPU Characteristics: | Intel Turbo Boost Technology up to 3.7 GHz |
CPU MHz: | 2400 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 1 MB I+D on chip per core |
L3 Cache: | 27.5 MB I+D on chip per chip |
Other Cache: | None |
Memory: | 192 GB (12 x 16 GB 2Rx4 DDR4-2666) |
Disk Subsystem: | ATA INTEL SSDSC2BA80 |
Other Hardware: | None |
Adapter: | Intel Omni-Path Fabric Adapter 100 series |
Number of Adapters: | 1 |
Slot Type: | PCI-Express x16 |
Data Rate: | 12.5 GB/s |
Ports Used: | 1 |
Interconnect Type: | Intel Omni-Path Fabric 100 series |
Software | |
---|---|
Adapter: | Intel Omni-Path Fabric Adapter 100 series |
Adapter Driver: | IFS 10.7 |
Adapter Firmware: | 1.26.1 |
Operating System: | Oracle Linux Server release 7.4 |
Local File System: | Linux/xfs |
Shared File System: | Lustre FS |
System State: | Multi-User |
Other Software: | IBM Platform LSF Standard 9.1.1.1 |
Hardware | |
---|---|
Number of nodes: | 11 |
Uses of the node: | Fileserver |
Vendor: | Intel |
Model: | Intel Server System R2208GZ4GC4 |
CPU Name: | Intel Xeon E5-2680 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 16 |
Cores per chip: | 8 |
Threads per core: | 2 |
CPU Characteristics: | Intel Turbo Boost Technology up to 3.5 GHz |
CPU MHz: | 2700 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 2 MB I+D on chip per chip |
L3 Cache: | 20 MB I+D on chip per chip |
Other Cache: | None |
Memory: | 64 GB per node (8 x 8 GB 1600MHz Reg ECC DDR3) |
Disk Subsystem: | 136 TB (3 RAID arrays with 8 SAS/SATA drives) |
Other Hardware: | None |
Adapter: | Intel Omni-Path Fabric Adapter 100 series |
Number of Adapters: | 1 |
Slot Type: | PCI-Express x16 |
Data Rate: | 12.5 GB/s |
Ports Used: | 1 |
Interconnect Type: | Intel Omni-Path Fabric 100 series |
Software | |
---|---|
Adapter: | Intel Omni-Path Fabric Adapter 100 series |
Adapter Driver: | IFS 10.7 |
Adapter Firmware: | 1.26.1 |
Operating System: | Red Hat Enterprise Linux Server release 7.4 |
Local File System: | None |
Shared File System: | Lustre FS |
System State: | Multi-User |
Other Software: | None |
Hardware | |
---|---|
Vendor: | Intel |
Model: | Intel Omni-Path Fabric 100 series |
Switch Model: | Intel Omni-Path Edge Switch 100 series |
Number of Switches: | 24 |
Number of Ports: | 48 |
Data Rate: | 12.5 GB/s |
Firmware: | 1.26.1 |
Topology: | Fat tree |
Primary Use: | MPI and I/O traffic |
The config file option 'submit' was used.
130.socorro (base): "nullify_ptrs" src.alt was used.
129.tera_tf (base): "add_rank_support" src.alt was used.
143.dleslie (base): "integer_overflow" src.alt was used.

MPI startup command: the mpiexec.hydra command was used to start MPI jobs, with the following environment:

export I_MPI_FABRICS=shm:ofi
export FI_PSM2_INJECT_SIZE=8192
export I_MPI_PIN_DOMAIN=core
export I_MPI_PIN_ORDER=bunch
export FI_PSM2_DELAY=0
export FI_PSM2_LAZY_CONN=1
export I_MPI_COMPATIBILITY=3

Spectre & Meltdown:
Kernel: 3.10.0-862.11.6.el7.crt1.x86_64
Microcode: 0x200004d
l1tf: Mitigation: PTE Inversion
meltdown: Mitigation: PTI
spec_store_bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
spectre_v1: Mitigation: Load fences, __user pointer sanitization
spectre_v2: Mitigation: IBRS (kernel)

BIOS settings:
Intel Hyper-Threading Technology (SMT) = Enabled (default is Enabled)
Intel Turbo Boost Technology (Turbo) = Enabled (default is Enabled)

RAM configuration: compute nodes have 2x16-GB RDIMM on each memory channel.

Network: the Endeavour Omni-Path fabric consists of 48-port switches: 24 core switches connected to each leaf of the rack switch.

HFI driver parameters:
cache_size = 1024
rcvhdrcnt = 4096

Job placement: each MPI job was assigned to a topologically compact set of nodes, i.e. the minimal number of leaf switches needed was used for each job: 1 switch for 40/80/160/320/640 ranks, 2 switches for 1280 and 1980 ranks.

IBM Platform LSF was used for job submission; it has no impact on performance. Information can be found at: http://www.ibm.com
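Taken together, the startup notes above imply a launch sequence along the following lines. This is a hypothetical sketch, not taken from the report: the environment values are copied verbatim from the notes, but the hostfile name and binary name are placeholders, and the rank counts are simply those of the base runs.

```bash
#!/bin/bash
# Hypothetical launch sketch reconstructed from the notes above.
# Environment values are verbatim from the report; hostfile and
# binary names are placeholders.

export I_MPI_FABRICS=shm:ofi        # shared memory intra-node, OFI (libfabric) inter-node
export FI_PSM2_INJECT_SIZE=8192     # PSM2 provider: inject-size threshold in bytes
export I_MPI_PIN_DOMAIN=core        # pin one MPI rank per physical core
export I_MPI_PIN_ORDER=bunch        # pack ranks onto adjacent cores
export FI_PSM2_DELAY=0              # PSM2 provider tuning, as in the report
export FI_PSM2_LAZY_CONN=1          # establish PSM2 connections lazily
export I_MPI_COMPATIBILITY=3        # Intel MPI backward-compatibility mode

# 2560 ranks over 64 nodes at 40 ranks per node, matching the base runs.
mpiexec.hydra -f ./hostfile -n 2560 -ppn 40 ./mpi2007_benchmark
```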
Base Compiler Invocation | |
---|---|
C benchmarks: | mpiicc |
C++ benchmarks (126.lammps): | mpiicpc |
Fortran benchmarks: | mpiifort |
Benchmarks using both Fortran and C: | mpiicc mpiifort |

Base Portability Flags | |
---|---|
121.pop2: | -DSPEC_MPI_CASE_FLAG |
126.lammps: | -DMPICH_IGNORE_CXX_SEEK |
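For reference, these invocations correspond to compile commands of roughly the following shape. Only the compiler drivers and portability flags come from the report; the source file names are placeholders.

```bash
# Hypothetical compile lines; source file names are placeholders.
mpiicc   -DSPEC_MPI_CASE_FLAG    -c pop2_source.c       # 121.pop2 (C)
mpiicpc  -DMPICH_IGNORE_CXX_SEEK -c lammps_source.cpp   # 126.lammps (C++)
mpiifort                         -c tera_tf_source.f90  # Fortran benchmarks, e.g. 129.tera_tf
```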