MPI2007 license: | 13 | Test date: | Feb-2012 |
---|---|---|---|
Test sponsor: | Intel Corporation | Hardware Availability: | Mar-2012 |
Tested by: | Pavel Shelepugin | Software Availability: | Sep-2011 |
No peak results were submitted (the Peak columns of the original report are empty), so only base results are shown. Each benchmark was run three times; results appear in the order in which they were run, and the median of the three runs is the reported measurement.

Benchmark | Ranks | Run 1 Seconds | Run 1 Ratio | Run 2 Seconds | Run 2 Ratio | Run 3 Seconds | Run 3 Ratio
---|---|---|---|---|---|---|---
121.pop2 | 1024 | 114 | 34.0 | 110 | 35.3 | 110 | 35.2
122.tachyon | 1024 | 308 | 6.32 | 107 | 18.2 | 106 | 18.3
125.RAxML | 1024 | 193 | 15.1 | 193 | 15.1 | 194 | 15.1
126.lammps | 1024 | 87.0 | 28.3 | 86.6 | 28.4 | 86.8 | 28.3
128.GAPgeofem | 1024 | 181 | 32.7 | 183 | 32.5 | 182 | 32.6
129.tera_tf | 1024 | 84.6 | 13.0 | 83.4 | 13.2 | 83.4 | 13.2
132.zeusmp2 | 1024 | 65.7 | 32.3 | 64.9 | 32.7 | 62.7 | 33.8
137.lu | 1024 | 60.1 | 69.9 | 59.7 | 70.4 | 60.5 | 69.5
142.dmilc | 1024 | 55.4 | 66.5 | 58.2 | 63.3 | 55.7 | 66.1
143.dleslie | 1024 | 67.5 | 45.9 | 67.5 | 45.9 | 67.2 | 46.1
145.lGemsFDTD | 1024 | 106 | 41.5 | 105 | 41.8 | 105 | 41.9
147.l2wrf2 | 1024 | 219 | 37.4 | 193 | 42.5 | 193 | 42.5
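Each ratio is the SPEC reference run time for the benchmark divided by the measured run time, so higher is better. For example, the median 137.lu run of 60.1 seconds at a ratio of 69.9 implies a reference time of roughly 60.1 × 69.9 ≈ 4,200 seconds.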
Hardware Summary | |
---|---|
Type of System: | Homogeneous |
Compute Node: | Endeavor Node |
Interconnects: | IB Switch, Gigabit Ethernet |
File Server Node: | NFS |
Total Compute Nodes: | 64 |
Total Chips: | 128 |
Total Cores: | 1024 |
Total Threads: | 2048 |
Total Memory: | 4 TB |
Base Ranks Run: | 1024 |
Minimum Peak Ranks: | -- |
Maximum Peak Ranks: | -- |
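The totals above follow from the per-node figures in the node descriptions below: 64 nodes × 2 chips/node × 8 cores/chip = 1,024 cores; with Hyper-Threading enabled (2 threads per core), 2,048 threads; and 64 nodes × 64 GB/node = 4 TB of total memory.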
Software Summary | |
---|---|
C Compiler: | Intel C++ Composer XE 2011 for Linux, Version 12.0.5.220 Build 20110719 |
C++ Compiler: | Intel C++ Composer XE 2011 for Linux, Version 12.0.5.220 Build 20110719 |
Fortran Compiler: | Intel Fortran Composer XE 2011 for Linux, Version 12.0.5.220 Build 20110719 |
Base Pointers: | 64-bit |
Peak Pointers: | 64-bit |
MPI Library: | Intel MPI Library 4.0.3.008 for Linux |
Other MPI Info: | None |
Pre-processors: | No |
Other Software: | None |
Compute Node Hardware | |
---|---|
Number of nodes: | 64 |
Uses of the node: | compute |
Vendor: | Intel |
Model: | R1208GLBPP |
CPU Name: | Intel Xeon E5-2670 |
CPU(s) orderable: | 1-2 chips |
Chips enabled: | 2 |
Cores enabled: | 16 |
Cores per chip: | 8 |
Threads per core: | 2 |
CPU Characteristics: | Intel Turbo Boost Technology disabled, 8.0 GT/s QPI, Hyper-Threading enabled |
CPU MHz: | 2600 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 256 KB I+D on chip per core |
L3 Cache: | 20 MB I+D on chip per chip, 20 MB shared / 8 cores |
Other Cache: | None |
Memory: | 64 GB (8 x 8 GB 2Rx4 PC3-12800R, ECC, running at 1333 MHz, CL9) |
Disk Subsystem: | Seagate 600 GB SSD ST9600205SS |
Other Hardware: | None |
Adapter: | Intel (ESB2) 82575EB Dual-Port Gigabit Ethernet Controller |
Number of Adapters: | 1 |
Slot Type: | PCI-Express x8 |
Data Rate: | 1Gbps Ethernet |
Ports Used: | 2 |
Interconnect Type: | Ethernet |
Adapter: | Mellanox MHQH29-XTC |
Number of Adapters: | 1 |
Slot Type: | PCIe x8 Gen2 |
Data Rate: | InfiniBand 4x QDR |
Ports Used: | 1 |
Interconnect Type: | InfiniBand |
Compute Node Software | |
---|---|
Adapter: | Intel (ESB2) 82575EB Dual-Port Gigabit Ethernet Controller |
Adapter Driver: | e1000 |
Adapter Firmware: | None |
Adapter: | Mellanox MHQH29-XTC |
Adapter Driver: | OFED 1.5.3.1 |
Adapter Firmware: | 2.10.0 |
Operating System: | Red Hat EL 6.1, kernel 2.6.32-131 |
Local File System: | Linux/ext2 |
Shared File System: | NFS |
System State: | Multi-User |
Other Software: | Platform LSF 8.0 |
File Server Node Hardware | |
---|---|
Number of nodes: | 1 |
Uses of the node: | fileserver |
Vendor: | Intel |
Model: | S7000FC4UR |
CPU Name: | Intel Xeon CPU |
CPU(s) orderable: | 1-4 chips |
Chips enabled: | 4 |
Cores enabled: | 16 |
Cores per chip: | 4 |
Threads per core: | 2 |
CPU Characteristics: | -- |
CPU MHz: | 2926 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 8 MB I+D on chip per chip, 4 MB shared / 2 cores |
L3 Cache: | None |
Other Cache: | None |
Memory: | 64 GB |
Disk Subsystem: | 8 disks, 500 GB/disk, 2.7 TB total |
Other Hardware: | None |
Adapter: | Intel 82563GB Dual-Port Gigabit Ethernet Controller |
Number of Adapters: | 1 |
Slot Type: | PCI-Express x8 |
Data Rate: | 1Gbps Ethernet |
Ports Used: | 1 |
Interconnect Type: | Ethernet |
File Server Node Software | |
---|---|
Adapter: | Intel 82563GB Dual-Port Gigabit Ethernet Controller |
Adapter Driver: | e1000e |
Adapter Firmware: | N/A |
Operating System: | Red Hat EL 5 Update 4 |
Local File System: | None |
Shared File System: | NFS |
System State: | Multi-User |
Other Software: | None |
IB Switch Hardware | |
---|---|
Vendor: | Mellanox |
Model: | Mellanox MTS3600Q-1UNC |
Switch Model: | Mellanox MTS3600Q-1UNC |
Number of Switches: | 46 |
Number of Ports: | 36 |
Data Rate: | InfiniBand 4x QDR |
Firmware: | 7.2.0 |
Topology: | Fat tree |
Primary Use: | MPI traffic |
Gigabit Ethernet Hardware | |
---|---|
Vendor: | Force10 Networks |
Model: | Force10 S50, Force10 C300 |
Switch Model: | Force10 S50, Force10 C300 |
Number of Switches: | 15 |
Number of Ports: | 48 |
Data Rate: | 1Gbps Ethernet, 10Gbps Ethernet |
Firmware: | 8.2.1.0 |
Topology: | Fat tree |
Primary Use: | Cluster File System |
The config file option 'submit' was used.
MPI startup command: mpiexec.hydra was used to start MPI jobs.

BIOS settings:
Intel Hyper-Threading Technology (SMT): Enabled (default is Enabled)
Intel Turbo Boost Technology (Turbo): Disabled (default is Enabled)

RAM configuration: Compute nodes have 2 x 8-GB RDIMMs on each memory channel.

Network: Forty-six 36-port switches: 18 core switches and 28 leaf switches. Each leaf switch has one link to each core switch. The remaining 18 ports on 25 of the 28 leaf switches are used for compute nodes; on the remaining 3 leaf switches, the ports are used for file server nodes and other peripherals.

Job placement: Each MPI job was assigned to a topologically compact set of nodes, i.e. the minimal number of leaf switches needed for each job: 1 switch for 16/32/64/128/256 ranks, 2 switches for 512 ranks, 4 switches for 1024 ranks, and 8 switches for 2048 ranks.

Platform LSF was used for job submission. It has no impact on performance. Information can be found at: http://www.platform.com
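For illustration only, a 1024-rank run submitted through Platform LSF and started with Intel MPI's mpiexec.hydra might look like the sketch below; the binary name, output file, and options are hypothetical, not taken from the actual config file's submit command:

```
# Hypothetical launch sketch: Platform LSF allocates 1024 slots, then
# Intel MPI's Hydra process manager starts the 1024 MPI ranks.
# Binary name and options are illustrative, not from the real config.
bsub -n 1024 -o run.%J.out \
  mpiexec.hydra -np 1024 ./137.lu_base.intel64
```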
Base Compiler Invocation | |
---|---|
C benchmarks: | mpiicc |
C++ benchmarks (126.lammps): | mpiicpc |
Fortran benchmarks: | mpiifort |
Benchmarks using both Fortran and C: | mpiicc mpiifort |

Base Portability Flags | |
---|---|
121.pop2: | -DSPEC_MPI_CASE_FLAG |
126.lammps: | -DMPICH_IGNORE_CXX_SEEK |
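As a sketch of how a portability flag combines with the compiler invocations above (the source and object file names are hypothetical):

```
# Hypothetical compile line for 126.lammps: the Intel MPI C++ wrapper plus
# the portability flag that avoids the SEEK_SET/SEEK_CUR/SEEK_END clash
# between <stdio.h> and the MPI C++ bindings.
mpiicpc -DMPICH_IGNORE_CXX_SEEK -c lammps.cpp -o lammps.o
```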