ABSTRACT
This document provides guidelines required to build,
run, and report on the SPEC OMP2012 benchmarks.
1. Philosophy
1.1 A SPEC OMP2012 Result Is An Observation
1.2 A Published SPEC OMP2012 Result Is a Declaration of Expected Performance
1.3 A SPEC OMP2012 Result is a Claim About Maturity of Performance Methods
1.4 Peak and Base Builds and Runs
1.5 Power Measurements
1.6 Estimates
1.7 About SPEC
1.8 Compliance and Compatibility Commitments
1.9 Usage of the Philosophy Section
2. Building SPEC OMP2012
2.1 Build Procedures
2.2 General Rules for Selecting Compilation Flags
2.3 Base Optimization Rules
2.4 Peak Optimization Rules
3. Running SPEC OMP2012
3.1 System Configuration
3.2 Controlling Benchmark Jobs
3.3 Power Measurement
3.4 Run-time Environment
3.5 Continuous Run Requirement
3.6 Base, peak, and basepeak
3.7 Run-Time Dynamic Optimization
4. Results Disclosure
4.1 Rules regarding availability dates and systems not yet shipped
4.2 Configuration Disclosure
4.3 Test Results Disclosure
4.4 Required Disclosures
4.5 Research and Academic usage of OMP2012
4.6 Fair Use
4.7 Submitting Results to SPEC
5. Run Rule Exceptions
1. Philosophy
   1.1. A SPEC OMP2012 Result Is An Observation
      1.1.1. Test Methods
      1.1.2. Conditions of Observation
      1.1.3. Assumptions About the Tester
      1.1.4. A SPEC OMP2012 Result is a measurement using OpenMP as the parallel paradigm
   1.2. A Published SPEC OMP2012 Result Is a Declaration of Expected Performance
      1.2.1. Reproducibility
      1.2.2. Obtaining Components
         1.2.2.1. Hardware, Operating System and Compilers
   1.3. A SPEC OMP2012 Result is a Claim About Maturity of Performance Methods
   1.4. Peak and Base Builds and Runs
   1.5. Power Measurements
   1.6. Estimates
   1.7. About SPEC
      1.7.1. Publication on SPEC's web site is encouraged
      1.7.2. Publication on SPEC's web site is not required
      1.7.3. SPEC May Require New Tests
      1.7.4. SPEC May Adapt the Suite
   1.8. Compliance and Compatibility Commitments
      1.8.1. 32- and 64-Bit Systems
      1.8.2. Target Languages
      1.8.3. Supported Operating Systems
      1.8.4. OpenMP Standard
   1.9. Usage of the Philosophy Section
2. Building SPEC OMP2012
   2.1. Build Procedures
      2.1.1. SPEC's tools must be used
      2.1.2. The runspec build environment
      2.1.3. Continuous Build requirement
      2.1.4. Changes to the runspec build environment
      2.1.5. Cross-compilation allowed
      2.1.6. Individual builds allowed
      2.1.7. Tester's assertion of equivalence between build types
   2.2. General Rules for Selecting Compilation Flags
      2.2.1. Must not use names
      2.2.2. Limitations on library substitutions
      2.2.3. Limitations on size changes
      2.2.4. Portability Flags
   2.3. Base Optimization Rules
      2.3.1. Safety and Standards Conformance
      2.3.2. Same for all benchmarks of a given language
      2.3.3. Assertion flags must NOT be used in base
      2.3.4. Floating point reordering allowed
      2.3.5. Base build environment
      2.3.6. Portability Switches for Data Models
      2.3.7. Cross-module optimization
      2.3.8. Alignment switches are allowed
      2.3.9. Feedback-directed optimization
   2.4. Peak Optimization Rules
      2.4.1. Permitted source code changes
3. Running SPEC OMP2012
   3.1. System Configuration
      3.1.1. Operating System State
      3.1.2. File Systems and File Servers
      3.1.3. Power and Temperature
   3.2. Controlling Benchmark Jobs
      3.2.1. Number of runs in a reportable result
      3.2.2. Number of threads in base
      3.2.3. Number of threads in peak
      3.2.4. The submit directive
   3.3. Power Measurement
   3.4. Run-time Environment
   3.5. Continuous Run Requirement
   3.6. Base, Peak, and Basepeak
   3.7. Run-Time Dynamic Optimization
      3.7.1. Definitions and Background
      3.7.2. RDO Is Allowed, Subject to Certain Conditions
      3.7.3. RDO Disclosure and Resources
      3.7.4. RDO Settings Cannot Be Changed At Run-time
      3.7.5. RDO and safety in base
      3.7.6. RDO carry-over by program is not allowed
4. Results Disclosure
   4.1. Rules regarding availability dates and systems not yet shipped
      4.1.1. Pre-production software can be used
      4.1.2. Software Component Names
      4.1.3. Specifying Dates
      4.1.4. If dates are not met
      4.1.5. Performance changes for pre-production systems
   4.2. Configuration Disclosure
      4.2.1. Identification of System, Manufacturer and Tester
         4.2.1.1. Identification of Equivalent Systems
      4.2.2. Node Configuration
      4.2.3. Software Configuration
      4.2.4. Tuning Configuration
      4.2.5. Description of Portability and Tuning Options ("Flags File")
      4.2.6. Power Measurement Devices
      4.2.7. Configuration Disclosure for User Built Systems
   4.3. Test Results Disclosure
      4.3.1. OMP2012 Performance Metrics
      4.3.2. OMP2012 Energy Metrics
      4.3.3. Metric Selection
      4.3.4. Estimates are allowed
      4.3.5. Performance changes for production systems
   4.4. Required Disclosures
   4.5. Research and Academic usage of OMP2012
   4.6. Fair Use
   4.7. Submitting Results to SPEC
5. Run Rule Exceptions
This section is an overview of the purpose, definitions, methods, and assumptions for the SPEC OMP2012 run rules. The purpose of the SPEC OMP2012 benchmark and its run rules is to further the cause of fair and objective benchmarking of high-performance computing systems. The rules help ensure that published results are meaningful, comparable to other results, and reproducible. SPEC believes that the user community benefits from an objective series of tests which serve as a common reference.
Per the SPEC license agreement, all SPEC OMP2012 results disclosed in public -- whether in writing or in verbal form -- must adhere to the SPEC OMP2012 Run and Reporting Rules, or be clearly described as estimates.
A published SPEC OMP2012 result means three things:
A published SPEC OMP2012 result is an empirical report of performance observed when carrying out certain computation- and communication-intensive tasks.
SPEC supplies the OMP2012 benchmarks in the form of source code, which
testers are not allowed to modify except under certain very restricted
circumstances. SPEC OMP2012 includes 14 benchmarks. The SPEC OMP2012
benchmarks are work-based benchmarks. The amount of work is fixed,
and the time used to perform the work is measured.
The tester supplies the compilers and the System Under Test (SUT). In addition, the tester provides a config file, which sets appropriate optimization flags and, where needed, portability flags. SPEC provides example config files in the config subtree, as well as documentation on how to create a config file in Docs/config.html. SPEC supplies tools which automatically:
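For illustration, a minimal config file along the lines of the examples shipped in the config subtree might look like the sketch below. The compiler paths and flags shown here are placeholders, not recommendations; see Docs/config.html for the authoritative syntax.

```
# Illustrative sketch only -- consult Docs/config.html and the example
# config files shipped with the suite for real-world settings.
CC  = /usr/bin/gcc
CXX = /usr/bin/g++
FC  = /usr/bin/gfortran

default=base:
   COPTIMIZE   = -O2 -fopenmp
   CXXOPTIMIZE = -O2 -fopenmp
   FOPTIMIZE   = -O2 -fopenmp
```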
The report that certain performance has been observed is meaningful only if the conditions of observation are stated. SPEC therefore requires that a published result include a description of all performance-relevant conditions.
It is assumed that the tester:
The person who actually carries out the test is, therefore, the first and the most important audience for these run rules. The rules attempt to help the tester by trying to be clear about what is and what is not allowed.
The intention of this benchmark suite is to measure the performance of applications using OpenMP as the means to implement parallel computation. Hybrid models of parallelism, which in addition to OpenMP use other forms of parallel computation (such as automatic parallelization compiler features, multi-threaded math or scientific libraries, or MPI calls), are not allowed.
A published SPEC OMP2012 result is a declaration that the observed level of performance can be obtained by others. Such declarations are widely used by vendors in their marketing literature, and are expected to be meaningful to ordinary customers.
It is expected that later testers can obtain a copy of the SPEC OMP2012 suite, obtain the components described in the original result, and reproduce the claimed performance, within a small range to allow for run-to-run variation.
Therefore, it is expected that the components used in a published result can in fact be obtained, with the level of quality commonly expected for products sold to ordinary customers. Different components are subject to different standards, described below:
Subcomponents are required to:
The judgment of whether a component meets the above list may sometimes pose difficulty, and various references are given in these rules to provide guidelines for such judgment. But by way of introduction, imagine a vendor-internal version of a compiler, designated only by an internal code name, unavailable to customers, which frequently generates incorrect code. Such a compiler would fail to provide a suitable environment for general programming, and would not be ready for use in a SPEC OMP2012 result.
A published SPEC OMP2012 result carries an implicit claim that the performance methods it employs are more than just "prototype" or "experimental" or "research" methods; it is a claim that there is a certain level of maturity and general applicability in its methods. Unless clearly described as an estimate, a published SPEC result is a claim that the performance methods employed (whether hardware or software, compiler, or other):
SPEC is aware of the importance of optimizations in producing the best performance. SPEC is also aware that it is sometimes hard to draw an exact line between legitimate optimizations that happen to benefit SPEC benchmarks, versus optimizations that exclusively target the SPEC benchmarks. However, with the list above, SPEC wants to increase awareness of implementers and end users to issues of unwanted benchmark-specific optimizations that would be incompatible with SPEC's goal of fair benchmarking.
The tester must describe the performance methods that are used in terms that a performance-aware user can follow, so that users can understand how the performance was obtained and can determine whether the methods may be applicable to their own applications. The tester must be able to make a credible public claim that a class of applications in the real world may benefit from these methods.
"Peak" metrics may be produced by building each benchmark in the suite with a set of optimizations individually selected for that benchmark, and running them with environment settings individually selected for that benchmark. The optimizations selected must adhere to the set of general benchmark optimization rules described in section 2.2 below. This may also be referred to as "aggressive compilation".
"Base" metrics must be produced by building all the benchmarks in the suite with a common set of optimizations, and running them with environment settings common to all the benchmarks in the suite. In addition to the general benchmark optimization rules (section 2.2), base optimizations must adhere to a stricter set of rules described in section 2.3.
These additional rules serve to form a "baseline" of performance that can be obtained with a single set of compiler switches, single-pass make process, and a high degree of portability, safety, and performance.
Rules for building the benchmarks are described in section 2.
Power measurements may be produced. The system configuration used for the measurement of power must be in accordance with Section 3.1.3. The power measurement itself must be in accordance with Section 3.3 and the SPEC Power and Performance Benchmark Methodology, version 2.1 (08/17/2011).
Note: DC power sources are not currently supported by the power measurement framework. Contact SPEC HPG for discussion and review if you wish to propose adding support for DC line voltage sources. Contact information may be found via the SPEC web site, http://www.spec.org.
SPEC OMP2012 metrics may be estimated, with the exception of the power measurement. All estimates must be clearly designated as such.
This philosophy section has described how a "result" has certain characteristics: e.g. a result is an empirical report of performance, includes a full disclosure of performance-relevant conditions, can be reproduced, uses mature performance methods. By contrast, estimates may fail to provide one or even all of these characteristics.
Nevertheless, estimates have long been seen as valuable for SPEC benchmarks. Estimates are set at inception of a new chip design and are tracked carefully through analytic, simulation, and HDL (Hardware Description Language) models. They are validated against prototype hardware and, eventually, production hardware. With chip designs taking years, and requiring very large investments, estimates are central to corporate roadmaps. Such roadmaps may compare SPEC OMP2012 estimates for several generations of processors, and, explicitly or by implication, contrast one company's products and plans with another's.
SPEC wants the OpenMP benchmarks to be useful, and part of that usefulness is allowing the metrics to be estimated.
The key philosophical point is simply that estimates must be clearly distinguished from results.
SPEC encourages the review of OMP2012 results by the relevant subcommittee, and subsequent publication on SPEC's web site (http://www.spec.org/omp2012). SPEC uses a peer-review process prior to publication, in order to improve consistency in the understanding, application, and interpretation of these run rules.
Review by SPEC is not required. Testers may publish rule-compliant results independently. No matter where published, all results publicly disclosed must adhere to the SPEC Run and Reporting Rules, or be clearly marked as estimates. SPEC may take action if the rules are not followed.
Any public use of SPEC OMP2012 results must, at the time of publication, adhere to the then-currently-posted version of SPEC's Fair Use Rules, http://www.spec.org/fairuse.html.
In cases where it appears that the run rules have not been followed, SPEC may investigate such a claim and require that a result be regenerated, or may require that the tester correct the deficiency (e.g. make the optimization more general purpose or correct problems with code generation).
The SPEC High Performance Group reserves the right to adapt the SPEC OMP2012 suite as it deems necessary to preserve its goal of fair benchmarking. Such adaptations might include (but are not limited to) removing benchmarks, modifying codes or workloads, adapting metrics, republishing old results adapted to a new metric, or requiring retesting by the original tester.
The OMP2012 benchmarks and data sets are intended to use up to 32 GB of memory, for sufficient numbers of threads. The suite is unlikely to run in 32-bit address spaces on most systems. SPEC is aware that some systems that are commonly described as "32-bit" may provide a smaller number of bits to user applications, for example if one or more bits are reserved to privileged code. SPEC is also aware that there are many ways to spend profligate amounts of virtual memory. Therefore, although 32-bit systems are within the design center for the OMP2012 suite, SPEC does not guarantee any particular memory size for the benchmarks, nor that they will necessarily fit on all systems that are described as 32-bit.
While the benchmarks have been tested extensively as 64-bit binaries on a range of systems, you are welcome to run them as 32-bit binaries subject to the restrictions in sections 2.2.3, 2.2.4 and 2.3.6. The SPEC HPG committee is unlikely to accommodate any source-code changes enabling a benchmark to run as a 32-bit binary.
The SPEC OMP2012 benchmarks are written in Fortran, C and C++. If benchmarks fail due to non-compliance with the appropriate Language Standard, the SPEC HPG Committee will be inclined to approve performance-neutral source-code changes.
The SPEC OMP2012 benchmarks have been tested on Linux/UNIX systems. It is not our intent to exclude them from working on other platforms. The burden of porting the benchmarks and tools to other operating systems is likely to fall on you, however, if you decide to submit results. Section 5 provides for exceptional cases where the standard run- and submission- rules cannot be followed.
The SPEC OMP2012 suite is written to comply with the OpenMP Standard 3.0. If benchmarks fail due to non-compliance with the OpenMP Standard, the SPEC HPG Committee will be inclined to approve performance-neutral source-code changes. In cases where the library is non-compliant or imposes some fundamental limitation, the SPEC HPG Committee is inclined to advocate fixing the library rather than accept changes to the benchmark source.
This philosophy section is intended to introduce concepts of fair benchmarking. It is understood that in some cases, this section uses terms that may require judgment, or which may lack specificity. For more specific requirements, please see the sections below.
In case of a conflict between this philosophy section and a run rule in one of the sections below, normally the run rule found below takes priority.
Nevertheless, there are several conditions under which questions should be resolved by reference to this section: (a) self-conflict: if rules below are found to impose incompatible requirements; (b) ambiguity: if they are unclear or silent with respect to a question that affects how a result is obtained, published, or interpreted; (c) obsolescence: if the rules below are made obsolete by changing technical circumstances or by directives from superior entities within SPEC.
When questions arise as to interpretation of the run rules:
SPEC has adopted a set of rules defining how the SPEC OMP2012 benchmark suite must be built and run to produce peak and base metrics.
With the release of the SPEC OMP2012 suite, a set of tools based on GNU Make and Perl5 are supplied to build and run the benchmarks. To produce publication-quality results, these SPEC tools must be used. This helps ensure reproducibility of results by requiring that all individual benchmarks in the suite are run in the same way and that a configuration file is available that defines the optimizations used.
The primary tool is called runspec (runspec.bat for Microsoft Windows). It is described in the runspec documentation in the Docs subdirectory of the SPEC root directory -- in a Bourne shell, that directory would be referred to as ${SPEC}/Docs/; on Microsoft Windows, %SPEC%\Docs\.
Some Fortran programs need to be preprocessed, for example to choose variable sizes depending on whether -DSPEC_LP64 has been set. Fortran preprocessing must be done using the SPEC-supplied preprocessor, even if the vendor's compiler has its own preprocessor. The runspec tool will automatically enforce this requirement by invoking the SPEC preprocessor.
SPEC supplies pre-compiled versions of the tools for a variety of platforms. If a new platform is used, please see tools-build[.html] in the Docs directories for information on how to build the tools, and how to obtain approval for them. SPEC's approval is required for the tools build, so a log must be generated during the build.
For more complex ways of compilation, SPEC has provided hooks in the tools so that such compilation and execution is possible (see the tools documentation for details). If, for some reason, building and running with the SPEC tools does not work for your environment, the test sponsor may ask for permission to use performance-neutral alternatives (see section 5).
When runspec is used to build the SPEC OMP2012 benchmarks, it must be used in generally available, documented, and supported environments (see section 1), and any aspects of the environment that contribute to performance must be disclosed to SPEC (see section 4).
On occasion, it may be possible to improve run time performance by environmental choices at build time. For example, one might install a performance monitor, turn on an operating system feature such as BIGPAGES, or set an environment variable that causes the cc driver to invoke a faster version of the linker.
It is difficult to draw a precise line between environment settings that are reasonable versus settings that are not. Some settings are obviously not relevant to performance (such as hostname), and SPEC makes no attempt to regulate such settings. But for settings that do have a performance effect, for the sake of clarity, SPEC has chosen that:
As described in section 1, it is expected that testers can reproduce other testers' results. In particular, it must be possible for a new tester to compile both the base and peak benchmarks for an entire suite (to measure both SPECompG_base2012 and SPECompG_peak2012) in one execution of runspec, with appropriate command line arguments and an appropriate configuration file, and obtain executable binaries that are (from a performance point of view) equivalent to the binaries used by the original tester.
The simplest and least error-prone way to meet this requirement is for the original tester to take production hardware, production software, a SPEC config file, and the SPEC tools and actually build the benchmarks in a single invocation of runspec on the System Under Test (SUT). But SPEC realizes that there is a cost to benchmarking and would like to address this, for example through the rules that follow regarding cross-compilation and individual builds. However, in all cases, the tester is taken to assert that the compiled executables will exhibit the same performance as if they all had been compiled with a single invocation of runspec (see 2.1.7).
SPEC OMP2012 base binaries must be built using the environment rules of section 2.1.2, and must not rely upon any changes to the environment during the build.
Note 1: Base cross compilations using multiple hosts are allowed (2.1.5), but the performance of the resulting binaries must not depend upon environmental differences among the hosts. It must be possible to build performance-equivalent base binaries with one set of switches (2.3.1), in one execution of runspec (2.1.3), on one host, with one environment (2.1.2).
For a peak build, the environment may be changed, subject to the following constraints:
Note 2: Peak cross compilations using multiple hosts are allowed (2.1.5), but the performance of the resulting binaries must not depend upon environmental differences among the hosts. It must be possible to build performance-equivalent peak binaries with one config file, in one execution of runspec (2.1.3), in the same execution of runspec that built the base binaries, on one host, starting from the environment used for the base build (2.1.2), and changing that environment only through config file hooks (2.1.4).
It is permitted to use cross-compilation, that is, a building process where the benchmark executables are built on a host (or hosts) that differ(s) from the SUT. The runspec tool must be used on all systems (typically with -a build on the host(s) and -a validate on the SUT).
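As an illustration of that division of labor, a cross-compilation flow typically looks like the sketch below (the config file name is hypothetical):

```
# On the build host: compile the benchmarks, but do not run them.
runspec -c myconfig -a build all

# Copy the SPEC tree (including binaries and config file) to the SUT,
# then on the SUT: run and validate without rebuilding.
runspec -c myconfig -a validate all
```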
If all systems belong to the same product family and if the software used to build the executables is available on all systems, this does not need to be documented. In the case of a true cross compilation, (e.g. if the software used to build the benchmark executables is not available on the SUT, or the host system provides performance gains via specialized tuning or hardware not on the SUT), the host system(s) and software used for the benchmark building process must be documented in the Notes section. See section 4.
It is permitted to use more than one host in a cross-compilation. If more than one host is used in a cross-compilation, they must be sufficiently equivalent so as not to violate rule 2.1.3. That is, it must be possible to build the entire suite on a single host and obtain binaries that are equivalent to the binaries produced using multiple hosts.
The purpose of allowing multiple hosts is so that testers can save time when recompiling many programs. Multiple hosts must NOT be used in order to gain performance advantages due to environmental differences among the hosts. In fact, the tester must exercise great care to ensure that any environment differences are performance neutral among the hosts, for example by ensuring that each has the same version of the operating system, the same performance software, the same compilers, and the same libraries. The tester must exercise due diligence to ensure that differences that appear to be performance neutral - such as differing MHz or differing memory amounts on the build hosts - are in fact truly neutral.
Multiple hosts must NOT be used in order to work around system or compiler incompatibilities (e.g. compiling the C benchmarks on a different OS version than the Fortran benchmarks in order to meet the different compilers' respective OS requirements), since that would violate the Continuous Build rule (2.1.3).
It is permitted to build the benchmarks with multiple invocations of runspec, for example during a tuning effort. But the executables must be built using a consistent set of software. If a change to the software environment is introduced (for example, installing a new version of the C compiler which is expected to improve the performance of one of the floating point benchmarks), then all affected benchmarks must be rebuilt (in this example, all the C benchmarks).
The previous four rules (2.1.3 through 2.1.6) may appear to contradict each other, but the key word in 2.1.3 is "possible".
Consider the following sequence of events:
- A tester has built a complete set of OMP2012 executable images ("binaries") on her usual host system.
- A hot new SUT arrives for a limited period of time. It has no compilers installed.
- A SPEC OMP2012 tree is installed on the SUT, along with the binaries and config file generated on the usual host.
- It is learned that performance could be improved if the peak version of 999.sluggard were compiled with -O5 instead of -O4.
- On the host system, the tester edits the config file to change to -O5 for 999.sluggard, and issues the command:
  runspec -c myconfig -D -a build -T peak sluggard
- The tester copies the new binary and config file to the SUT.
- A complete run is started by issuing the command:
  runspec -c myconfig -a validate all
- Performance is as expected, and the results are published at SPEC (including the config file).
In this example, the tester is taken to be asserting that the above sequence of events produces binaries that are, from a performance point of view, equivalent to binaries that it would have been possible to build in a single invocation of the tools.
If there is some optimization that can only be applied to individual benchmark builds, but which it is not possible to apply in a continuous build, the optimization must not be used.
Rule 2.1.7 is intended to provide some guidance about the kinds of practices that are reasonable, but the ultimate responsibility for result reproducibility lies with the tester. If the tester is uncertain whether a cross-compile or an individual benchmark build is equivalent to a full build on the SUT, then a full build on the SUT is required (or, in the case of a true cross-compilation which is documented as such, then a single runspec -a build is required on a single host.) Although full builds add to the cost of benchmarking, in some instances a full build in a single runspec may be the only way to ensure that results will be reproducible.
The following rules apply to compiler flag selection for SPEC OMP2012 Peak and Base Metrics. Additional rules for Base Metrics follow in section 2.3.
Benchmark source file, variable, and subroutine names must not be used within optimization flags or compiler/build options.
Identifiers used in preprocessor directives to select alternative source code are also forbidden, except for a rule-compliant library substitution (2.2.2), an approved portability flag (2.2.4), or a specifically provided SPEC-approved alternate source (src.alt).
For example, if a benchmark source code uses one of:
#ifdef IDENTIFIER
#ifndef IDENTIFIER
#if defined IDENTIFIER
#if !defined IDENTIFIER
to provide alternative source code under the control of a compiler option such as -DIDENTIFIER, such a switch may not be used unless it meets the criteria of 2.2.2 or 2.2.4.
Flags which substitute pre-computed (e.g. library-based) routines for routines defined in the benchmark on the basis of the routine's name must not be used. Exceptions are:
Note: This rule does not forbid flags that select alternative implementations of library functions defined in an ANSI/ISO language standard or the OpenMP language standard. For example, such flags might select an optimized library of these functions, or allow them to be inlined.
Flags that change a data type size to a size different from the default size of the compilation system are not allowed to be used in base builds. Exceptions are:
which are acceptable as portability flags in base builds, and may be used as (or to facilitate) optimizations in peak builds.
Pointer size changes, in particular, may be used to make the benchmark binaries execute within a 32-bit or 64-bit address space. Section 2.3.6 states the restrictions on using mixtures of 32-bit and 64-bit binaries in base measurements.
Rule 2.3.2 requires that all benchmarks use the same flags in base. Portability flags are an exception to this rule: they may differ from one benchmark to another, even in base. Such flags are subject to two major requirements:
The initial published results for OMP2012 will include a reviewed set of portability flags on several operating systems; later users who propose to apply additional portability flags must prepare a justification for their use.
A proposed portability flag will normally be approved if one of the following conditions holds:
A proposed portability flag will normally not be approved unless it is essential in order to successfully build and run the benchmark.
If more than one solution can be used for a problem, the subcommittee will review attributes such as precedent from previously published results, performance neutrality, standards compliance, amount of code affected, impact on the expressed original intent of the program, and good coding practices (in rough order of priority).
If a benchmark is discovered to violate the relevant standard, that may or may not be reason for the subcommittee to grant a portability flag. If the justification for a portability flag is standards compliance, the tester must include a specific reference to the offending source code module and line number, and a specific reference to the relevant sections of the appropriate standard. The tester should also address impact on the other attributes mentioned in the previous paragraph.
If a given portability problem (within a given language) occurs in multiple places within a suite, then, in base, the same method(s) must be applied to solve all instances of the problem.
If a library is specified as a portability flag, SPEC may request that the table of contents of the library be included in the disclosure.
In addition to the rules listed in section 2.2 above, the selection of optimizations to be used to produce SPEC OMP2012 Base Metrics includes the following:
The optimizations used are expected to be safe, and it is expected that system or compiler vendors would endorse the general use of these optimizations by customers who seek to achieve good application performance.
The requirements that optimizations be safe, and that they generate correct code for a class of programs larger than the suite itself (rule 1.4), are normally interpreted as requiring that the system, as used in base, implement the language correctly. "The language" is defined by the appropriate ANSI/ISO standard (C99, Fortran-95, C++ 98).
The principle of standards conformance is not automatically applied, because SPEC has historically allowed certain exceptions:
Otherwise, a deviation from the standard that is not performance neutral, and that gives the particular implementation a OMP2012 performance advantage over standard-conforming implementations, is considered an indication that the requirements about "safe" and "correct code" optimizations are probably not met. Such a deviation may be a reason for SPEC to find a result not rule-conforming.
If an optimization causes any SPEC OMP2012 benchmark to fail to validate, and if the relevant portion of this benchmark's code is within the language standard, then the failure is taken as additional evidence that an optimization is not safe.
Regarding C++: Note that for C++ applications, the standard calls for support of both run-time type information (RTTI) and exception handling. The compiler, as used in base, must enable these.
For example, a compiler enables exception handling by default; it can be turned off with --noexcept. The switch --noexcept is not allowed in base.
For example, a compiler defaults to no run time type information, but allows it to be turned on via --rtti. The switch --rtti must be used in base.
Regarding accuracy: Because language standards generally do not set specific requirements for accuracy, SPEC has also chosen not to do so. Nevertheless:
In cases where the class of appropriate applications appears to be so narrowly drawn as to constitute a "benchmark special", that may be a reason for SPEC to find a result non-conforming.
In base, the same compiler must be used for all modules of a given language within a benchmark suite. Except for portability flags (see 2.2.4 above), all flags or options that affect the transformation process from SPEC-supplied source to completed executable must be the same, including but not limited to:
All flags must be applied in the same order for all compiles of a given language.
Note that the SPEC tools provide methods to set flags on a per-language basis.
For example, if a tester sets:
default=base:
   COPTIMIZE = -O4
   FOPTIMIZE = -O5
then the C benchmarks will be compiled with -O4 and the Fortran benchmarks with -O5. (This is legal: there is no requirement to compile C codes with the same optimization level as Fortran codes).
Regarding benchmarks that have been written in more than one language:
In a mixed-language benchmark, the tools automatically compile each source module with the options that have been set for its language.
Continuing the example just above, a benchmark that uses both C and Fortran would have its C modules compiled with -O4 and its Fortran modules with -O5. This, too, is legal.
In order to link an executable for a mixed-language benchmark, the tools need to decide which link options to apply (e.g. those defined in CLD/CLDOPT vs. those in FLD/FLDOPT vs. those in CXXLD/CXXLDOPT). This decision is based on benchmark classifications that were determined during development of OMP2012. Because of link-time library requirements, the classifications were based neither on the percentage of code nor on the language of the main routine; rather, each such benchmark was classified as either F (for mixed Fortran/C benchmarks) or CXX (for benchmarks that include C++).
Link options must be consistent in a base build. For example, if FLD is set to /usr/opt/advanced/ld for pure Fortran benchmarks, the same setting must be used for any mixed language benchmarks that have been classified, for purpose of linking, as Fortran.
Inter-module optimization and mixed-language benchmarks:
For mixed-language benchmarks, if the compilers have an incompatible inter-module optimization format, flags that require inter-module format compatibility may be dropped from base optimization of mixed-language benchmarks. The same flags must be dropped from all benchmarks that use the same combination of languages. All other base optimization flags for a given language must be retained for the modules of that language.
For example, suppose that a suite has exactly two benchmarks that employ both C and Fortran, namely 997.CFmix1 and 998.CFmix2. A tester uses a C compiler and Fortran compiler that are sufficiently compatible to be able to allow their object modules to be linked together - but not sufficiently compatible to allow inter-module optimization. The C compiler spells its intermodule optimization switch -ifo, and the Fortran compiler spells its switch --intermodule_optimize. In this case, the following would be legal:
default=base:
   COPTIMIZE = -fast -O4 -ur=8 -ifo
   FOPTIMIZE = --prefetch:all --optimize:5 --intermodule_optimize
   FLD       = /usr/opt/advanced/ld
   FLDOPT    = --nocompress --lazyload --intermodule_optimize

997.CFmix1,998.CFmix2=base:
   COPTIMIZE = -fast -O4 -ur=8
   FOPTIMIZE = --prefetch:all --optimize:5
   FLD       = /usr/opt/advanced/ld
   FLDOPT    = --nocompress --lazyload
Following the precedence rules as explained in config.html, the above section specifiers set default tuning for the C and Fortran benchmarks, but the tuning is modified for the two mixed-language benchmarks to remove switches that would have attempted inter-module optimization.
An assertion flag is one that supplies semantic information that the compilation system did not derive from the source statements of the benchmark.
With an assertion flag, the programmer asserts to the compiler that the program has certain nice properties that allow the compiler to apply more aggressive optimization techniques (for example, that there is no aliasing via C pointers). The problem is that there can be legal programs (possibly strange, but still standard-conforming programs) where such a property does not hold. These programs could crash or give incorrect results if an assertion flag is used. This is the reason why such flags are sometimes also called "unsafe flags". Assertion flags should never be applied to a production program without previous careful checks; therefore they must not be used for base.
Exception: a tester is free to turn on a flag that asserts that the benchmark source code complies with the relevant standard (e.g. -ansi_alias). Note, however, that if such a flag is used, it must be applied to all compiles of the given language (C, C++, or Fortran), and all affected programs must still pass SPEC's validation tools with correct answers.
Base results may use flags which affect the numerical accuracy or sensitivity by reordering floating-point operations based on algebraic identities, provided of course that the result validates.
The system environment must not be manipulated during a build of the base binaries. For example, suppose that an environment variable called BIGPAGES can be set to yes or no, and the default is no. The tester must not change the choice during the build of the base binaries. See section 2.1.4.
Normally, it is expected that the data model (such as pointer sizes, sizes of int, etc) will be consistent in base for all compilations of a given language. In particular, compilers provide switches like -m64, and several of the benchmark source codes supply -DSPEC_LP64, -DSPEC_P64, and/or -DSPEC_ILP64 to select between data declarations of different sizes. If one of these flags is used in base, then normally it should be set for all benchmarks of the given language in the suite for base.
If for some reason it is not practical to use a consistent data model in base, the following rules apply:
If no consistent combination of benchmark code switches and rule 1 can be found to work, possibly due to mixed-language benchmarks in the suite, the tester could describe the problem to SPEC and request that SPEC allow use of an inconsistent data model in base. SPEC would consider such a request using the same process outlined in rule 2.2.4, including consideration of the technical arguments as to the nature of the data model problem and consideration of the practicality of technical alternatives, if any. SPEC might or might not grant the portability flag. SPEC might also choose to fix source code limitations, if any, that are causing difficulty.
Frequently, performance may be improved via optimizations that work across source modules, for example -ifo, -xcrossfile, or -IPA. Some compilers may require the simultaneous presentation of all source files for inter-file optimization, as in:
cc -ifo -o a.out file1.c file2.c
Other compilers may be able to do cross-module optimization even with separate compilation, as in:
cc -ifo -c -o file1.o file1.c
cc -ifo -c -o file2.o file2.c
cc -ifo -o a.out file1.o file2.o
By default, the SPEC tools operate in the latter mode, but they can be switched to the former through the config file option ONESTEP=yes.
ONESTEP is not allowed in base. Cross-module optimization without the use of ONESTEP is allowed in base.
Switches that cause data to be aligned on natural boundaries may be used in base.
Feedback-directed optimization is not allowed.
SPEC OMP2012 allows source code modifications for peak runs. Changes to the directives and source are permitted to facilitate generally useful and portable optimizations, with a focus on improving scalability. Changes in algorithm are not permitted. Vendor-unique extensions to OpenMP are allowed, provided they are portable.
Examples of compiler flags that are allowed are as follows:
Qualifications for permitted optimizations include:
Examples of permitted source code modifications and optimizations are as follows:
Examples of optimizations or source code modifications that are not permitted are as follows:
Full source and a written report of the nature and justification of the source changes is required with any peak submission having source changes. These reports will be made public on the SPEC website.
Source code added by a vendor is expected to be portable to other compilers and architectures. In particular, the source code is required to run on at least one compiler/run-time library/architecture combination other than the vendor's own platform.
All source code changes are subject to review by the HPG committee.
Source code modifications are protected by a six-week publication window. That is, for a period of six weeks after the publication of results based on a set of source code changes, results based on the same source code modification or technique may not be published without the approval of the original tester.
The operating system state (multi-user, single-user, init level N) may be selected by the tester. This state along with any changes in the default configuration of daemon processes or system tuning parameters must be documented in the notes section of the results disclosure. (For Microsoft Windows, system state is normally "Default"; a list of services that are shut down should be provided, if any, e.g. print spooler shut down).
SPEC OMP2012 requires that a single file system be used to contain the installed directory tree. Additional file systems may be used to store temporary and run directories. A single shared run-directory must be used for each benchmark in a base run. Peak runs are allowed to replicate run directories, and the directories and file systems can be arranged differently for different benchmarks.
SPEC allows any type of file system (disk-based, memory-based, NFS, DFS, FAT, NTFS etc.) to be used. The type and arrangement of the directories and file systems must be disclosed in reported results.
Line Voltage Source
The line voltage source used for measurements is the main AC power as provided by local utility companies. Power generated from other sources often has unwanted harmonics which are incapable of being measured correctly by many power analyzers, and thus would generate inaccurate results.
The usage of an uninterruptible power source (UPS) as the line voltage source is allowed, but the voltage output must be a pure sine-wave. This usage must be specified in the Notes section.
For situations in which the appropriate voltages are not provided by local utility companies (e.g. measuring a server in the United States which is configured for European markets, or measuring a server in a location where the local utility line voltage does not meet the required characteristics), an AC power source may be used, and the power source must be specified in the notes section of the disclosure report. In such situation the following requirements must be met, and the relevant measurements or power source specifications disclosed in the notes section of the disclosure report:
The intent is that the AC power source does not interfere with measurements such as the power factor by trying to adjust its output power to improve the power factor of the load.
Environmental Conditions
SPEC requires that power measurements be taken in an environment representative of the majority of usage environments. The intent is to discourage extreme environments that may artificially impact power consumption or performance of the server.
SPEC OMP2012 requires the following environmental conditions to be met:
If air cooling is used, the ambient temperature must be 20°C or above. If a different cooling method is used, the temperature range is unspecified.
Elevation: within documented operating specification of SUT
Humidity: within documented operating specification of SUT
Power Analyzer Setup
The power analyzer must be located between the AC line voltage source and the SUT. No other active components are allowed between the AC line voltage source and the SUT.
Power analyzer configuration settings that are set by SPEC PTDaemon must not be manually overridden.
Power Analyzer Specifications
The power analyzer needs to have been calibrated in the last 12 months.
To ensure comparability and repeatability of power measurements, SPEC requires the following attributes for the power measurement device used during the benchmark. Please note that a power analyzer may meet these requirements when used in some power ranges but not in others, due to the dynamic nature of power analyzer accuracy and crest factor. The use of a power analyzer's auto-ranging function is discouraged.
Uncertainty and Crest Factor
Measurements - the analyzer must report true RMS power (watts), voltage, amperes and power factor.
Uncertainty - Measurements must be reported by the analyzer with an overall uncertainty of 1% or less for the ranges measured during the benchmark run. Overall uncertainty means the sum of all specified analyzer uncertainties for the measurements made during the benchmark run.
Calibration - the analyzer must be able to be calibrated by a standard traceable to NIST (U.S.A.) (http://nist.gov) or a counterpart national metrology institute in other countries. The analyzer must have been calibrated within the past year.
Crest Factor - The analyzer must provide a current crest factor of a minimum value of 3. For analyzers which do not specify the crest factor, the analyzer must be capable of measuring an amperage spike of at least 3 times the maximum amperage measured during any 1-second sample of the benchmark run.
Logging - The analyzer must have an interface that allows its measurements to be read by the SPEC PTDaemon. The reading rate supported by the analyzer must be at least 1 set of measurements per second, where set is defined as watts and at least 2 of the following readings: volts, amps and power factor. The data averaging interval of the analyzer must be either 1 (preferred) or 2 times the reading interval. "Data averaging interval" is defined as the time period over which all samples captured by the high-speed sampling electronics of the analyzer are averaged to provide the measurement set.
An analyzer with a vendor-specified uncertainty of +/- 0.5% of reading +/- 4 digits (0.4W), used in a test with a maximum wattage value of 200W, would have an "overall" uncertainty of ((0.5% * 200W) + 0.4W) = 1.4W, i.e. 1.4W / 200W = 0.7% at 200W.
An analyzer with a wattage range of 20-400W, with a vendor-specified uncertainty of +/- 0.25% of range +/- 4 digits (0.4W), used in a test with a maximum wattage value of 200W, would have an "overall" uncertainty of ((0.25% * 400W) + 0.4W) = 1.4W, i.e. 1.4W / 200W = 0.7% at 200W.
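The two worked examples above can be checked with a short calculation. The sketch below (Python; the wattage figures are the ones from the examples) computes overall uncertainty for both an "% of reading" and a "% of range" analyzer specification:

```python
def overall_uncertainty_reading(pct_of_reading, digit_error_w, reading_w):
    """Uncertainty spec of the form '+/- X% of reading +/- N digits'."""
    absolute_w = (pct_of_reading / 100.0) * reading_w + digit_error_w
    return absolute_w / reading_w  # as a fraction of the measured value

def overall_uncertainty_range(pct_of_range, range_max_w, digit_error_w, reading_w):
    """Uncertainty spec of the form '+/- X% of range +/- N digits'."""
    absolute_w = (pct_of_range / 100.0) * range_max_w + digit_error_w
    return absolute_w / reading_w

# Example 1: +/- 0.5% of reading +/- 4 digits (0.4 W), 200 W maximum reading
print(overall_uncertainty_reading(0.5, 0.4, 200.0))        # 0.007, i.e. 0.7%

# Example 2: +/- 0.25% of a 400 W range +/- 4 digits (0.4 W), 200 W maximum reading
print(overall_uncertainty_range(0.25, 400.0, 0.4, 200.0))  # 0.007, i.e. 0.7%
```

Both examples come in at 0.7%, below the 1% limit required by the uncertainty rule above.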
Temperature Sensor Specifications
Temperature must be measured no more than 50mm in front of (upwind of) the main airflow inlet of the SUT. To ensure comparability and repeatability of temperature measurements, SPEC requires the following attributes for the temperature measurement device used during the benchmark:
Logging - The sensor must have an interface that allows its measurements to be read by the benchmark harness. The reading rate supported by the sensor must be at least 4 samples per minute.
Accuracy - Measurements must be reported by the sensor with an overall accuracy of +/- 0.5 degrees Celsius or better for the ranges measured during the benchmark run.
Supported and Compliant Devices
A reportable run consists of three runs of the suite. The reportable result will be the median of these three runs.
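As an illustration of the three-run rule, the sketch below (Python; the benchmark timings are hypothetical, and the assumption that the median is taken per benchmark across the three timed runs is the reading adopted here) selects the reported value for each benchmark:

```python
from statistics import median

# Hypothetical ref-workload runtimes (seconds) from three runs of the suite
runs = [
    {"350.md": 410.2, "351.bwaves": 522.9},
    {"350.md": 408.7, "351.bwaves": 525.1},
    {"350.md": 412.5, "351.bwaves": 524.0},
]

# For each benchmark, the median of the three observed runtimes is reported
reported = {bench: median(run[bench] for run in runs) for bench in runs[0]}
print(reported)  # {'350.md': 410.2, '351.bwaves': 524.0}
```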
For SPECompG_base2012 measurements, the tester must select a single value to use as the number of threads to be applied to all benchmarks in the suite.
For SPECompG_peak2012, the tester is free to choose the number of threads for each individual benchmark independently of the other benchmarks, and this number may be less than, equal to, or greater than the number of threads specified for base.
The config file directive submit is the preferred means to assign work to processors. The tester may, if desired:
The submit directive can be used to change the run time environment (see section 3.4). In addition, if a testbed description is referenced by a submit directive, the same description must be used by all benchmarks in a base run. This means that in base, the submit directive may only differ between benchmarks in the suite for portability reasons.
In peak, different benchmarks may use different submit directives.
The system configuration used for the measurement of power must be in accordance with Section 3.1.3 and the power measurement itself must be in accordance with the SPEC Power and Performance Benchmark Methodology, version 2.1 (08/17/2011). The SPEC OMP2012 benchmark tool set provides the ability to automatically gather measurement data from supported power analyzers and temperature sensors and integrate that data into the benchmark result. SPEC requires that the analyzers and sensors used in a submission be supported by the measurement framework, and be compliant with the specifications in Section 3.1.3. The tools provided by SPEC OMP2012 for power measurement (namely PTDaemon), or a more recent version provided by SPECpower must be used to run and produce measured SPEC OMP2012 results. SPEC OMP2012 includes PTDaemon version 1.4.2. For the latest version of the PTDaemon and for the list of accepted measurement devices, see this page.
Run-time environment settings are treated similarly to compilation options in SPEC OMP2012. The rules are as follows, from highest precedence to lowest:
submit = export MP_SPIN=wait; ....
Settings are hard to discern if too many details are packed into the submit line. One advantage, however, is that changing the settings does not cause anything to rebuild.
env_vars = 1
ENV_MP_SPIN = wait
The settings are quite clear from reading the text of the config file. A disadvantage is that the settings also apply to the build phase, and changing a setting will cause the affected benchmarks to rebuild.
export MP_SPIN=... runspec ...
The settings are invisible to the automatic report generation and must be carefully documented.
These SPEC OMP2012 run-time environment rules are consistent with MPI2007 but deviate from CPU2000 and CPU2006 because environment settings affecting the OpenMP and MPI libraries play an essential role in application performance.
All benchmark executions, including the validation steps, contributing to a particular submittable report must occur continuously, that is, in one execution of runspec. For a reportable run, the runspec tool will run all three workloads (test, train, and ref), and will ensure that the correct answer is obtained for all three. (Note: the execution and validation of test and train is not part of the timing of the benchmark - it is only an additional test for correct operation of the binary.)
If a submittable report will contain both base and peak measurements, a single runspec invocation must be used for the runs. When both base and peak are run, the tools run the base executables first, followed by the peak executables.
It is permitted to publish base results as peak. This can be accomplished in various ways, all of which are allowed:
Note: It is permitted but not required to compile in the same runspec invocation as the execution. See Section 2.1.5 regarding cross compilation.
As used in these run rules, the term "run-time dynamic optimization" (RDO) refers broadly to any method by which a system adapts to improve performance of an executing program based upon observation of its behavior as it runs. This is an intentionally broad definition, intended to include techniques such as:
RDO may be under control of hardware, software, or both.
Understood this broadly, RDO is already commonly in use, and usage can be expected to increase. SPEC believes that RDO is useful, and does not wish to prevent its development. Furthermore, SPEC views at least some RDO techniques as appropriate for base, on the grounds that some techniques may require no special settings or user intervention; the system simply learns about the workload and adapts.
However, benchmarking a system that includes RDO presents a challenge. A central idea of SPEC benchmarking is to create tests that are repeatable: if you run a benchmark suite multiple times, it is expected that results will be similar, although there will be a small degree of run-to-run variation. But an adaptive system may recognize the program that it is asked to run, and "carry over" lessons learned in the previous execution; therefore, it might complete a benchmark more quickly each time it is run. Furthermore, unlike in real life, the programs in the benchmark suite are presented with the same inputs each time they are run: value prediction is too easy if the inputs never change. In the extreme case, an adaptive system could be imagined that notices which program is about to run, notices what the inputs are, and which reduces the entire execution to a print statement. In the interest of benchmarking that is both repeatable and representative of real-life usage, it is therefore necessary to place limits on RDO carry-over.
Run time dynamic optimization is allowed, subject to the usual provisions that the techniques must be generally available, documented, and supported. It is also subject to the conditions listed in the rules immediately following.
Section 4.2 applies to run-time dynamic optimization: any settings which the tester has set to non-default values must be disclosed. Resources consumed by RDO must be included in the description of the hardware configuration as used by the benchmark suite.
For example, suppose that a system can be described as a 64-core system. After experimenting for a while, the tester decides that the optimum performance is achieved by dedicating 4 cores to the run-time dynamic optimizer, and running the benchmarks with only 60 threads. The system under test is still correctly described as a 64-core system, even though only 60 cores were used to run SPEC code.
Run time dynamic optimization is subject to Section 3.4: settings cannot be changed at run-time. But Note 2 of rule 3.4 also applies to RDO: for example, in peak it would be acceptable to compile a subset of the benchmarks with a flag that suggests to the run-time dynamic optimizer that code rearrangement should be attempted. Of course, Section 2.2.1 also would apply: such a flag could not tell RDO which routines to rearrange.
If run-time dynamic optimization is effectively enabled for base (after taking into account the system state at run-time and any compilation flags that interact with the run-time state), then RDO must comply with 2.3.1, the safety rule. It is understood that the safety rule has sometimes required judgment, including deliberation by SPEC in order to determine its applicability. The following is intended as guidance for the tester and for SPEC:
As described in section 3.7.1, SPEC has an interest in preventing carry-over of information from run to run. Specifically, no information may be carried over which identifies the specific program or executable image. Here are some examples of behavior that is, and is not, allowed.
It doesn't matter whether the information is intentionally stored, or just "left over"; if it's about a specific program, it's not allowed:
If information is left over from a previous run that is not associated with a specific program, that is allowed:
Any form of RDO that uses memory about a specific program is forbidden:
The system is allowed to respond to the currently running program, and to the overall workload:
SPEC requires a full disclosure of results and configuration details sufficient to reproduce the results. For results published on its web site, SPEC also requires that base results be published whenever peak results are published. If peak results are published outside of the SPEC web site (http://www.spec.org/omp2012/) in a publicly available medium, the tester must supply base results on request. Results published under non-disclosure, for company internal use, or as company confidential are not "publicly" available.
A full disclosure of results must include:
A full disclosure of results must include sufficient information to allow a result to be independently reproduced. If a tester is aware that a configuration choice affects performance, then they must document it in the full disclosure.
Note: this rule is not meant to imply that the tester must describe irrelevant details or provide massively redundant information.
For example, if the SuperHero Model 1 comes with a write-through cache, and the SuperHero Model 2 comes with a write-back cache, then specifying the model number is sufficient, and no additional steps need to be taken to document the cache protocol. But if the Model 3 is available with both write-through and write-back caches, then a full disclosure must specify which cache is used.
For information on how to publish a result on SPEC's web site, contact the SPEC office. Contact information is maintained at the SPEC web site, http://www.spec.org/.
If a tester publishes results for a hardware or software configuration that has not yet shipped,
Note 1: "Generally available" is defined in the SPEC High Performance Group Policy document, which can be found at http://www.spec.org/hpg/policy.html.
Note 2: It is acceptable to test larger configurations than customers are currently ordering, provided that the larger configurations can be ordered and the company is prepared to ship them.
For example, if the SuperHero is available in configurations of 1 to 1000 CPUs, but the largest order received to date is for 128 CPUs, the tester would still be at liberty to test a 1000 CPU configuration and publish the result.
A "pre-production", "alpha", "beta", or other pre-release version of a compiler (or other software) can be used in a test, provided that the performance-related features of the software are committed for inclusion in the final product.
The tester must practice due diligence to ensure that the tests do not use an uncommitted prototype with no particular shipment plans. An example of due diligence would be a memo from the compiler Project Leader which asserts that the tester's version accurately represents the planned product, and that the product will ship on date X.
The final, production version of all components must be generally available within 90 days after first public release of the result.
When specifying a software component name in the results disclosure, the component name that should be used is the name that customers are expected to be able to use to order the component, as best as can be determined by the tester. It is understood that sometimes this may not be known with full accuracy; for example, the tester may believe that the component will be called "TurboUnix V5.1.1" and later find out that it has been renamed "TurboUnix V5.2", or even "Nirvana 1.0". In such cases, an editorial request can be made to update the result after publication.
Some testers may wish to also specify the exact identifier of the version actually used in the test (for example, "build 20070604"). Such additional identifiers may aid in later result reproduction, but are not required; the key point is to include the name that customers will be able to use to order the component.
The configuration disclosure includes fields for both "Hardware Availability" and "Software Availability". In both cases, the date which must be used is the date on which the last component of the respective type becomes generally available. The date is specified as Mmm-YYYY, as in the following examples: Jan-2007, Feb-2007. The Month is abbreviated to three letters with the first letter capitalized. A hyphen separates the Month and Year fields. The Year field is specified with four digits.
Since all components must be available within 90 days of the first public release of the result, the first day of the specified Month (and Year) must fall within this 90 day window.
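The 90-day window can be checked mechanically. The sketch below (Python; the publication and availability dates are hypothetical) tests whether the first day of a declared Mmm-YYYY availability month falls within 90 days of first publication:

```python
from datetime import date, timedelta

MONTHS = {m: i + 1 for i, m in enumerate(
    ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
     "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"])}

def within_window(avail_mmm_yyyy, publication_date, window_days=90):
    """True if the first day of the Mmm-YYYY month is within the window."""
    mon, year = avail_mmm_yyyy.split("-")
    first_day = date(int(year), MONTHS[mon], 1)
    return first_day <= publication_date + timedelta(days=window_days)

# Hypothetical: result first published on 2012-11-14; window ends 2013-02-12
print(within_window("Jan-2013", date(2012, 11, 14)))  # True
print(within_window("Mar-2013", date(2012, 11, 14)))  # False
```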
If a software or hardware date changes, but still falls within 90 days of first publication, a result page may be updated on request to SPEC.
If a software or hardware date changes to more than 90 days after first publication, the result is considered Non Compliant. For procedures regarding Non Compliant results, see the SPEC High Performance Group Policy Document, http://www.spec.org/hpg/policy.html.
SPEC is aware that performance results for pre-production systems may sometimes be subject to change, for example when a last-minute bugfix reduces the final performance.
For results measured on pre-production systems, if the tester becomes aware of something that will reduce production system performance by more than 5% on an overall metric, the tester is required to republish the result, and the original result shall be considered Non Compliant.
Analogous rules apply to performance changes across post-production upgrades (Section 4.3.4).
The following sections describe the various elements that make up the disclosure of the system configuration tested. The SPEC tools allow setting this information in the configuration file, prior to starting the measurement (i.e. prior to the runspec command).
It is also acceptable to update the information after a measurement has been completed, by editing the rawfile. Rawfiles include a marker that separates the user-editable portion from the rest of the file.
# =============== do not edit below this point ===================
Edits are forbidden beyond that marker.
(There is information about rawfile updating in the rawformat section of the document utility.html.)
Details are
SPEC recommends that measurements be done on the actual systems for which results are claimed. Nevertheless, SPEC recognizes that there is a cost of benchmarking, and that multiple publications from a single measurement may sometimes be appropriate. For example, two systems badged as "Model A" versus "Model B" may differ only in the badge itself; in this situation, differences are sometimes described as only "paint deep", and a tester may wish to perform only a single test (i.e. the runspec tool is invoked only once, and multiple rawfiles are prepared with differing system descriptions).
Although paint is usually not a performance-relevant difference, for other differences it can be difficult to draw a precise line as to when two similar systems should no longer be considered equivalent. For example, what if Model A and B come from different vendors? Use differing firmware, power supplies, or line voltage? Support additional types or numbers of disks, or other devices?
For SPEC OMP2012, a single measurement may be published as multiple equivalent results provided that all of the following requirements are met:
Performance differences from factors such as those listed in the paragraph above (paint, vendor, firmware, and so forth) are within normal run-to-run variation.
The CPU is the same.
The motherboards are the same:
same motherboard manufacturer
same electrical devices (for example, IO support chips, memory slots, PCI slots)
same physical shape.
The memory systems are the same:
same caches
same number of memory modules
memory modules are run at the same speed
memory modules comply with same specifications, where applicable (for example, the same labels as determined by the JEDEC DDR3 DIMM Label Specification).
As tested, all hardware components are supported on both systems.
For example, suppose Model A and Model B meet the requirements listed above, including a motherboard with the same number of DIMM slots. The Model A can be fully populated with 96 DIMMs. Due to space and thermal considerations, the Model B can only be half-populated; i.e. it is not supported with more than 48 DIMMs. If the actual system under test is the Model A, the tester must fill only the DIMM slots that are allowed to be filled for both systems.
Disclosures must reference each other, and must state which system was used for the actual measurement. For example:
This result was measured on the Acme Model A. The Acme Model A and the Bugle Model B are equivalent.
No power measurement is made. A single power measurement may not be published as multiple equivalent systems.
When a single measurement is used for multiple systems, SPEC may ask for a review of the differences between the systems, may ask for substantiation of the requirements above, and/or may require that additional documentation be included in the publications.
The system will consist of one node. Both the Hardware and the Software are described:
An example may help to clarify these four points:
For example, when first introduced, the TurboBlaster series is available with only one instruction set, and runs at speeds up to 2GHz. Later, a second instruction set (known as "Arch2") is introduced and older processors are commonly, but informally, referred to as having employed "Arch1", even though they were not sold with that term at the time. Chips with Arch2 are sold at speeds of 2GHz and higher. The manufacturer has chosen to call both Arch1 and Arch2 chips by the same formal chip name (TurboBlaster).
The current fields are:
Regarding the fields in the above list that mention the word "enabled": if a node, chip, core, or thread is available for use during the test, then it must be counted. If one of these resources is disabled - for example by a firmware setting prior to boot - then it need not be counted, but the tester must exercise due diligence to ensure that disabled resources are truly disabled, and not silently giving help to the result.
Regarding the field (hw_ncoresperchip), the tester must count the cores irrespective of whether they are enabled.
Example: In the following tests, the SUT is a Turboblaster Model 32-64-256, which contains 32 chips. Each chip has 2 cores. Each core can run 4 hardware threads.
Test 1 (nothing disabled):
hw_ncores: 64
hw_nchips: 32
hw_ncoresperchip: 2
hw_nthreadspercore: 4
Test 2 (one core per chip disabled, hardware threading disabled):
hw_ncores: 32
hw_nchips: 32
hw_ncoresperchip: 2
hw_nthreadspercore: 1
Note: if resources are disabled, the method(s) used for such disabling must be documented and supported.
1 to 8 TurboCabinets. Each TurboCabinet contains 4 chips.
This section describes the compiler invocation and running of the benchmarks. Details are:
System State: Run level 5 (multi-user with display manager)
On other systems:
Note: some Unix (and Unix-like) systems have deprecated the concept of "run levels", preferring other terminology for state description. In such cases, the system state field should use the vocabulary recommended by the operating system vendor.
Scripted Installations and Pre-configured Software: In order to reduce the cost of benchmarking, test systems are sometimes installed using automatic scripting, or installed as preconfigured system images. A tester might use a set of scripts that configure the corporate-required customizations for IT Standards, or might install by copying a disk image that includes best practices of the performance community. SPEC understands that there is a cost to benchmarking, and does not forbid such installations, with the proviso that the tester is responsible to disclose how end users can achieve the claimed performance (using appropriate fields above).
Example: the Corporate Standard Jumpstart Installation Script has 73 documented customizations and 278 undocumented customizations, 34 of which no one remembers. Of the various customizations, 17 are performance relevant for SPEC OMP2012 - and 4 of these are in the category "no one remembers". The tester is nevertheless responsible for finding and documenting all 17. Therefore, to remove doubt, the tester prudently decides that it is less error-prone and more straightforward to simply start from customer media, rather than the Corporate Jumpstart.
Any additional notes such as listing any use of SPEC-approved alternate sources or tool changes.
For example, suppose the tester uses a pre-release compiler with:
f90 -O4 --newcodegen --loopunroll:outerloop:alldisable
but the tester knows that the new code generator will be automatically applied in the final product, and that the spelling of the unroll switch will be simpler than the spelling used here. The recommended spelling for customers who wish to achieve the effect of the above command will be:
f90 -O4 -no-outer-unroll
In this case, the flags report will include the actual spelling used by the tester, but a note should be added to document the spelling that will be recommended for customers.
SPEC OMP2012 provides benchmarks in source code form, which are compiled under control of SPEC's toolset. The SPEC tools automatically detect the use of compilation and linkage flags in the config file and document them in the benchmark configuration section of the final report. Both portability and optimization flags will be captured in the report subsection.
The SPEC tools require a flag description file which provides information about the syntax of the flags and their meanings. A result will be marked "invalid" unless it has an associated flag description file. A description of how to write one may be found at www.spec.org/omp2012/Docs .
The level of detail in the description of a flag is expected to be sufficient so that an interested technical reader can form a preliminary judgment of whether he or she would also want to apply the option.
It is acceptable, and even common practice, for testers to build on each others' flags files, copying all or part of flags files posted by others into their own flags files; but doing so does not relieve an individual tester of the responsibility to ensure that the description is accurate.
Although these descriptions have historically been called "flags files", they must also include descriptions of other performance-relevant options that have been selected, including but not limited to environment variables, kernel options, file system tuning options, BIOS options, and options for any other performance-relevant software packages.
If an optional power measurement is reported, the following must be included. More information on the fields can be found in the config file documentation.
SPEC OMP2012 results are for systems, not just for chips: it is required that a user be able to obtain the system described in the result page and reproduce the result (within a small range for run-to-run variation).
Nevertheless, SPEC recognizes that chip and motherboard suppliers have a legitimate interest in OpenMP benchmarking. For those suppliers, the performance-relevant hardware components typically are the cpu chip, motherboard, and memory; but users would not be able to reproduce a result using only those three. To actually run the benchmarks, the user has to supply other components, such as a case, power supply, and disk; perhaps also a specialized CPU cooler, extra fans, a disk controller, graphics card, network adapter, BIOS, and configuration software.
Such systems are sometimes referred to as "white box", "home built", "kit built", or by various informal terms. For SPEC purposes, the key point is that the user has to do extra work in order to reproduce the performance of the tested components; therefore, this document refers to such systems as "user built".
For user built systems, the configuration disclosure must supply a parts list sufficient to reproduce the result. As of the listed availability dates in the disclosure, the user should be able to obtain the items described in the disclosure, spread them out on an anti-static work area, and, by following the instructions supplied with the components, plus any special instructions in the SPEC disclosure, build a working system that reproduces the result. It is acceptable to describe components using a generic name (e.g. "Any ATX case"), but the recipe must also give specific model names or part numbers that the user could order (e.g. "such as a Mimble Company ATX3 case").
Component settings that are listed in the disclosure must be within the supported ranges for those components. For example, if the memory timings are manipulated in the BIOS, the selected timings must be supported for the chosen type of memory.
Components for a user built system may be divided into two kinds: performance-relevant (for SPEC OMP2012), and non-performance-relevant. For example, SPEC OMP2012 benchmark scores are affected by memory speed, and motherboards often support more than one choice for memory; therefore, the choice of memory type is performance-relevant. By contrast, the motherboard needs to be mounted in a case. Which case is chosen is not normally performance-relevant; it simply has to be the correct size (e.g. ATX, microATX, etc.).
Example:
hw_cpu_name = Frooble 1500
hw_memory = 2 GB (2x 1GB Mumble Inc Z12 DDR2 1066)
sw_other = SnailBios 17
notes_plat_000 =
notes_plat_005 = The BIOS is the Mumble Inc SnailBios Version 17,
notes_plat_010 = which is required in order to set memory timings
notes_plat_015 = manually to DDR2-800 5-5-5-15. The 2 DIMMs were
notes_plat_020 = configured in dual-channel mode.
notes_plat_025 =
notes_plat_030 = A standard ATX case is required, along with a 500W
notes_plat_035 = (minimum) ATX power supply [4-pin (+12V), 8-pin (+12V)
notes_plat_040 = and 24-pin are required]. An AGP or PCI graphics
notes_plat_045 = adapter is required in order to configure the system.
notes_plat_050 =
notes_plat_055 = The Frooble 1500 CPU chip is available in a retail box,
notes_plat_060 = part 12-34567, with appropriate heatsinks and fan assembly.
notes_plat_065 =
notes_plat_070 = As tested, the system used a Mimble Company ATX3 case,
notes_plat_075 = a Frimble Ltd PS500 power supply, and a Frumble
notes_plat_080 = Corporation PCIe Z19 graphics adapter.
notes_plat_085 =
Additional notes:
Note 1: Regarding graphics adapters:
Note 2: Regarding power modes: Sometimes CPU chips are capable of running with differing performance characteristics according to how much power the user would like to spend. If non-default power choices are made for a user built system, those choices must be documented in the notes section.
Note 3: Regarding cooling systems: Sometimes CPU chips are capable of running with degraded performance if the cooling system (fans, heatsinks, etc.) is inadequate. When describing user built systems, the notes section must describe how to provide cooling that allows the chip to achieve the measured performance.
The actual test results consist of the elapsed times and ratios for the individual benchmarks and the overall SPEC metric produced by running the benchmarks via the SPEC tools. The required use of the SPEC tools ensures that the results generated are based on benchmarks built, run, and validated according to the SPEC run rules.
Below is a list of the measurement components for each SPEC OMP2012 suite and metric:
These are calculated as follows:
All runs of a specific benchmark when using the SPEC tools are required to have validated correctly. The benchmark executables must have been built according to the rules described in section 2 above.
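Although the authoritative computation is performed by the SPEC tools, the arithmetic behind the overall metric can be sketched as follows: for each benchmark, the ratio is the reference time divided by the median of the measured elapsed times, and the overall metric is the geometric mean of those ratios. The benchmark reference times and run times below are made up for illustration; only the shape of the calculation reflects the rules.

```python
from statistics import median

def benchmark_ratio(reference_seconds, measured_seconds):
    """Ratio for one benchmark: reference time / median of the timed runs."""
    return reference_seconds / median(measured_seconds)

def overall_metric(ratios):
    """Overall metric: geometric mean of the per-benchmark ratios."""
    product = 1.0
    for r in ratios:
        product *= r
    return product ** (1.0 / len(ratios))

# Hypothetical data: (reference seconds, three timed runs in seconds).
runs = {
    "350.md":    (2160.0, [1050.0, 1080.0, 1070.0]),
    "362.fma3d": (1860.0, [930.0, 910.0, 940.0]),
}
ratios = [benchmark_ratio(ref, times) for ref, times in runs.values()]
print(round(overall_metric(ratios), 2))  # → 2.01
```

The geometric mean is used so that no single benchmark can dominate the overall score the way it could under an arithmetic mean.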
Below is a list of the energy metrics for each SPEC OMP2012 suite:
These are calculated as follows:
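The energy arithmetic can be sketched in the same spirit, again with the caveat that the SPEC tools perform the authoritative computation and that the reference value and measurements below are hypothetical: a run's energy is its average power multiplied by its elapsed time, and the per-benchmark energy ratio compares a reference energy against the measured energy, so that consuming less energy yields a higher ratio.

```python
def benchmark_energy_kj(avg_power_watts, elapsed_seconds):
    """Energy for one run, in kilojoules: average power times elapsed time."""
    return avg_power_watts * elapsed_seconds / 1000.0

def energy_ratio(reference_kj, measured_kj):
    """Per-benchmark energy ratio: reference energy / measured energy,
    so lower measured energy produces a higher (better) ratio."""
    return reference_kj / measured_kj

# Hypothetical run: 350 W average power over a 1070 s run,
# compared against a made-up 750 kJ reference energy.
measured = benchmark_energy_kj(350.0, 1070.0)   # 374.5 kJ
print(round(energy_ratio(750.0, measured), 3))  # → 2.003
```

As with the performance metrics, the per-benchmark energy ratios are then combined into a suite-level energy metric by the SPEC tools.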
Publication of energy and peak performance results is considered optional by SPEC, so the tester may choose to publish only base performance results. Since by definition base performance results adhere to all the rules that apply to peak performance results, the tester may choose to refer to these results by either the base or peak metric names (e.g. SPECompG_base2012 or SPECompG_peak2012) or the name SPECompG_2012, whose value is the greater of SPECompG_base2012 and SPECompG_peak2012.
It is permitted to publish base-only results. Alternatively, the use of the flag basepeak is permitted, as described in section 3.6.
SPEC OMP2012 metrics may be estimated, with the exception of power measurements. All estimates must be clearly identified as such. It is acceptable to estimate a single metric (for example, SPECompG_base2012, or SPECompG_peak2012, or the elapsed seconds for 362.fma3d). Note that it is permitted to estimate a peak metric without being required to provide a corresponding estimate for base.
SPEC requires that every use of an estimated number be clearly marked with "est." or "estimated" next to each estimated number, rather than burying a footnote at the bottom of a page.
For example, say that the JumboFast will achieve estimated performance of:
Model 1
  SPECompG_base2012  50 est.
  SPECompG_peak2012  60 est.
Model 2
  SPECompG_base2012  70 est.
  SPECompG_peak2012  80 est.
If estimates are used in graphs, the word "estimated" or "est." must be plainly visible within the graph, for example in the title, the scale, the legend, or next to each individual result that is estimated.
Note: the term "plainly visible" in this rule is not defined; it is intended as a call for responsible design of graphical elements. Nevertheless, for the sake of giving at least rough guidance, here are two examples of the right way and wrong way to mark estimated results in graphs:
Licensees are encouraged to give a rationale or methodology for any estimates, together with other information that may help the reader assess the accuracy of the estimate. For example:
Those who publish estimates are encouraged to publish actual SPEC OMP2012 metrics as soon as possible.
As mentioned previously, performance may sometimes change for pre-production systems; but this is also true of production systems (that is, systems that have already begun shipping). For example, a later revision to the firmware, or a mandatory OS bugfix, might reduce performance.
For production systems, if the tester becomes aware of something that reduces performance by more than 5% on an overall metric (for example, SPECompG_2012 or SPECompG_peak2012), the tester is encouraged but not required to republish the result. In such cases, the original result is not considered Non Compliant. The tester is also encouraged, but not required, to include a reference to the change that makes the results different (e.g. "with OS patch 20020604-02").
If a SPEC OMP2012 licensee publicly discloses an OMP2012 result (for example in a press release, academic paper, magazine article, or public web site), and does not clearly mark the result as an estimate, any SPEC member may request that the rawfile(s) from the run(s) be sent to SPEC. The rawfiles must be made available to all interested members no later than 10 working days after the request. The rawfile is expected to be complete, including configuration information (section 4.2 above).
A required disclosure is considered public information as soon as it is provided, including the configuration description.
For example, Company A claims a result of 1000 SPECompG_peak2012. A rawfile is requested, and supplied. Company B notices that the result was achieved by stringing together 50 chips in single-user mode. Company B is free to use this information in public (e.g. it could compare the Company A system vs. a Company B system that scores 999 using only 25 chips in multi-user mode).
Review of the result: Any SPEC member may request that a required disclosure be reviewed by the SPEC HPG subcommittee. At the conclusion of the review period, if the tester does not wish to have the result posted on the SPEC result pages, the result will not be posted. Nevertheless, as described above, the details of the disclosure are public information.
When public claims are made about OMP2012 results, whether by vendors or by academic researchers, SPEC reserves the right to take action if the rawfile is not made available, or shows different performance than the tester's claim, or has other rule violations.
SPEC encourages use of the OMP2012 suite in academic and research environments. It is understood that experiments in such environments may be conducted in a less formal fashion than that demanded of testers who publish on the SPEC web site. For example, a research environment may use early prototype hardware that simply cannot be expected to stay up for the length of time required to meet the continuous run requirement (see section 3.5), or may use research compilers that are unsupported and are not generally available (see section 1).
Nevertheless, SPEC would like to encourage researchers to obey as many of the run rules as practical, even for informal research. SPEC respectfully suggests that following the rules will improve the clarity, reproducibility, and comparability of research results.
Where the rules cannot be followed, SPEC requires that the deviations from the rules be clearly disclosed, and that any SPEC metrics (such as SPECompG_2012) be clearly marked as estimated.
It is especially important to clearly distinguish results that do not comply with the run rules when the areas of non-compliance are major, such as not using the reference workload, or only being able to correctly validate a subset of the benchmarks.
Any public use of SPEC OMP2012 results must, at the time of publication, adhere to the then-currently-posted version of SPEC's Fair Use Rules, http://www.spec.org/fairuse.html.
If a competitive comparison uses SPEC OMP2012, it must use one or more of the following metrics:
The basis for comparison must be stated. Information from result pages may be used to define a basis for comparing a subset of systems, including but not limited to operating system version, cache size, memory size, compiler version, or compiler optimizations used.
All public statements regarding SPEC, its benchmarks, and especially results posted at www.spec.org, are required to be scrupulously correct as of the date listed in the public statement. However, there is no requirement to update public statements as new results are published. For example, if a web page says that the Turboblaster 1000 has "the best SPECompG_2012 result when compared versus all results published at www.spec.org through July 1, 2013", there is no requirement to change that web page if a better result is published on July 2.
Note: Regarding the use of non-median individual benchmark results: As described in section 4.3, each benchmark is run multiple times and the median is picked from the set of runs. Any result from such a set may be mentioned in a competitive comparison, provided that the median from the same set is also mentioned.
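As a small illustration of that rule (the authoritative run selection is done by the SPEC tools), the sketch below picks the median from a set of timed runs, assuming an odd run count such as the common three-run case; the fastest run is a non-median result that may be cited only alongside the median.

```python
def median_run(times):
    """Return the median elapsed time from an odd-sized set of timed runs."""
    ordered = sorted(times)
    assert len(ordered) % 2 == 1, "sketch assumes an odd number of runs"
    return ordered[len(ordered) // 2]

runs = [512.0, 498.0, 505.0]      # hypothetical elapsed seconds
best = min(runs)                   # a non-median result...
med = median_run(runs)             # ...citable only if this median is cited too
print(med, best)                   # → 505.0 498.0
```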
You have decided that you want not only to run the benchmark, but to have your results published at SPEC. What do you do? First, be aware that if you are not a member of SPEC HPG, there is a cost associated with publishing results at SPEC. Please check the Submitting Results section of the SPEC website for more information about costs.
The following lists the steps to follow when submitting results for review and publication.
If for some reason, the test sponsor cannot run the benchmarks as specified in these rules, the test sponsor can seek SPEC approval for performance-neutral alternatives. No publication may be done without such approval. The SPEC High Performance Group (HPG) maintains a Policies and Procedures document that defines the procedures for such exceptions.
Copyright © 1999-2012 Standard Performance Evaluation Corporation
All Rights Reserved