This document specifies how the SPECjms2007 benchmark is to be run for measuring and publicly reporting performance results. These rules abide by the norms laid down by SPEC. They ensure that results generated with this benchmark are meaningful, comparable to other generated results, repeatable, and reproducible from the result submission.
Per the SPEC license agreement, all results publicly disclosed must adhere to these Run and Reporting Rules.
SPEC intends that this benchmark measure the overall performance of systems that provide environments for running standalone JMS-based messaging applications. It is not a Java EE benchmark and therefore does not measure the usage of JMS in Enterprise Java Beans (EJBs), servlets, Java Server Pages (JSPs), etc.
The general philosophy behind the rules for running the SPECjms2007 benchmark and reporting the results is to ensure that an independent party can reproduce the reported results. The System Under Test (SUT) is expected to be able to handle enterprise class requirements in the areas of reliability, availability and support.
For results to be publishable, SPEC requires:
SPEC is aware of the importance of optimizations in producing the best system performance. SPEC is also aware that it is sometimes hard to draw an exact line between legitimate optimizations that happen to benefit SPEC benchmarks and optimizations that specifically target the SPEC benchmarks. However, with the rules below, SPEC wants to increase the awareness by implementors and end users of issues of unwanted benchmark-specific optimizations that would be incompatible with SPEC's goal of fair benchmarking.
In the case where it appears that the above guidelines have not been followed, SPEC may investigate such a claim and request that the offending optimization (e.g. a SPEC-benchmark specific pattern matching) be backed off and the results resubmitted. Or, SPEC may request that the vendor correct the deficiency (e.g. make the optimization more general purpose or correct problems with code generation) before submitting results based on the optimization.
Results must be reviewed and accepted by SPEC prior to public disclosure. The submitter must have a valid SPEC license for this benchmark to submit results. Furthermore, SPEC requires that any public use of results from this benchmark shall follow the SPEC/OSG Fair Use Rules and those specific to this benchmark (see the Fair Use section below). In the case where it appears that these guidelines have been violated, SPEC may investigate and request that the offence be corrected or the results resubmitted.
SPEC reserves the right to modify the benchmark codes, workloads, and rules of SPECjms2007 as deemed necessary to preserve the goal of fair benchmarking. SPEC will notify members and licensees whenever it makes changes to the benchmark and may rename the metrics. In the event that the workload or metric is changed, SPEC reserves the right to republish in summary form "adapted" results for previously published systems, converted to the new metric. In the case of other changes, a republication may necessitate retesting and may require support from the original test sponsor.
The application scenario chosen for SPECjms2007 models the supply chain of a supermarket company. The participants involved are the supermarket company, its stores, its distribution centers and its suppliers. The scenario offers an excellent basis for defining interactions that stress different subsets of the functionality offered by JMS Servers, e.g. different message types as well as both point-to-point and publish/subscribe communication. It also offers a natural way to scale the workload, e.g. by scaling the number of Supermarkets (in Horizontal Topology) or by scaling the amount of products sold per Supermarket (Vertical Topology).
For all further details of the scenario and its implementation please refer to the SPECjms2007 Design Document.
The SPECjms2007 Kit refers to the complete kit provided for running the SPECjms2007 benchmark. The SPECjms2007 Kit includes all documentation, source and compiled binaries for the benchmark.
The Provider Module refers to an implementation of the org.spec.perfharness.jms.providers.JMSProvider interface. A default, provider-independent implementation that uses JNDI is included in the SPECjms2007 Kit. A product-specific Provider Module may be required to run the benchmark on some JMS products.
The JMS Server or Server refers to the pieces of hardware and software that provide the JMS facilities to JMS Clients. It may be comprised of multiple hardware and/or software components and is viewed as a single logical entity by JMS Clients. The Server also includes all the stable storage for persistence as required by the JMS Specification.
The JMS Clients or Clients refer to Java application components that use the JMS API. The SPECjms2007 benchmark is a collection of Clients in addition to other components required for benchmark operation.
Destination refers to a JMS destination, which is either a queue or a topic.
Location refers to a single logical entity in the benchmark application scenario. The four entities defined in the SPECjms2007 benchmark are Supermarket (SM), Supplier (SP), Distribution Center (DC), and Headquarters (HQ).
A Topology is the configuration of Locations being used by a particular benchmark run. The SPECjms2007 benchmark has two controlled topologies, the Vertical Topology and the Horizontal Topology, that are used for Result Submissions. The Freeform Topology allows the user complete control over the benchmark configuration.
The BASE parameter is the fundamental measure of performance in the SPECjms2007 benchmark. It represents the throughput performance of a benchmark run in a particular Topology. Each benchmark Topology uses a different metric to report the result, and the value of that metric, which is a measure of the SUT performance in that Topology, is the BASE parameter. In the Vertical Topology the metric is called SPECjms2007@Vertical, and in the Horizontal Topology the metric is called SPECjms2007@Horizontal. In other words, the BASE parameter is the result achieved for a particular Topology. The BASE parameter cannot be compared across Topologies; i.e., a result achieved in the Horizontal Topology cannot be compared with a result achieved in the Vertical Topology.
An Interaction is a defined flow of messages between one or more Locations. The Interaction is a complete message exchange that accomplishes a business operation. SPECjms2007 defines seven (7) Interactions of varying complexity and length.
A Flow Step is a single step of an Interaction. An Interaction therefore is comprised of multiple Flow Steps.
An Event Handler (EH) is a Java Thread that performs the messaging logic of a single Flow Step. A Flow Step may use multiple Event Handlers to accomplish all of its messaging as required by the benchmark. In relation to the SPECjms2007 benchmark, the Clients are the Event Handlers, which are the components in which all JMS operations are performed.
A Driver (DR) is an Event Handler that does not receive any JMS messages. Drivers are those Event Handlers that initiate Interactions by producing JMS messages. Although they are only the initiators in the JMS message flow and not JMS message receivers, they are included collectively under Event Handlers as they model the handling of other, non-JMS business events (e.g. RFID events) in the benchmark application scenario.
The System Under Test (SUT) is comprised of all hardware and software components that are being tested. The SUT includes all the Server machines and Client machines as well as all hardware and software needed by the Server and Clients.
An Agent is a collection of Event Handlers (EHs which includes DRs) that are associated with a Location. A Location can be represented by multiple Agents.
An AgentJVM is a collection of Agents that run in a single Java Virtual Machine (JVM).
The Controller (also referred to as the ControlDriver) is the benchmark component that drives the SPECjms2007 benchmark. There is exactly one Controller when the benchmark is run. The Controller reads in all of the configuration, instantiates the Topology (the Locations to use and connections between them), monitors progress, coordinates phase changes and collects statistics from all the components.
A Satellite (also referred to as a SatelliteDriver) is the benchmark component that runs AgentJVMs and is controlled by the Controller.
The Framework is the collective term used for the Controller, Satellite and benchmark coordination classes.
A Node is a machine in the SUT. The four kinds of nodes are server nodes, client nodes, database nodes, and other nodes. A server node is any node that runs the JMS Provider's server software. The client nodes run the benchmark components and must each run exactly one Satellite. If the JMS product is configured with a database that runs on machines separate from those running the JMS server software, then those machines are referred to as database nodes. Any other machines needed for operation of the JMS product that are not covered by the three kinds of nodes described above are included in other nodes.
The Delivery Time refers to the elapsed time measured between sending a specific message and that message being received.
Normal Operation refers to any time the product is running, or could reasonably be expected to run, without failure.
A Typical Failure is defined as a failure of an individual element (software or hardware) in the SUT. Qualifying examples include Operating System failure, interruption of power or networking, or failure of a single machine component (network, power, RAM, CPU, disk controller, or an individual disk, etc.). It includes failure of the Server as well as Clients.
A Single-Point-of-Failure is defined as a single Typical Failure. It is not extended to cover simultaneous failures or larger scale destruction of resources.
Non-Volatile Storage refers to the mechanism by which persistent data is stored. Non-Volatile Storage must be online and immediately available for random access read/write. Archive storage (e.g. tape archives or other backups) does not qualify as it is not considered as online and immediately available.
The Measurement Period refers to the length of time during which measurement of the performance of the SUT is made.
The Warmup Period refers to the period from the commencement of the benchmark run up to the start of the Measurement Period.
The Drain Period refers to the period from the end of the Measurement Period up to the end of the benchmark run.
The Final Version refers to the final version of a product that is made available to customers.
The Proposed Final Version refers to a version of a product that, while not a Final Version as defined above, is final enough to use for submitting a benchmark result. Proposed Final Versions can include Beta versions of the product.
The General Availability (GA) date for a product is the date on which customers can place an order for, or otherwise acquire, the product.
The Run Result refers to the HTML file that is generated at the end of a benchmark run that indicates the pass/fail status of all the Interactions in the benchmark.
A Result Submission refers to a single JAR file containing a set of files describing a SPEC benchmark result that is submitted to SPEC for review and publication. This includes a Submission File, a Configuration Diagram and a Full Disclosure Archive.
The Submission File refers to an XML-based document providing detailed information on the SUT components and benchmark configuration, as well as some selected information from the benchmark report files generated when running SPECjms2007. The Submission File is created using the SPECjms2007 Reporter.
The Configuration Diagram refers to a diagram in common graphics format that depicts the topology and configuration of the SUT for a benchmark result.
The Full Disclosure Archive (FDA) refers to a soft-copy archive of all relevant information and configuration files needed for reproducing the benchmark result.
The Full Disclosure Report (FDR) refers to the complete report that is generated from the Submission File. The FDR is the format in which an official benchmark result is reported on the SPEC Web site.
The SPECjms2007 Reporter refers to a standalone utility provided as part of the SPECjms2007 Kit that is used to prepare a Result Submission. The SPECjms2007 Reporter generates the Submission File and gathers all information pertaining to the Result Submission into a directory to be packaged and submitted to SPEC for review and publication. The SPECjms2007 Reporter also generates a preview of the Full Disclosure Report to be verified by the submitter before submitting the result.
The following section details the rules under which the benchmark may be used, including in commercial, research and academic environments as well as the rules that govern disclosure of results that are obtained. The section also details some general rules that products used in benchmark Result Submissions must satisfy.
In order to publicly disclose SPECjms2007 results, the submitter must adhere to these reporting rules in addition to having followed the run rules described in this document. The goal of the reporting rules is to ensure the System Under Test is sufficiently documented such that someone could reproduce the test and its results.
Compliant runs need to be submitted to SPEC for review and must be accepted for publication prior to public disclosure. Result Submissions must include a Full Disclosure Archive containing all relevant information needed for reproducing the benchmark result (see Section 6 and Section 7 below). See Section 7 of the SPECjms2007 User's Guide for details on submitting results to SPEC.
Test results that have not been accepted by SPEC for publication must not be publicly disclosed except as noted in Section 3.7, Research and Academic Usage. Research and academic usage test results that have not been accepted and published by SPEC must not use the SPECjms2007 metrics ("SPECjms2007@Horizontal" and "SPECjms2007@Vertical").
SPEC/OSG Fair Use Rules are available at the SPEC web site: http://www.spec.org/osg/fair_use-policy.html.
When competitive comparisons are made using SPECjms2007 benchmark results, SPEC requires that the following template be used:
SPEC® and the benchmark name SPECjms® are registered trademarks of the Standard Performance Evaluation Corporation. Competitive benchmark results stated above reflect results published on www.spec.org as of (date). [The comparison presented above is based on (basis for comparison).] For the latest SPECjms2007 results visit http://www.spec.org/osg/jms2007.
(Note: [...] above required only if selective comparisons are used.)
See SPEC/OSG Fair Use Rules for further details and examples.
Estimated metrics are not allowed. The publication of any SPECjms2007 results, other than the explicit results that were submitted and accepted by SPEC, is not allowed.
SPECjms2007 results must not be publicly compared to results from any other benchmark.
Results between different Topologies (i.e. Vertical and Horizontal) within SPECjms2007 may not be compared; any attempt to do so will be considered a violation of SPEC Fair Use Rules.
SPEC encourages use of the SPECjms2007 benchmark in academic and research environments. The researcher is responsible for compliance with the terms of any underlying licenses (JMS Server, DBMS Server, hardware, etc.).
It is understood that experiments in such environments may be conducted in a less formal fashion than that demanded of licensees submitting to the SPEC web site. SPEC encourages researchers to obey as many of the run rules as practical, even for informal research. If research results are being published, SPEC requires:
The researcher is required to submit a full disclosure of any published results if requested by SPEC.
Public use of SPECjms2007 benchmark results is bound by the SPEC/OSG Fair Use Rules and the SPECjms2007-specific Run and Reporting Rules (this document). All publications must clearly state that these results have not been reviewed or accepted by SPEC, using text equivalent to this:
SPECjms2007 is a trademark of the Standard Performance Evaluation Corporation (SPEC). The results or findings in this publication have not been reviewed or accepted by SPEC, therefore no comparison nor performance inference can be made against any published SPEC result. The official web site for SPECjms2007 is located at http://www.spec.org/osg/jms2007.
This disclosure must precede any results quoted from the tests. It must be displayed in the same font as the results being quoted.
SPECjms2007 has two metrics, SPECjms2007@Horizontal and SPECjms2007@Vertical. SPECjms2007@Horizontal is the measure of the SUT performance in the Horizontal Topology. SPECjms2007@Vertical is the measure of the SUT performance in the Vertical Topology.
The 'BASE' parameter is the fundamental measure of the SPECjms2007 benchmark. It represents the throughput performance of a benchmark run in a particular Topology. It is the value reported as the result achieved by the SUT for that Topology, in the metric for that Topology. A result achieved in one Topology cannot be compared with a result achieved in a different Topology; e.g., a result achieved in the Horizontal Topology cannot be compared with a result achieved in the Vertical Topology.
A successful benchmark run results in the BASE parameter being reported as the measure of the performance of the SUT in that particular Topology. Each Topology uses the configured BASE parameter in conjunction with different constants for each interaction to determine the individual, per-interaction message injection rate.
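As a rough illustration of how a Topology derives per-Interaction injection rates from the configured BASE parameter, consider the sketch below. The rate constants used here are hypothetical placeholders chosen for illustration only; they are not the official SPECjms2007 constants, which are defined by each controlled Topology.

```java
// Illustrative sketch: deriving per-Interaction injection rates from BASE.
// The constants below are HYPOTHETICAL, not the official Topology constants.
class InjectionRates {
    // Hypothetical per-Interaction rate constants (initiations/sec per unit of BASE).
    static final double[] RATE_CONSTANTS = {0.76, 0.54, 6.21, 1.42, 3.33, 0.25, 0.98};

    // Target input rate for one of the seven Interactions at a given BASE value.
    static double targetRate(int base, int interaction) {
        return base * RATE_CONSTANTS[interaction - 1];
    }

    public static void main(String[] args) {
        int base = 100; // configured BASE parameter for the run
        for (int i = 1; i <= 7; i++) {
            System.out.printf("Interaction %d target rate: %.2f/sec%n", i, targetRate(base, i));
        }
    }
}
```

The point of the fixed constants is that scaling BASE scales all seven Interactions proportionally, preserving the workload mix.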
All products, both software and hardware that are used in the generation of a benchmark result must be generally available (either free or for purchase) and must be documented and supported. Any customer must be able to order or otherwise acquire all the products that were used in the generation of a benchmark result within 3 months of the date of publication of the same result, which must have been reviewed and accepted by SPEC. If Java and/or JMS specification related licensing issues cause a change in product availability date after the publication date, the change will be allowed to be made without penalty, subject to subcommittee review.
The SUT includes, but is not limited to, all of the following components:
Hardware maintenance and software support must include 7 days/week, 24 hours/day coverage for a minimum period of three years.
The response time for hardware maintenance requests must not exceed 4 hours on any component whose replacement is necessary for the SUT to return to the tested configuration.
Software support must include all available maintenance updates over the support period.
If a new or updated version of any software product is released causing earlier versions of said product to no longer be supported or encouraged by the providing vendor(s), new publications or submissions occurring after four complete review cycles have elapsed must use a version of the product encouraged by the providing vendor(s).
For example, with result review cycles ending April 16th, April 30th, May 14th, May 28th, June 11th, and June 25th, if a new JDK version released between April 16th and April 29th contains critical fixes causing earlier versions of the JDK to no longer be supported or encouraged by the providing vendor(s), results submitted or published on June 25th must use the new JDK version.
If the support contract requires remote monitoring and/or telemetry, then all hardware and software required for remote operations to connect to the SUT must be installed and functional on the SUT during the measurement run.
The use of spare components in lieu of the 4 hour hardware maintenance response requirement is allowed if and only if the following conditions are met:
If spare components are used, then the BOM (see the Bill of Materials section of this document) must list at least an additional 10% of the quantity of these spare components, with a minimum of 2 units. The additional spare components used and listed in the BOM must be rounded up to whole units.
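The spare-component quantity rule can be sketched as a small calculation; this is an illustrative reading of the rule, not code from the SPECjms2007 Kit. Integer arithmetic is used to avoid floating-point rounding surprises at exact multiples of ten.

```java
// Illustrative sketch of the spare-component BOM rule: at least 10% of the
// deployed quantity, rounded up to whole units, with a minimum of 2 units.
class SpareParts {
    static int requiredSpares(int deployedQuantity) {
        int tenPercentRoundedUp = (deployedQuantity + 9) / 10; // integer ceil(q / 10)
        return Math.max(tenPercentRoundedUp, 2);
    }

    public static void main(String[] args) {
        System.out.println(requiredSpares(5));  // 10% of 5 rounds up to 1; minimum applies -> 2
        System.out.println(requiredSpares(48)); // 10% of 48 rounds up to 5
    }
}
```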
All products used in a Result Submission must be Final Versions or Proposed Final Versions. When the product is finally released, the product performance must not decrease by more than 5% of the published SPECjms2007 result.
Comment 1: The intent is to test products that customers will use, not prototypes. Beta versions of products can be used, provided that General Availability (GA) of the final product is within 3 months. If a beta version is used, the date reported in the results must be the GA date.
Comment 2: The 5% degradation limit only applies to a difference in performance between the tested product and the GA product. Subsequent GA releases (to fix bugs, etc) are not subject to this restriction.
The Measurement Period must be at least 30 minutes long. The Measurement Period must be preceded by a Warmup Period of at least 10 minutes at the end of which a steady state throughput level must be reached.
The Controller measures the time offset from each Satellite using calls to System.currentTimeMillis() over RMI. The maximum difference in reported time between any two Nodes must be within 100ms. This helps to ensure that time histograms of message Delivery Times are accurate across the SUT.
Comment: NTP is the standard system for synchronizing time across machines. Issues have been reported on Windows machines, and there are many Internet articles on replacement NTP clients for such situations.
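A minimal sketch of the offset check follows, assuming the Controller has already collected one time offset per Node (in milliseconds, relative to its own clock); this is illustrative, not the actual Framework code.

```java
// Illustrative sketch of the clock-offset audit. Given the time offsets the
// Controller measured for each Node (ms, relative to the Controller's clock),
// the maximum difference between any two Nodes must be within the limit.
class ClockAudit {
    static boolean offsetsWithinLimit(long[] nodeOffsetsMs, long limitMs) {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (long offset : nodeOffsetsMs) {
            min = Math.min(min, offset);
            max = Math.max(max, offset);
        }
        // The largest pairwise difference is simply max - min.
        return (max - min) <= limitMs;
    }

    public static void main(String[] args) {
        System.out.println(offsetsWithinLimit(new long[]{-20, 35, 70}, 100)); // 90ms spread -> true
        System.out.println(offsetsWithinLimit(new long[]{-60, 55}, 100));     // 115ms spread -> false
    }
}
```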
The reported metric must be computed over a Measurement Period during which the throughput level is in a steady state condition that represents the true sustainable performance of the SUT. Periodic activities must be equally dispersed within the Measurement Period. For example, if two checkpoints fall within a 30 minute Measurement Period, they have to be performed at 10 and 20 minutes into the Measurement Period.
Comment: The intent is that any periodic fluctuations in the throughput or any cyclical activities, e.g. JVM garbage collection, database checkpoints, etc be included as part of the Measurement Period.
The real-time performance reporter must be turned on during the run, and it must be configured to provide real-time performance data every 5 minutes.
To demonstrate the reproducibility of the steady state requirement during the Measurement Period, a minimum of one additional (and non-overlapping) Measurement Period of the same duration as the reported one must be recorded and its performance measurement (in units of SPECjms2007@Horizontal or SPECjms2007@Vertical) must be equal to or greater than the reported one.
The SPECjms2007 benchmark requires that there be no failures or exceptions during a run. The benchmark will detect any Java Exceptions raised in any part of the benchmark and will terminate the run if any are observed. Warning messages are allowed to appear in submittable runs.
The benchmark includes an Auditor component that is responsible for making sure that the run has been properly executed and the results are valid. The Auditor is automatically launched at the end of the run to validate the results. It performs an explicit audit of the results against the requirements specified in this section, marking each audit test as "PASS" or "FAIL" in the summary reports. A compliant run must not report failure of any audit test. Only results from compliant runs may be submitted for review, acceptance and publication. Non-compliant runs cannot be submitted for review.
Note: SPECjms2007 does not require an "audit" in the sense of an independent evaluation by a human being.
Input rate is within ±5% of configured value:
For each Interaction, the observed input rate is calculated as count/time for the Measurement Period. This must be within 5% of the value prescribed by the Topology. See Section 3.6.1 below.

Total message count is within ±5% of configured value:
Using a model of the scenario, the benchmark knows how many messages should be processed as part of each Interaction. The observed number of messages sent and received by all parties must be within 5% of this value.

Input rate distribution deviations do not exceed 20%:
This limits the percentage of pacing misses (in Driver threads) that the benchmark will allow. A miss occurs when the timing code finds itself behind where it should be.

90th percentile of Delivery Times on or under 5000ms:
Messages are timestamped when sent and received. The resulting Delivery Time is recorded as a histogram in each Event Handler, and it is this histogram (for the Measurement Period only) that must comply. See Section 3.6.2 below.

All messages sent were received:
Fails if the final results (taken after the Drain Period) show that not all sent messages were received. For publish/subscribe topics this means received by all subscribers. See Section 3.6.3 below.
For each Interaction, the observed input rate (number of times the Interaction is initiated per second) is calculated as count/time for the Measurement Period. This must be within 5% of the target input rate for the Topology. The target input rates for the Horizontal Topology are shown in Table 1 and for the Vertical Topology in Table 2 below.
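The input-rate audit described above reduces to a simple tolerance check; the sketch below is an illustrative calculation, not the actual Auditor code.

```java
// Illustrative sketch of the per-Interaction input-rate audit:
// observed rate = count / time over the Measurement Period, which must
// lie within 5% of the target rate prescribed by the Topology.
class InputRateAudit {
    static boolean withinTolerance(long initiations, double measurementSeconds,
                                   double targetRatePerSec) {
        double observedRate = initiations / measurementSeconds;
        return Math.abs(observedRate - targetRatePerSec) <= 0.05 * targetRatePerSec;
    }

    public static void main(String[] args) {
        // A 30-minute (1800s) Measurement Period with a target of 10 initiations/sec.
        System.out.println(withinTolerance(17_500, 1800.0, 10.0)); // ~9.72/sec -> true
        System.out.println(withinTolerance(16_000, 1800.0, 10.0)); // ~8.89/sec -> false
    }
}
```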
Messages are timestamped when sent and received on each Flow Step within an Interaction. The consequent Delivery Time is recorded as a histogram in each receiving Event Handler. The 90th percentile of each of these histograms must be within 5000ms for every step of every Interaction.
Comment: The intent of this requirement is to ensure that all queues are in steady state condition during the run.
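A minimal sketch of the 90th-percentile check on such a histogram follows, assuming fixed-width buckets and taking the percentile as the upper edge of the bucket containing the 90% mark; the bucket layout here is a hypothetical simplification of the benchmark's actual histograms.

```java
// Illustrative sketch of the Delivery Time audit on a histogram.
// buckets[i] counts messages whose Delivery Time fell in
// [i*bucketWidthMs, (i+1)*bucketWidthMs); the 90th percentile is taken
// here as the upper edge of the bucket containing the 90% mark.
class DeliveryTimeAudit {
    static long percentileUpperEdgeMs(long[] buckets, long bucketWidthMs, double pct) {
        long total = 0;
        for (long count : buckets) total += count;
        long threshold = (long) Math.ceil(total * pct);
        long seen = 0;
        for (int i = 0; i < buckets.length; i++) {
            seen += buckets[i];
            if (seen >= threshold) return (i + 1) * bucketWidthMs;
        }
        return buckets.length * bucketWidthMs;
    }

    public static void main(String[] args) {
        long[] buckets = {850, 100, 40, 10}; // 1000 messages in 1000ms-wide buckets
        long p90 = percentileUpperEdgeMs(buckets, 1000, 0.90);
        System.out.println(p90 + "ms <= 5000ms: " + (p90 <= 5000));
    }
}
```

Using the upper bucket edge is a conservative choice: the true 90th-percentile Delivery Time can only be at or below the reported value.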
SPECjms2007 requires that every message sent is received. A configurable Drain Period after the Measurement Period is used to allow all sent messages to be received.
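The completeness check can be sketched as a simple reconciliation of sent and received counts; this is illustrative, not the actual Auditor code.

```java
// Illustrative sketch of the completeness check performed after the
// Drain Period: every sent message must have been received, and for
// publish/subscribe topics each message must reach every subscriber.
class CompletenessAudit {
    static boolean queueComplete(long sent, long received) {
        return sent == received;
    }

    static boolean topicComplete(long published, int subscribers, long totalReceived) {
        return published * subscribers == totalReceived;
    }

    public static void main(String[] args) {
        System.out.println(queueComplete(50_000, 50_000));    // true
        System.out.println(topicComplete(10_000, 3, 30_000)); // true
        System.out.println(topicComplete(10_000, 3, 29_998)); // false
    }
}
```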
The SPECjms2007 benchmark ships with pre-compiled jars. Submitting results using any user-compiled version of the benchmark other than the version compiled and released by SPEC is not allowed. The only component of the benchmark that may be compiled is the product-specific Provider Module. If a product-specific Provider Module is used, its sources must be included in the submission as part of the FDA (see Section 7.1.1).
Benchmark specific optimization is not allowed. Any optimization of either the configuration or products used on the SUT must improve performance for a larger class of workloads than that defined by this benchmark and must be supported and recommended by the provider. Optimizations must have the vendor's endorsement and must be suitable for production use in an environment comparable to the one represented by the benchmark. Optimizations that take advantage of the benchmark's specific features are forbidden.
An example of an inappropriate optimization is one that requires access to the source code of the application.
Comment: The intent of this section is to encourage optimizations to be done automatically by the products.
The Server facilities (hardware and software) must be separated from the hardware running the Clients by a network. There must not be JMS Server components running on any Client hardware.
Although the benchmark implementation allows the user to do this, formal Result Submissions must not allow Clients to be hosted on the Server. This matches the geographically-dispersed nature of the business scenario.
All Client traffic must be routed (via the network) through the JMS Server. Optimizing traffic to happen directly between Clients is prohibited.
Agents of different types cannot be co-located within the same JVM. Agents of the same type representing different Locations must be isolated when co-located in the same JVM. The benchmark enforces this by using an isolating classloader for Agents within the same JVM.
The SUT deployment must not preclude external JMS applications from interacting with the JMS Server and JMS Destination and the messages that are produced and consumed by the benchmark. Product configuration must not assume that the benchmark application is the only Client for the JMS provider and thus optimize for operation restricted to the benchmark alone.
Clients must not be configured to connect to specific instances within a JMS Server. This means for example, that although different classes of Event Handlers can have individual ConnectionFactories to tailor how they connect, they must all address the JMS Server identically.
Comment: The intent of this restriction is to ensure that the benchmark tester cannot take advantage of the knowledge of the benchmark workload to configure different Interactions to run on different parts of the Server.
The JMS Server tested must provide a suitable runtime environment for running JMS applications and must meet the requirements of the Java Message Service Specification version 1.1 or as amended by SPEC for later JMS versions. The JMS Specification is described by the documentation at http://java.sun.com/products/jms/docs.html
SPECjms2007 does not exercise the optional features of the JMS Specification.
Comment: The intent of this requirement is to ensure that the JMS Server tested is a complete implementation satisfying all mandatory requirements of the JMS specification and to prevent any advantage gained by a product that implements only an incomplete or incompatible subset of the JMS specification.
SPECjms2007 requires all JMS messages to be delivered in order without being dropped or duplicated. Although the benchmark assumes Normal Operation of the JMS Provider for the entire benchmark run, SPECjms2007 has explicit recoverability guarantees that must be met by the SUT.
For non-persistent, AUTO_ACKNOWLEDGE mode message delivery, this implies that, for a Session, the provider must not deliver the second message to the application until after the first message is fully acknowledged.
In the case of asynchronous consumption, this implies that the message listener must not be called with a message until after the previous message that was delivered to the message listener has been fully acknowledged and will not be redelivered.
In the case of synchronous consumption, this implies that the call to receive from the application must not return until after the previous message consumed by a receive call in the same Session has been fully acknowledged and will not be redelivered.
All Non-Persistent messages in the SPECjms2007 benchmark are produced and consumed in a non-transactional, AUTO_ACKNOWLEDGE Session.
For delivery (receipt) of persistent messages, the JMS provider must ensure that it is able to correctly set the state of the JMSRedelivered message header, as specified by the JMS Specification, in case the message that is currently being delivered (received), has to be redelivered (re-received) in the case of a Typical Failure.
The JMS Specification allows a JMS provider to drop persistent messages that are sent to a Topic for which there is neither an active subscriber nor an inactive durable subscriber. In addition, persistent messages may be dropped if insufficient resources are configured.
Per Section 4.10 of the JMS Specification, "It is expected that important messages will be produced with a PERSISTENT delivery mode within a transaction and will be consumed within a transaction from a nontemporary queue or a durable subscription." Accordingly, all Persistent messages in the SPECjms2007 benchmark are produced and consumed in a transacted Session.
After a failure, the SUT must be able to perform the system recovery function within 15 minutes. In the case of a Typical Failure, it must be possible to recover all persistent messages without involving Clients (i.e. by starting only the Server facilities).
Comment: This requirement must be met in conjunction with the Steady State Requirements in Section 3.3 (e.g., ensuring that check pointing etc. is frequent enough to guarantee that system recovery can happen within 15 minutes.)
The Atomicity, Consistency, Isolation and Durability (ACID) properties of transaction processing systems must be supported by the SUT in both Normal Operation and Typical Failure. The SUT must guarantee that all transactions are atomic, meeting at least XA's atomicity requirements. The SUT will either perform all or none of the individual operations that comprise the transaction.
A transacted-session commit call must either (A) block until all commit work is complete, or (B) throw an exception in the event of a failure. Committed work includes all work necessary to durably record the production and consumption of the transaction's messages on a JMS Server. It also includes all work necessary to ensure that any message successfully consumed as part of the transaction will not be redelivered, even in the event of a Typical Failure.
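The all-or-nothing commit and redelivery behavior described above can be sketched with a toy in-memory model (class and method names are illustrative assumptions, not a JMS API): sends and receives are buffered in the session and take effect atomically at commit, while a rollback makes consumed messages eligible for redelivery and discards unsent ones.

```python
from collections import deque

class TransactedSession:
    """Toy model of a transacted session: all work is buffered and
    takes effect atomically at commit()."""
    def __init__(self, queue):
        self.queue = queue            # deque standing in for a durable queue
        self._sent, self._consumed = [], []

    def send(self, body):
        self._sent.append(body)       # not visible until commit

    def receive(self):
        if self.queue:
            msg = self.queue.popleft()
            self._consumed.append(msg)  # held until commit/rollback
            return msg
        return None

    def commit(self):
        # All work becomes durable together; consumed messages
        # will never be redelivered after this point.
        self.queue.extend(self._sent)
        self._sent, self._consumed = [], []

    def rollback(self):
        # Undo everything: consumed messages become redeliverable.
        for msg in reversed(self._consumed):
            self.queue.appendleft(msg)
        self._sent, self._consumed = [], []

q = deque(["m1"])
s = TransactedSession(q)
assert s.receive() == "m1"
s.send("m2")
s.rollback()                 # nothing happened: m1 redeliverable, m2 never sent
assert list(q) == ["m1"]
assert s.receive() == "m1"
s.commit()                   # m1 now consumed for good
assert list(q) == []
```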
The JMS Server must provide at least a read-committed isolation level to prevent dirty reads. This applies to both persistent and non-persistent messages in a transaction.
Persistent messages are considered "fully persisted" if and only if they are saved in Non-Volatile Storage that does not require electricity, or that is protected by a battery-backed cache.
All Non-Volatile Storage must be hosted in the Server facilities.
The Server must have sufficient online Non-Volatile Storage to support any expanding durable storage resulting from executing the SPECjms2007 benchmark for twenty-four hours at the reported performance metric.
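As a back-of-the-envelope illustration (the message rate and stored-message size below are assumed figures for a hypothetical system, not benchmark values), the twenty-four hour storage requirement can be estimated as rate times stored size times elapsed seconds:

```python
# Hypothetical sizing example for the 24-hour storage requirement.
persistent_msgs_per_sec = 5_000   # assumed sustained rate at the reported metric
avg_stored_bytes = 2_048          # assumed average on-disk size, incl. overhead
seconds = 24 * 60 * 60            # twenty-four hours

required_bytes = persistent_msgs_per_sec * avg_stored_bytes * seconds
required_gib = required_bytes / 2**30
print(f"{required_gib:.1f} GiB")  # roughly 824 GiB for these assumed figures
```

A real sizing exercise would also account for log growth, indexes, and any provider-specific per-message overhead.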
A SPECjms2007 Result Submission is a single jar file containing the following components:
The intent is that using the above information it must be possible to reproduce the benchmark result given the equivalent hardware, software, and configuration. Please refer to Section 7 of the SPECjms2007 User Guide for technical information on how to package a Result Submission. In the following, we take a detailed look at the required submission data.
The submission/submission.xml file contains user-declared information on all static elements of the SUT and the benchmark configuration used to produce the benchmark result. This information is structured in four sections:
These sections describe the benchmark run, software products, hardware systems and system configurations, respectively. ID attributes are used to link elements together by an XSLT transformation. The user is responsible for fully and correctly completing these sections, providing all relevant information. The specific information that must be reported is described below.
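A minimal skeleton of the file might look as follows. The product-info, hardware-info and node-info section names are taken from later in this document; the root element, the name of the first (general details) section, and all attribute details are assumptions, not taken from the actual schema:

```xml
<!-- Illustrative sketch only; element and attribute names other than
     product-info, hardware-info and node-info are assumed. -->
<submission>
  <benchmark-info>   <!-- general details on the result (assumed name) -->
    ...
  </benchmark-info>
  <product-info>     <!-- software products, each with a unique id -->
    <product id="jms1">...</product>
  </product-info>
  <hardware-info>    <!-- hardware systems, referenced by id -->
    <system id="hw1">...</system>
  </hardware-info>
  <node-info>        <!-- nodes linking product and hardware ids -->
    <node product="jms1" hw="hw1">...</node>
  </node-info>
</submission>
```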
This section contains general details on the benchmark result.
|The SPEC license holder that is submitting this result to SPEC.
|The title for the result as you wish it to appear on the Run Result page. This title will also appear in the title of the HTML page.
|The SPEC license number (or membership number) of the company that is submitting this result for review.
|The date the result was submitted (e.g. Wed Jan 20 03:03:00 EDT 2007).
|yes/no - this must be set to 'no' for compliant results.
|Text explaining reason for noncompliant (NC) marking.
|Link(s) to replacement result(s) if available. Leave empty otherwise.
|Total number of JMS Server nodes.
|Total number of JMS Server CPU cores.
|Total number of JMS Server CPU chips.
|Number of JMS Server CPU cores per chip.
|Total number of instances of the JMS Server.
|Total number of Client Nodes.
|Total number of Client CPU cores.
|Total number of Client CPU chips.
|Number of Client CPU cores per chip.
|Total number of Client JVMs.
|Total number of DB Server nodes in the SUT.
|Total number of DB Server CPU cores.
|Total number of DB Server CPU chips.
|Number of DB Server CPU cores per chip.
|Total number of DB Server instances in the SUT.
|List of other components (e.g. routers, load balancers, etc.) that are part of the SUT.
|Any relevant information regarding periodic activities (e.g. checkpoints) run during the Measurement Period to ensure that the throughput level is in a steady state condition that represents the true sustainable performance of the SUT (see Section 3.3).
|Explanation of how the recoverability requirement (section 4.2.3) was met.
|Explanation of how the Non-Volatile Storage requirements (section 5.1) were met.
|Any additional information on the benchmark configuration must be included here.
|Bill of Materials. See Section 6.2 for information on the expected content and format.
The product-info section contains information about all software products used to run the benchmark. Products include, but are not limited to:
For JVM products, the submitter must disclose the Operating System and hardware architecture this JVM is built for. In addition, whether the JVM is a 64-bit or 32-bit version must be indicated.
Contains information about a product that is part of the SUT. The name attribute specifies the type of the product and must be set to one of the defined product types. The database-related product properties must only be included in cases where the JMS Server is configured to use a separate database server product for message persistence; otherwise, they should be deleted. The other-product properties should be included if there are further products needed for the benchmark operation that do not fall into the first four categories.
|Unique string to identify the product.
|Name of the product.
|Operating system the product was run on.
|Availability date of the product. This date must be in the form Mon-YYYY to be read by the SPECjms2007 Reporter correctly.
The hardware-info section contains information about the hardware systems used in the benchmark.
Information about the hardware system used to run a specific product. The name attribute specifies the type of node. The db-node properties should only be included in cases where the JMS Server is configured with a database that runs on machines separate from the ones on which the JMS Server software runs. The other-node properties should be included if there are other machines needed for the JMS Server operation that do not fall into the first three categories.
|Unique string to identify the hardware system.
|Name of the hardware system.
|Hardware system vendor.
|Model number for the system.
|Type of processor in the system.
|The speed of the chip, in megahertz. DO NOT use "MHz" or "GHz", as it will interfere with SPEC's use of this field.
|Hardware availability date. This date must be in the form Mon-YYYY to be read by the SPECjms2007 Reporter correctly.
|File system used by the system.
|Size and type of disk(s) used by the system. Note: For systems with a complex description for the disks, simply indicate "see notes" and put the description in the notes section below.
|The network interface(s) used on this system.
|Any other hardware in this system that is performance related. That is, any non-standard piece of hardware necessary to reproduce this result, e.g. external storage systems.
|Number of CPU cores.
|Number of CPU chips.
|Number of CPU cores per chip.
|Amount of physical RAM in the system, IN MEGABYTES. DO NOT use "MB" or "GB", as it will interfere with SPEC's use of this field.
|Amount of level 1 cache, for both instruction (I) and data (D) on EACH CPU. Also state that the cache is per core, per chip, group of cores, etc. if they are not the same, e.g. 16KB(I)+16KB(D) per core.
|Amount of level 2 cache on each CPU core or chip.
|The amount of level 3 (or above) cache on each CPU core or chip. If there is no other cache, leave this field blank.
|Name of the operating system running on this hardware.
|Operating system vendor.
|Operating system availability date. This date must be in the form Mon-YYYY to be read by the SPECjms2007 Reporter correctly.
|The number of systems (with this description) used in the submission.
|Any additional information on the hardware including anything that has to be done to configure the hardware for running the benchmark.
For nodes of type other-node, only the following properties need to be provided: id, name, vendor, model, available, systems.num, …
The node-info section contains information on all Server and Client Nodes as well as any other nodes used to produce the result. If multiple nodes use exactly the same configuration and are used for exactly the same purpose, they may be listed as a single configuration. The hardware type running this instance configuration and the total number of instances must be disclosed in the respective entries. All relevant configuration and tuning information necessary to reproduce the results must be disclosed in the notes sections of the respective nodes, including JMS Server nodes, Client Nodes and any Database Server nodes that are part of the SUT.
Information about a JMS Server node.
|Unique string to identify the node.
|Name of the node.
|ID of the JMS product used in this node.
|ID of the JVM product used in this node.
|ID of the JDBC product used in this node.
|ID of an additional software product used in this node.
|ID of the hardware system used in this node.
|Number of instances of this node.
|Additional notes about this node (e.g. tuning information).
Information about a JMS Client Node.
|Unique string to identify the Node.
|Name of the Node.
|ID of the JVM product used in this Node.
|Number of JVM instances used in this Node.
|ID of an additional software product used in this Node.
|ID of the hardware system used in this Node.
|Number of instances of this Node.
|Number of Agents in this Node.
|Additional notes about this Node (e.g. tuning information). Any parameters passed to the SatelliteDrivers on the command line must be disclosed.
Information about a database server node if a separate database server is used for message persistence.
|Unique string to identify the node.
|Name of the node.
|ID of the DB product used in this node.
|ID of an additional software product used in this node.
|ID of the hardware system used in this node.
|Number of instances of this node.
|Additional notes about this node (e.g. tuning information).
Information about any other node used that does not fit in the above categories, e.g. a load balancer. Nodes that are not part of the SUT such as the ControlDriver must also be described.
|Unique string to identify the node.
|Name of the node.
|ID of a software product used in this node.
|ID of the hardware system used in this node.
|Number of instances of this node.
|Additional notes about this node (e.g. tuning information). For ControlDriver nodes, any parameters passed on the command line must be disclosed.
The intent of the BOM rules is to enable a reviewer to confirm that the tested configuration satisfies the run rule requirements and to document the components used with sufficient detail to enable a customer to reproduce the tested configuration and obtain pricing information from the supplying vendors for each component of the SUT.
The suppliers for all components must be disclosed. All items supplied by a third party (i.e. not the Test Submitter) must be explicitly stated. Each third party supplier's items must be listed separately.
The Bill of Materials must reflect the level of detail a customer would see on an itemized bill (that is, it must list individual items in the SUT that are not part of a standard package).
For each item, the BOM must include the item's supplier, description, the item's ID (the code used by the vendor when ordering the item), and the quantity of that item in the SUT.
For ease of benchmarking, the BOM may include hardware components that are different from the tested system, as long as the substituted components perform equivalently or better in the benchmark. Any substitutions must be disclosed in the BOM. For example, disk drives with lower capacity or speed in the tested system can be replaced by faster ones in the BOM. However, it is not permissible to replace key components such as CPU, memory or any software.
All components of the SUT (see Section 2.4.1) must be included, including all hardware, software and support.
The software may use term-limited licenses (i.e., software leasing), provided there are no additional conditions associated with the term-limited licensing. If term-limited licensing is used, the licensing must be for a minimum of three years. The three-year support period must cover both hardware maintenance and software support.
Additional components such as operator consoles and backup devices must also be included, if explicitly required for the installation, operation, administration, or maintenance of the SUT.
If software needs to be loaded from a particular device either during installation or for updates, the device must be included.
The exact nature of the support provided must be listed and must at minimum meet the requirements of Section 2.4.1.
A Configuration Diagram of the entire SUT must be provided in JPEG format. The diagram must include, but is not limited to:
The Full Disclosure Archive (FDA) refers to a soft-copy archive of all configuration files and information needed for reproducing the benchmark result. It also includes a copy of the benchmark result files for both the main run and the reproducibility run. The Full Disclosure Archive must be in JAR format.
The source and any relevant configuration files for the Provider Module used must be included in the corresponding sub-directory of the FDA.
All steps used to set up the Destinations must be disclosed in a file named destinationSetup.txt. If internal tools have been used, the equivalent steps using publicly available tools must be disclosed.
All benchmark configuration files must be included in the FDA.
The entire output directory from the run must be included in the FDA, and the entire output directory from the reproducibility run (see Section 3.4) must be included in the FDA as well, each under its respective sub-directory.
The console output of the ControlDriver and all SatelliteDrivers must be included in the DriverOutput sub-directory of the FDA.
All steps needed to set up and configure the JMS Server nodes, as well as any additional nodes they use, must be disclosed in a file in the JMSServerConfig sub-directory of the FDA. All relevant Server configuration files must be included in this directory.
For example, if the JMS Server uses a database server to persist messages, the following information must be provided in the DBSchema sub-directory of the FDA: