SPECjEnterprise2010 Run and Reporting Rules

Version 1.03

Last Modified: July 30, 2014


Table of Contents

Section 1 - Introduction

Section 2 - Running SPECjEnterprise2010

Section 3 - Reporting Results

Section 4 - Full Disclosure

Appendix A - Isolation Level Definitions


Section 1 - Introduction

The SPECjEnterprise2010 benchmark measures the end-to-end performance of a Java Enterprise Edition (Java EE) application in a global enterprise. It provides a standard for users of web applications to compare the performance of the software and hardware for all levels of the application stack, from the Java EE middleware server, Java Runtime Environment, network and operating system to the database server and storage.

This document specifies how the SPECjEnterprise2010 benchmark is to be run for measuring and publicly reporting performance results. These rules abide by the norms laid down by SPEC. This ensures that results generated with this benchmark are meaningful, comparable to other generated results, and are repeatable (with documentation covering factors pertinent to duplicating the results).

Per the SPEC license agreement, all results publicly disclosed must adhere to these Run and Reporting Rules.

The general philosophy behind the rules for running the SPECjEnterprise2010 benchmark is to ensure that an independent party can reproduce the reported results.

For results to be publishable, SPEC expects:

  • Proper use of the SPEC benchmark tools as provided.
  • Availability of all required Full Disclosure files.
  • Support for all of the appropriate standards specified in this document.

SPEC is aware of the importance of optimizations in producing the best system performance. SPEC is also aware that it is sometimes hard to draw an exact line between legitimate optimizations that happen to benefit SPEC benchmarks and optimizations that specifically target the SPEC benchmarks. However, with the rules below, SPEC wants to increase the awareness by implementers and end users of issues of unwanted benchmark-specific optimizations that would be incompatible with SPEC's goal of fair benchmarking.

  • Hardware and software used to run the SPECjEnterprise2010 benchmark must provide a suitable environment for running typical server-side Java programs. Note that this may be different from a typical environment for client Java programs.
  • Optimizations must generate correct code for a class of programs, where the class of programs must be larger than a single SPEC benchmark.
  • Optimizations must improve performance for a class of programs, where the class of programs must be larger than a single SPEC benchmark.
  • The vendor encourages the implementation for general use.
  • The implementation is generally available, documented and supported by the providing vendor.

Results must be reviewed and accepted by SPEC prior to public disclosure. The submitter must have a valid SPEC license for this benchmark to submit results. Furthermore, SPEC expects that any public use of results from this benchmark shall follow the SPEC Fair Use Policy, SPEC OSG Fair Use Policy and those specific to this benchmark (see the Fair Use section below).

In the case where it appears that these guidelines have been violated, SPEC may investigate and take action in accordance with current policies.

SPEC reserves the right to modify the benchmark codes, workloads, and rules of SPECjEnterprise2010 as deemed necessary to preserve the goal of fair benchmarking. SPEC will notify members and licensees whenever it makes changes to the benchmark and may rename the metrics. In the event that the workload or metric is changed, SPEC reserves the right to republish in summary form adapted results for previously published systems.


Section 2 - Running SPECjEnterprise2010

2.1 Definition of Terms

ACID Properties refers to the Atomicity, Consistency, Isolation and Durability (ACID) properties defined in section 2.10 Transaction Property Requirements.

A Business Transaction is a unit of work initiated by the Driver and may involve any combination of RMI-based Transactions, Web Service Transactions and Web Interactions.

The Components refers to Java EE elements such as EJBs, JSPs, web services, and Servlets.

The Configuration Diagram is the picture in a common graphics format that depicts the entire configuration (including the SUT, Supplier Emulator, and load drivers). The Configuration Diagram is part of a Full Disclosure. See 4.3 Configuration Diagram.

The Cycle Time refers to the time elapsed from the first byte sent by the Driver to request a Business Transaction until the first byte sent by the Driver to request the next Business Transaction. The Cycle Time is the sum of the Response Time and Delay Time.

The Delay Time refers to the time elapsed from the last byte received by the Driver to complete a Business Transaction until the first byte sent by the Driver to request the next Business Transaction. The Delay Time is a function of the Response Time and the Injection Rate. For a required Injection Rate, the Delay Time will be smaller when Response Times are longer. The Driver adjusts the delay to ensure that transactions are delivered at a steady rate.

A Database Transaction (as used in this specification) is a unit of work on the database with full ACID properties. A Database Transaction is initiated by the EJB Container or an enterprise bean as part of a Business Transaction.

The Dealer Driver refers to the part of the Driver that connects to the Dealer Domain of the Application.

The Deployment Unit refers to a Java EE Server or set of Servers in which the components from a particular domain are deployed.

The Driver refers to the provided client software used to generate the benchmark workload.

The EJB Container is the runtime environment that controls the life cycle of the enterprise beans of the SPECjEnterprise2010 workload. Refer to the EJB 3.0 and JPA 1.0 specifications for details.

Full Disclosure refers to the information that must be provided when a benchmark result is reported. For SPECjEnterprise2010, Full Disclosure includes the Submission File, Full Disclosure Archive ( FDA ), and Configuration Diagram. See 4.2 Full Disclosure Archive.

The Full Disclosure Archive ( FDA ) is an archive of files that is part of a Full Disclosure. It contains the benchmark run results and configuration information necessary to reproduce the results.

The Full Disclosure Report ( FDR ) is the formatted benchmark result generated from the Submission File. It includes links to the Full Disclosure Archive ( FDA ) and Configuration Diagram.

The Injection Rate (IR) refers to the rate at which Business Transaction requests are injected into the SUT.

The jEntClient refers to a thread that sends requests to components in a Deployment Unit. A jEntClient does not necessarily map into a single connection to the Server.

The Manufacturing Driver refers to the part of the Driver that connects to the Manufacturing Domain of the Application.

The Measurement Interval refers to a steady state period during the execution of the benchmark for which the test submitter is reporting a performance metric.

A node is the hardware system that runs one or more Java EE application servers or database server instances. For example, a blade enclosure would generally consist of multiple nodes.

Non-volatile storage refers to a storage device whose contents are preserved when its power is off.

A Relational Database Management System (RDBMS) is a database management system (DBMS) that is based on the relational model.

The Resource Manager is the software that manages a database and is the same as a Database Manager.

The Response Time refers to the time elapsed from when the first transaction in the Business Transaction is sent from the Driver to the SUT until the response from the last transaction in the Business Transaction is received by the Driver from the SUT.

An RMI Transaction is a remote method call on an Enterprise Java Bean of the Application.

The SPECjEnterprise2010 Application (or just " Application ") refers to the implementation of the Components provided for the SPECjEnterprise2010 workload.

The Supplier Emulator refers to the provided software used to emulate the external parts supplier (outside the SUT).

The SPECjEnterprise2010 Kit (or just " Kit ") refers to the complete kit provided for SPECjEnterprise2010. This includes the SPECjEnterprise2010 Application, Driver, Supplier Emulator, load programs, documentation and other supporting files.

The "SPECjEnterprise2010 EjOPS" is the primary SPECjEnterprise2010 metric and denotes the average number of successful jEnterprise Operations Per Second completed during the Measurement Interval. " SPECjEnterprise2010 EjOPS" is composed of the total number of Business Transactions completed in the Dealer Domain, added to the total number of work orders completed in the Manufacturing Domain, normalized per second.

The Submission File is an ASCII file containing the Full Disclosure information specified in 4.1 Submission File. The Submission File is part of a Full Disclosure.

The System Under Test (SUT) comprises all components which are being tested. This includes the Java EE Application Servers, Database Servers, network connections, etc. It does not include the Driver or the Supplier Emulator.

The Test Submitter (or just " Submitter ") refers to the organization that is submitting a benchmark result and is responsible for the accuracy of the submission.

The Web Container is the runtime environment that controls the execution of Servlets and JSPs. Refer to the Java EE 5 or later specifications for details.

A Web Interaction is an HTTP request to the Web-based portion of the Application.

The Web Services Container is the runtime environment that controls the execution of Web Service requests. Refer to the JAX-WS 2.0 specification for details.

A Web Service Transaction is a single web service invocation to a Web Service End point of the Application.

2.2 Product Requirements

2.2.1 Hardware Availability

All hardware required to run the SPECjEnterprise2010 Application must be generally available, supported and documented (see the General Availability section for details on general availability rules).

2.2.2 Software Availability

All software required to run the benchmark in the System Under Test (SUT) must be implemented by products that are generally available, supported and documented. These include but are not limited to:

  • Operating System
  • Web Server & Container (used in the Supplier Domain and Supplier Emulator)
  • Java EE Server
  • Java Runtime Environment (JRE)
  • Database Server
  • JDBC Driver

2.2.3 Java EE Compliance

The Java EE server must provide a runtime environment that meets the requirements of the Java Enterprise Edition (Java EE) Version 5 or later specifications during the benchmark run.

A major new version (e.g. 1.0, 2.0, etc.) of a Java EE server must have passed the Java EE 5 or later Compatibility Test Suite (CTS) by the product's general availability date.

A Java EE Server that has passed the Java EE Compatibility Test Suite (CTS) satisfies the Java EE compliance requirements for this benchmark regardless of the underlying hardware and other software used to run the benchmark on a specific configuration, provided the runtime configuration options result in behavior consistent with the Java EE specification.

Comment : The intent of this requirement is to ensure that the Java EE server is a complete implementation satisfying all requirements of the Java EE specification and to prevent any advantage gained by a server that implements only an incomplete or incompatible subset of the Java EE specification.

2.3 Scaling the Benchmark

The throughput of the SPECjEnterprise2010 benchmark is driven by the activity of the Dealer and Manufacturing drivers. The throughput of both applications is directly related to the chosen Injection Rate. To increase the throughput, the Injection Rate needs to be increased. The benchmark also requires a number of rows to be populated in the various tables. The scaling requirements are used to maintain the ratio between the Business Transaction load presented to the SUT, the cardinality of the tables accessed by the Business Transactions, the Injection Rate and the number of jEntClients generating the load.

2.3.1 Database Scaling Rules

Database scaling is defined by the Load Injection Rate (LIR), which is a step function of the Dealer Injection Rate (IR). The Load Injection Rate is defined to be:

LIR = CEILING(IR / step) * step

where the step is:

step = 10 ^ INT(LOG(IR))    {LOG is base 10}

For example:

IR        Step   LIR
1-10      1      1, 2, 3 ... 10
11-100    10     20, 30, 40 ... 100
101-1000  100    200, 300, 400 ... 1000
etc.

Comment: The Load Injection Rate is calculated automatically by the database load program from the Injection Rate.

The number of Assemblies to sell is scaled with the LIR up to a maximum of 75,000 and is calculated as:

A = MIN(75000, LIR * 100)

The number of manufacturing Locations is scaled with the LIR as:

L = CEILING(100 * LIR / A)

The cardinality (the number of rows in the table) of the C_Site, C_Supplier, S_Site, and S_Supplier tables is fixed. The M_Largeorder table is empty at the start of the run and is populated during the course of the run as noted in Table 1. The cardinality of the remaining tables will increase as functions of the IR, as depicted in Table 1.
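The scaling formulas above can be expressed as executable pseudocode (Python is used here purely for illustration; the database load program in the kit is the authoritative implementation):

```python
import math

def load_injection_rate(ir):
    """LIR = CEILING(IR / step) * step, where step = 10^INT(LOG10(IR))."""
    step = 10 ** int(math.log10(ir))
    return math.ceil(ir / step) * step

def assemblies(lir):
    """Number of Assemblies to sell: A = MIN(75000, LIR * 100)."""
    return min(75000, lir * 100)

def locations(lir):
    """Number of manufacturing Locations: L = CEILING(100 * LIR / A)."""
    return math.ceil(100 * lir / assemblies(lir))
```

For instance, an Injection Rate of 11 falls in the 11-100 band (step 10), so the database is loaded for a LIR of 20.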

2.3.2 Database Scaling Requirements

The following scaling requirements represent the initial configuration of the tables in the various domains.

TABLE 1 : Database Scaling Rules

Table Name           Cardinality (in rows)        Comments

Orders Domain
O_CUSTOMER           7500 * LIR
O_CUSTOMERINVENTORY  7.5 * 7500 * LIR (1)         Average of 7.5 cars per customer
O_ITEM               A                            Each O_ITEM maps to an M_PART referred to as an Assembly. Only Assemblies are sold. A = MIN(75000, LIR * 100)
O_ORDERS             750 * LIR
O_ORDERLINE          2250 * LIR (1)               Average of 3 per order

Manufacturing Domain
M_PARTS              A + 9.75 * A (1)             Assemblies + average of 10 components per Assembly (25% of Assemblies share 20% of components, i.e. 2 components; shared components are listed only once)
M_BOM                A * 10 (1)                   Average of 10 components per Assembly
M_WORKORDER          100 * LIR                    Load 100 * LIR WorkOrders distributed among L Locations
M_INVENTORY          (A + 9.75 * A) * L (1)       1-to-1 to M_PARTS
M_LARGEORDER         0                            Insertion rate: 0.07 * IR rows/sec

Supplier Domain
S_SUPPLIER           10
S_COMPONENT          9.75 * A * L (1)             All components available at all sites
S_SUPP_COMPONENT     9.75 * A * 10 (1)            Relationship between supplier and component
S_PURCH_ORDER        0.02 * 10 * 100 * LIR (1)    2% of components required for completing WorkOrders
S_PURCH_ORDERLINE    0.1 * 10 * 100 * LIR (1)     Average of 5 per purchase order

(1) These sizes may vary depending on the actual random numbers generated.

2.3.3 Scaling the Dealer Application

To stress the ability of the Java EE Server to handle concurrent sessions, the benchmark implements a fixed number of jEntClients equal to 10 * IR where IR is the chosen Injection Rate. The number does not change over the course of a benchmark run.

2.3.4 Scaling the Manufacturing Application

The Manufacturing Application scales in a similar manner to the Dealer Application. Since the goal is just-in-time manufacturing, as the number of orders increases, a corresponding increase in the rate at which widgets are manufactured is required. This is achieved by increasing the number of Planned Lines p proportionally to the IR as

p = 3 * IR

where 50% of the Transactions executed are RMI Transactions and 50% are Web Service Transactions.
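Taken together, the driver-side scaling rules of sections 2.3.3 and 2.3.4 reduce to two simple functions of the chosen Injection Rate (a Python sketch for illustration only; the function names are not part of the kit):

```python
def jent_clients(ir):
    """Fixed number of Dealer jEntClients for the whole run (section 2.3.3)."""
    return 10 * ir

def planned_lines(ir):
    """Number of Manufacturing Planned Lines p = 3 * IR (section 2.3.4);
    half of the resulting Transactions are RMI, half Web Service."""
    return 3 * ir
```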

2.4 Database Requirements

To satisfy the requirements of a wide variety of customers, the SPECjEnterprise2010 SUT consists of a number of databases and Java EE Servers that can be mapped to nodes as required. The implementation must not, however, take special advantage of the colocation of databases and Java EE Servers, other than the inherent elimination of WAN/LAN traffic.

The three application domains may be combined and deployed on a single application server instance. This means that the benchmark implementer can choose to run a single Deployment Unit that accesses a single database that contains the tables of all the domains. However, a benchmark implementer is also free to separate the domains into their own Deployment Units, with one or more database instances.

The workload is intended to model application performance where the world-wide enterprise that SPECjEnterprise2010 models performs Business Transactions across business domains employing resource managers. The Transaction property requirements described in section 2.10 must be supported to the same standard as that provided by XA-compliant recoverable 2-phase commits (see The Open Group XA Specification: http://www.opengroup.org/public/pubs/catalog/c193.htm) in Business Transactions that span multiple domains.

All tables must have the properly scaled number of rows as defined by the database population requirements.

The database schema must be derived directly from the reference schema scripts provided in the schema/sql directory of the SPECjEnterprise2010 kit. Derived means that the database schema used must match the schema that the scripts describe, not that the scripts must be used to physically generate the database schema. Modifications to the database schema are allowed only as documented below.

Additional database objects may be added to the reference schema and DDL modifications may be made to the reference schema, however all additions and/or modifications must be disclosed along with the specific reason for the addition/modification. The base tables and indexes in the reference schema cannot be replaced or deleted. Views are not allowed. The data types of fields can be modified provided they are semantically equivalent to the standard types specified in the scripts.

Comment : Replacing CHAR with VARCHAR would be considered semantically equivalent. Changing the size of a field (for example: increasing the size of a char field from 8 to 10) would not be considered semantically equivalent. Replacing CHAR with INTEGER (for example: zip code) would not be considered semantically equivalent.

Modifications that a customer may make for compatibility with a particular database server are allowed. Changes may also be necessary to allow the benchmark to run without the database becoming a bottleneck, subject to approval by SPEC. Examples of such changes include:

  • additional indexes on fields used in query predicates,
  • additional fields to support optimistic concurrency control,
  • specifying fields as 'NOT NULL', and
  • horizontally partitioning tables.

Scripts or any other files for schema generation provided by the vendors are for convenience only. They do not constitute the reference or baseline scripts in the schema/sql directory. Deviations from the scripts in the schema/sql directory must still be disclosed in the Submission File, even if vendor-provided scripts or other files for schema generation were used directly.

In any committed state the primary key values must be unique within each table. For example, in the case of a horizontally partitioned table, primary key values of rows across all partitions must be unique.

The databases must be populated using the supplied load programs prior to the start of each benchmark run. That is, after running the benchmark, the databases must be reloaded prior to a subsequent run. Modifications to the load programs are permitted for porting purposes. All such modifications made must be disclosed in the Submission File.

The database must be accessible at all times for external updates (updates via SQL external to the Application Server) and maintain data access transparency.

Data Access Transparency is the property of the system which removes from the application program any knowledge of the location and access mechanisms of partitioned data. An implementation that uses vertical and/or horizontal partitioning must meet the requirements for transparent data access. The system must prevent any data manipulation operation, including external applications, which would result in a violation of the consistency and isolation requirements. An external application must be able to manipulate any set of rows or columns transparently using the specified table names in the schema scripts.

External updates may be assumed to update the database version columns.

2.5 Application Deployment Requirements

The submitter must run the SPECjEnterprise2010 Application provided in the kit without modification, except as permitted by these run rules.

Changes necessary to configure and deploy the benchmark in a specific configuration are permitted to the Java Enterprise Edition 5.0 metadata (configuration files), subject to the requirements of this section. Vendor specific metadata (configuration files) are permitted.

Modifications to persistence.xml and orm.xml consistent with these Run Rules are permitted, but must be provided in the FDA and are subject to review for compliance.

Named query annotations may be overridden in orm.xml provided the new query remains an EJBQL query. Overridden EJBQL queries must return the same data set in the same format as the original query. Replacing EJBQL queries with native queries is not allowed, with one exception: native queries are permitted for queries of the form "select count(a) from ..." that are used exclusively during auditing prior to the ramp-up period, provided they are semantically equivalent.

Comment: The intent of the exception to allow native queries is to reduce the time to perform the auditing counts prior to the run start. There should be no impact to the outcome of the audit tests or the reported benchmark results.

The deployment must assume that the database could be modified by external applications.

Certain business methods are designed to allow retrying invocations that fail due to javax.persistence.OptimisticLockException. The (EJB) container may be configured to retry these business method invocations, and all methods of the corresponding session bean with the same name but different method signatures, using a different transaction for each business method invocation:

  • WorkOrderSession.scheduleWorkOrder()
  • WorkOrderSession.updateWorkOrder()
  • WorkOrderSession.completeWorkOrder()

All MDB onMessage() method invocations may be retried.

2.6 Driver Requirements for the Dealer Domain

The dealer domain is exercised using three transaction types:

  • Purchase - makes purchases for new vehicles
  • Manage - manages the customer inventory
  • Browse - browses through the items list that can be purchased

Please refer to the Design Document for definitions of each transaction.

2.6.1 Business Transaction Mix Requirements

Business Transactions are selected by the Driver based on the mix shown in Table 2. The actual mix achieved in the benchmark must be within a 5% range (+/- 2.5%) of the targeted mix for each type of Business Transaction. For example, the Browse transactions can vary between 47.5% and 52.5% of the total mix. The Driver checks and reports on whether the mix requirement was met.

TABLE 2 : Business Transaction Mix Requirements

Business Transaction Type Percent Mix
Purchase 25%
Manage 25%
Browse 50%
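The mix check the Driver performs can be illustrated as follows (a sketch only; the function name and the counts dictionary are hypothetical, not part of the benchmark kit):

```python
TARGET_MIX = {"Purchase": 25.0, "Manage": 25.0, "Browse": 50.0}
TOLERANCE = 2.5  # percentage points: a 5% range around each target

def mix_compliant(counts):
    """Check the achieved Business Transaction mix against Table 2."""
    total = sum(counts.values())
    for tx_type, target in TARGET_MIX.items():
        achieved = 100.0 * counts.get(tx_type, 0) / total
        if abs(achieved - target) > TOLERANCE:
            return False
    return True
```

For example, a run with 20% Purchase transactions fails the check because it falls below the 22.5% minimum.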

2.6.2 Response Time Requirements

The Driver measures and records the Response Time of the different types of Business Transactions. Only successfully completed Business Transactions in the Measurement Interval are included. At least 90% of the Business Transactions of each type must have a Response Time of less than the constraint specified in Table 3 below. The average Response Time of each Business Transaction type must not be greater than 0.1 seconds more than the recorded 90% Response Time. This requirement ensures that all users will see reasonable response times. For example, if the 90% Response Time of Purchase transactions is 1 second, then the average cannot be greater than 1.1 seconds. The Driver checks and reports on whether the response time requirements were met.

TABLE 3 : Response Time Requirements

Business Transaction 90% RT (in seconds)
Purchase 2
Manage 2
Browse 2
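The two response time conditions (the 90% limit from Table 3, and the average no more than 0.1 seconds above the recorded 90% Response Time) can be sketched as follows; this is illustrative only, and the Driver's exact percentile method may differ:

```python
import math

def response_times_compliant(response_times, limit_seconds=2.0):
    """Section 2.6.2 check for one Business Transaction type: the
    90th-percentile Response Time must be under the limit, and the
    average must not exceed that 90th-percentile time by more than 0.1 s."""
    rts = sorted(response_times)
    p90 = rts[math.ceil(0.9 * len(rts)) - 1]  # nearest-rank 90th percentile
    avg = sum(rts) / len(rts)
    return p90 < limit_seconds and avg <= p90 + 0.1
```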

2.6.3 Cycle Time Requirements

For each Business Transaction, the Driver selects cycle times from a negative exponential distribution, computed from the following equation:

Tc = -ln(x) * 10

where:

Tc = Cycle Time 
ln = natural log (base e) 
x  = random number with at least 31 bits of precision, from a uniform distribution such that (0 < x <= 1)

The distribution is truncated at 5 times the mean. For each Business Transaction, the Driver measures the Response Time Tr and computes the Delay Time Td as Td = Tc - Tr. If Td > 0, the Driver will sleep for this time before beginning the next Business Transaction. If the chosen cycle time Tc is smaller than Tr , then the actual cycle time ( Ta ) is larger than the chosen one.

The average actual cycle time is allowed to deviate from the targeted one by at most 5%. The Driver checks and reports on whether the cycle time requirements were met.
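The cycle time selection and delay computation described above can be sketched as follows (Python for illustration; the kit's Driver is the authoritative implementation):

```python
import math
import random

MEAN_CYCLE_TIME = 10.0  # seconds; the mean in Tc = -ln(x) * 10

def choose_cycle_time(rng=random):
    """Draw Tc from a negative exponential distribution, truncated at
    5 times the mean."""
    x = 1.0 - rng.random()  # uniform in (0, 1], as the spec requires 0 < x <= 1
    tc = -math.log(x) * MEAN_CYCLE_TIME
    return min(tc, 5 * MEAN_CYCLE_TIME)

def delay_time(tc, tr):
    """Td = Tc - Tr; the Driver sleeps only when Td > 0, otherwise the
    actual cycle time Ta ends up larger than the chosen Tc."""
    return max(0.0, tc - tr)
```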

2.6.4 Miscellaneous Requirements

The table below shows the range of values allowed for various quantities in the application. The Driver will check and report on whether these requirements were met.

TABLE 4 : Miscellaneous Dealer Requirements

Quantity                                      Targeted Value   Min. Allowed   Max. Allowed
Average Vehicles per Order                    26.6             25.27          27.93
Vehicle Purchasing Rate (/sec)                6.65 * IR        6.32 * IR      6.98 * IR
Percent Purchases that are Large Orders       10               9.5            10.5
Large Order Vehicle Purchasing Rate (/sec)    3.5 * IR         3.33 * IR      3.68 * IR
Average # of Vehicles per Large Order         140              133            147
Regular Order Vehicle Purchasing Rate (/sec)  3.15 * IR        2.99 * IR      3.31 * IR
Average # of Vehicles per Regular Order       14               13.3           14.7

2.6.5 Performance Metric

The metric for the Dealer Domain is Dealer Transactions/sec, composed of the total count of all Business Transactions successfully completed during the Measurement Interval divided by the length of the Measurement Interval in seconds.

2.7 Driver Requirements for the Manufacturing Domain

2.7.1 Response Time Requirements

The Manufacturing Driver measures and records the time taken for a work order to complete. Only successfully completed work orders in the Measurement Interval are included. At least 90% of the work orders must have a Response Time of less than 5 seconds. The average Response Time must not be greater than 0.1 seconds more than the 90% Response Time.

2.7.2 Miscellaneous Requirements

The table below shows the range of values allowed for various quantities in the Manufacturing Application. The Manufacturing Driver will check and report on whether the run meets these requirements.

TABLE 5 : Miscellaneous Manufacturing Requirements

Quantity                         Targeted Value   Min. Allowed   Max. Allowed
LargeOrderline Vehicle Rate/sec  3.5 * IR         3.15 * IR      3.85 * IR
Planned Line Vehicle Rate/sec    3.15 * IR        2.835 * IR     3.465 * IR
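The Min./Max. columns in Tables 4 and 5 are percentage bands around the targeted values (roughly +/- 5% for the Dealer quantities and +/- 10% for the Manufacturing rates). A generic check, shown here only as an illustration, might look like:

```python
def within_band(actual, target, pct):
    """True if a measured quantity lies within +/- pct percent of its
    targeted value, as in the Min./Max. columns of Tables 4 and 5."""
    return target * (1 - pct / 100.0) <= actual <= target * (1 + pct / 100.0)
```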

2.7.3 Performance Metric

The metric for the Manufacturing Domain is Workorders/sec, counting work orders whether produced on the Planned lines or on the LargeOrder lines.
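Combining the two domain metrics gives the reported result defined in section 2.1; as a sketch (illustrative Python, with hypothetical parameter names):

```python
def ejops(dealer_transactions, workorders, interval_seconds):
    """SPECjEnterprise2010 EjOPS: Dealer Domain Business Transactions plus
    Manufacturing Domain work orders completed in the Measurement Interval,
    normalized per second."""
    return (dealer_transactions + workorders) / interval_seconds
```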

2.8 Driver Rules

The Driver is provided as part of the SPECjEnterprise2010 kit.

The class files provided in the specjdriver.jar file of the SPECjEnterprise2010 kit must be used as is. No source code recompilation is allowed.

FABAN (http://faban.sunsource.net) is used as the infrastructure on which the specjdriver is built. The version included in the SPECjEnterprise2010 benchmark kit must be used as is. No source code recompilation is allowed.

The Driver must reside on one or more systems that are not part of the SUT.

Comment : The intent of this requirement is that the communication between the Driver and the SUT be accomplished over the network.

The Dealer Driver communicates with the SUT using HTTP. The Dealer Driver uses a single URL to establish a connection with the web tier. If more than one Driver system is used all Driver systems must have the same URL for the Dealer Driver.

The M_Driver communicates with the SUT using both RMI and Web Services accessor methods over a protocol supported by the Java EE Server (RMI/JRMP, RMI/IIOP, RMI/T3, JAX-WS). The M_Driver's RMI and Web Services clients must use single URLs to establish a connection with the EJB or Web Services tier. For RMI access the EJB object stubs invoked by the M_Driver on the Driver systems are limited to data marshaling functions, load-balancing and fail-over capabilities.

Comment : The purpose of the identical URL requirement is to ensure that any load balancing is done by the SUT and is transparent to the Driver systems.

As part of the run, the Driver checks many statistics and audits that the run has been properly executed. The Driver tests the statistics and audit results against the requirements specified in this document and marks each criterion as "PASS" or "FAIL" in the summary reports. A compliant run must not report failure of any criterion. Only results from compliant runs may be submitted for review and publication; non-compliant runs may not be published.

Pre-configured Driver decisions, based on specific knowledge of SPECjEnterprise2010 and/or the benchmark configuration, are disallowed.

The Driver systems may not perform any processing ordinarily performed by the SUT, as defined in section 2.12. This includes, but is not limited to:

  • Executing part or all of the SPECjEnterprise2010 Application
  • Caching database or Java EE Server specific data
  • Communicating information to the SUT regarding upcoming transactions

The Driver records all exceptions in error logs. The only expected errors are those related to transaction consistency when a transaction may occasionally rollback due to conflicts (i.e. OptimisticLockExceptions). Any other errors that appear in the logs must be explained in the Submission File.

2.9 Measurement Requirements

The Dealer and Manufacturing Applications must be started simultaneously at the start of a benchmark run. The Measurement Interval must be preceded by a ramp-up period of at least 10 minutes, at the end of which a steady state throughput level must be reached. After the end of the Measurement Interval, the steady state throughput level must be maintained for at least 5 minutes, after which the run can terminate.

2.9.1 Steady State

The reported metric must be computed over a Measurement Interval during which the throughput level is in a steady state condition that represents the true sustainable performance of the SUT. Each Measurement Interval must be at least 60 minutes long and should be representative of a 24-hour run.

Memory usage must be in a steady state during the Measurement Interval.

At least two database checkpoints or continuous checkpointing must take place during the Measurement Interval. The checkpoint interval must be disclosed in the FDR.

Comment : The intent is that any periodic fluctuations in the throughput or any cyclical activities, e.g. JVM garbage collection, database checkpoints, etc. be included as part of the Measurement Interval.

2.9.2 Reproducibility

To demonstrate the reproducibility of the steady state condition during the Measurement Interval, a minimum of one additional (and non-overlapping) Measurement Interval of the same duration as the reported Measurement Interval must be measured. Its "SPECjEnterprise2010 EjOPS" must be equal to or greater than the reported "SPECjEnterprise2010 EjOPS", and within 5% of it.
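Under the wording above, the reproducibility condition can be expressed as (a sketch; the function name is hypothetical):

```python
def reproducible(reported_ejops, repro_ejops):
    """The additional Measurement Interval's metric must be at least the
    reported EjOPS and no more than 5% above it."""
    return reported_ejops <= repro_ejops <= reported_ejops * 1.05
```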

2.10 Transaction Property Requirements

The Atomicity, Consistency, Isolation and Durability (ACID) properties of transaction processing systems must be supported by the SUT during the running of this benchmark.

2.10.1 Atomicity Requirements

The System Under Test must guarantee that all Transactions are atomic; the system will either perform all individual operations on the data, or will assure that no partially-completed operations leave any effects on the data. The tests described below are used to determine whether the System Under Test meets all the transactional atomicity requirements. If any of the tests has a result of "FAILED", then the SUT does not comply with the transaction atomicity requirements of the benchmark.

2.10.1.1 Atomicity Tests

2.10.1.1.1 Atomicity Test 1

This test checks whether the proper transaction atomicity levels are upheld in transactions associated with the benchmark. The test case places an order for immediate insertion into the dealership's inventory. An exception is raised after placing the order, while the inventory is being added to the dealer's inventory table. This must cause the transaction's changes to be removed from the database and all data restored to its state before the transaction was attempted. This test case consists of the following three steps:

  1. Query the database to check how many inventory items the dealership has, the dealership's account balance, and the number of orders which have been placed for the dealer inside the dealer domain. These numbers are the initial values against which the final step is compared after the transaction is rolled back.
  2. Drive the transaction described above, which causes a transaction rollback exception to occur.
  3. Query the database again to check how many inventory items the dealership has, the dealership's account balance, and the number of orders which have been placed for the dealer inside the dealer domain. These numbers must equal those in step 1 for the test case to be successful.
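The before/after comparison in step 3 can be sketched as follows. The class, method, and parameter names are illustrative; the real test reads these values from the dealer domain tables via the benchmark's auditing components:

```java
import java.math.BigDecimal;

// Sketch of step 3 of the atomicity tests: after the rollback,
// the re-queried values must equal the values captured in step 1.
public class AtomicityCheck {

    /** Returns true if the post-rollback state matches the pre-transaction state. */
    public static boolean rolledBackCleanly(
            long itemsBefore, BigDecimal balanceBefore, long ordersBefore,
            long itemsAfter, BigDecimal balanceAfter, long ordersAfter) {
        return itemsBefore == itemsAfter
            // compareTo, not equals: "100.00" and "100.0" are the same balance
            && balanceBefore.compareTo(balanceAfter) == 0
            && ordersBefore == ordersAfter;
    }
}
```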

2.10.1.1.2 Atomicity Test 2

This test verifies that the application server is working properly by inserting an order as in Atomicity Test 1, but without causing the exception, and verifying that the order appears in the database.

2.10.1.1.3 Atomicity Test 3

This test checks whether the proper transaction atomicity levels are upheld in transactions associated with the messaging subsystem. The test case places an order which contains a large order and an item to be inserted immediately into the dealership's inventory. An exception is raised after placing the order, while the inventory is being added to the dealer's inventory table. This must cause the transaction's changes to be removed from the database, the corresponding messages to be removed from the queue, and all data restored to its state before the transaction was attempted. This test case consists of the following three steps:

  1. Query the database to check how many inventory items the dealership has, the dealership's account balance, and the number of orders which have been placed for the dealer inside the dealer domain. The large order table is also queried to check how many large orders exist in the database before the transaction is attempted. These numbers are the initial values against which the final step is compared after the transaction is rolled back.
  2. Drive the transaction described above, which causes a transaction rollback exception to occur.
  3. Query the database again to check how many inventory items the dealership has, the dealership's account balance, and the number of orders which have been placed for the dealer inside the dealer domain. Also query the large order table to check how many large orders there are in the table. These numbers must equal those in step 1 for the test case to be successful.

2.10.2 Consistency and Isolation Requirements

This section describes the consistency and isolation requirements for Transactional Resources (currently this consists of but is not limited to Database and Messaging). Submitters can choose to implement the requirements in this section by any mechanism supported by the SUT. The isolation levels Strict READ_COMMITTED and Strict REPEATABLE_READ as used in this benchmark are defined in Appendix A.

The Java EE 5 specification defines the transactional requirements for Java EE 5 compliant servers. The transactional consistency and isolation requirements for the application represented by this benchmark are defined in this section of the run rules. Compliant results must satisfy the requirements of both Java EE 5 and these run rules.

2.10.2.1 JMS

For any committed transaction in which a JMS PERSISTENT message is produced (sent or published), the message must eventually be delivered once and only once. If a JMS PERSISTENT message is produced within a transaction which is subsequently rolled back, the message must not be delivered.

A message is considered to have been "delivered" if and only if it is consumed by a Message Driven Bean using a committed container managed transaction.

2.10.2.2 JPA Entities

The SPECjEnterprise benchmark application requires a minimum isolation level of Strict READ_COMMITTED.

Parts of the benchmark application rely on the logical isolation level of Strict REPEATABLE_READ for entities that are updated frequently. The application ensures this by setting appropriate locks as per the JPA 1.0 specification when the respective entity is not modified or deleted. Remark: by setting appropriate locks, or by modifying or deleting an entity, the JPA 1.0 specification guarantees an isolation level of REPEATABLE_READ. SPECjEnterprise2010 requires an isolation level of Strict REPEATABLE_READ in this case to prevent phantom deletes (see Appendix A.3).

The benchmark application is designed to preserve the referential integrity between entities. For example, to maintain integrity between Order and OrderLine, the application maintains the references to the Order from the OrderLine. The underlying Java EE 5 product implementation used for the benchmark must fully support this referential integrity per the JPA 1.0 specification.

Item entities are assumed to be infrequently updated by an external application only and not the benchmark application itself. Item entity query results and/or row state may be cached, provided that stale information is refreshed with a time-out interval of no more than 20 minutes using a logical isolation level of Strict READ_COMMITTED. In other words, no transaction may commit if it has used data from a stale read and/or row state that was obtained from the database more than 20 minutes previously. The effects of any item insertion, item deletion, or update of any item's details, are thereby ensured to be visible to all transactions that commit 20 minutes later, or thereafter.

Comment: A stale read is defined as the reading of entities or entity state that is older than the point at which the JPA persistence context was started.

Comment: It is possible to provide the application a reference to an Item entity, avoiding a copy of the Item entity data.
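The 20-minute staleness bound on cached Item data can be sketched as a simple age check at commit time. The class and method names are illustrative, not part of the benchmark kit:

```java
// Sketch of the Item cache rule of section 2.10.2.2: a transaction may
// commit only if no cached Item data it used was obtained from the
// database more than 20 minutes before the commit.
public class ItemCacheRule {

    static final long MAX_AGE_MS = 20L * 60L * 1000L; // 20 minutes

    /** Returns true if the oldest Item read used by the transaction is fresh enough. */
    public static boolean mayCommit(long commitTimeMs, long oldestItemReadTimeMs) {
        return commitTimeMs - oldestItemReadTimeMs <= MAX_AGE_MS;
    }
}
```

The rule says "no more than 20 minutes", so data that is exactly 20 minutes old is still acceptable.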

Optimistic verification must always be performed against the database.

If a transaction is terminated by successful execution of a commit statement then all changes made to entity data are accessible to all concurrent and subsequent transactions on the database.

2.10.2.2.1 Cache Timeout Test

The cache timeout test is run by the benchmark driver to ensure that specified JPA Entities are reloaded from the database during the benchmark run. There are two parts to the test. The first is conducted prior to ramp-up and the second immediately following the ramp-down period of a benchmark run.

Description of the tests :

  1. Immediately prior to ramp-up: the driver selects a random item entity using JPA.
  2. The price of this item is then incremented by $2000.00 via JDBC
  3. The driver saves a copy of the updated item state for the length of the run.
  4. Immediately following ramp-down the driver retrieves the same item from the database using JPA and verifies that the item has in fact been reloaded from the database.

2.10.2.2.2 External Update Test

The external update test is run by the benchmark driver to ensure that updates to the database which originate externally to the SPECjEnterprise benchmark are recognized and correctly handled by the Java EE Application server. The test is run immediately prior to the ramp-up period.

Description of the tests :

  1. A customer JPA Entity is selected via JPA
  2. The "BALANCE" field of the customer is incremented via JDBC to simulate a direct database update from some other application external to the benchmark.
  3. The customer is then immediately retrieved via JPA (in a new transaction) and the "BALANCE" field of the JPA entity is compared to ensure that the JDBC database update is reflected in the JPA Entity.

2.10.2.2.3 JPA Write Through Test

The JPA write through test is run by the benchmark driver to ensure that data is written from the JPA Entities to the database by the Java EE Application server. The test is run by the driver immediately prior to ramp-up.

Description of the tests :

  1. The driver selects an order entity using JPA.
  2. The "TOTAL" field of this order is then incremented by $3000.00 via JPA.
  3. In a new transaction and with a new database connection the same order is read directly from the database via JDBC to check that the update to the "TOTAL" field is reflected on the database.

2.10.2.2.4 Phantom Delete Test

The phantom delete test is run by the benchmark driver to test whether the phantom delete phenomenon as defined in Appendix A could occur. The test is run by the driver immediately prior to ramp-up.

Description of the tests :

  1. A customer is selected using JPA.
  2. The Dealer Driver starts an additional thread in parallel to the main thread. This thread executes the following steps:
    a. The customer is deleted using JPA.
    b. The changes are flushed to the database using JPA.
    c. Wait a certain amount of time (10 sec) until the Dealer Driver main thread has finished step 4.
    d. The transaction is rolled back to preserve the customer in the database.
  3. Wait a certain amount of time (5 sec) until the additional thread has reached step 2c.
  4. The existence of the customer is verified, ensuring that phantom deletes are prevented.

As with the other tests, due to the timing dependencies involved, passing this test is necessary but not sufficient to guarantee that no phantom deletes can occur.
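For illustration, the choreography of the phantom delete test can be re-created with two threads. This sketch uses latches in place of the fixed 5- and 10-second waits and a map in place of the database; all names are illustrative, and the uncommitted delete is represented by simply not touching the shared (committed) state:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CountDownLatch;

// Sketch of the phantom delete test choreography of section 2.10.2.2.4.
public class PhantomDeleteSketch {

    public static boolean run() {
        // Step 1: the customer exists in committed database state.
        ConcurrentMap<Long, String> committedCustomers = new ConcurrentHashMap<>();
        committedCustomers.put(42L, "customer-42");

        CountDownLatch deleteFlushed = new CountDownLatch(1); // additional thread reached step 2c
        CountDownLatch verified = new CountDownLatch(1);      // main thread finished step 4

        Thread deleter = new Thread(() -> {
            // Steps 2a/2b: delete and flush inside a still-open transaction.
            // Nothing is committed, so the shared committed state is untouched.
            deleteFlushed.countDown();
            try {
                verified.await(); // stand-in for the fixed wait of step 2c
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // Step 2d: roll back -- the uncommitted delete is discarded.
        });
        deleter.start();

        boolean stillVisible = false;
        try {
            deleteFlushed.await(); // stand-in for the fixed wait of step 3
            stillVisible = committedCustomers.containsKey(42L); // step 4
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            verified.countDown();
        }
        try {
            deleter.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return stillVisible; // true => the phantom delete was prevented
    }
}
```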

2.10.2.3 Database

All transactions must take the database from one consistent state to another. All transactions must have an isolation level of Strict READ_COMMITTED or higher; i.e., dirty reads and phantom deletes are not allowed.

If an entity is deployed with a logical isolation of Strict READ_COMMITTED, and if that entity is not changed in a given Database Transaction, then the Java EE Server must not issue a database update that would have the effect of losing any external updates that are applied while the Database Transaction is executing. If the Java EE Server does not have the ability to suppress unnecessary updates that could interfere with external updates, then all entities must be deployed using the Strict REPEATABLE_READ isolation level (or higher).

On an entity with an isolation level of Strict REPEATABLE_READ, optimizations to avoid database updates to an entity that has not been changed in a given transaction are not valid if the suppression of updates result in an effective isolation level lower than Strict REPEATABLE_READ. Additionally, if the Java EE Server pre-loads entity state while executing finder methods (to avoid re-selecting the data), the mechanism normally used to ensure Strict REPEATABLE_READ must still be effective, unless another mechanism is provided to ensure Strict REPEATABLE_READ in this case. For example, if SELECT FOR UPDATE would normally be used at select time, then SELECT FOR UPDATE should be used when executing those finder methods which pre-load entity state.

2.10.3 Durability Requirements

Transactions must be durable from any single point of failure on the SUT.

Comment : Durability from a single point of failure can be achieved by ensuring that the database and application server transactional logs can withstand failure at a single point. This is typically achieved by mirroring the logs onto separate non volatile storage.

Comment : Using cached disk for transactional logs is allowed as long as the disk device has a battery backup capable of meeting the 24 hour run requirement consistent with section 2.9.1.

Comment : Configurations where the logs and mirror devices are housed in the same disk array are accepted as being durable as long as all other durability criteria are met.

Comment : The word "disk" is used to represent non-volatile storage; these run rules apply equally to other non-volatile storage such as flash devices.

2.11 Supplier Emulator Requirements

The Supplier Emulator is provided as part of the SPECjEnterprise2010 Kit and can be deployed on any Web Server that supports Servlets 2.1 or higher.

The Supplier Emulator must reside on a system that is not part of the SUT. The Supplier Emulator may reside on one of the Driver systems.

Comment : The intent of this section is that the communication between the Supplier Emulator and the SUT be accomplished over the network.

2.12 System Under Test (SUT) Requirements

The SUT comprises all components which are being tested. This includes network connections, Web Servers, Java EE Application Servers, Database Servers, etc. The Web Server must support HTTP v1.1.

2.12.1 SUT Components

The SUT consists of:

  • The host system(s) (including hardware and software) required to support the Workload and databases.
  • All network components (hardware and software) between host machines which are part of the SUT and all network interfaces to the SUT.
  • Components which provide load balancing within the SUT.
  • Any software that is required to build and deploy the SPECjEnterprise2010 Application.

Comment 1 : Any components which are required to form the physical TCP/IP connections (commonly known as the NIC, Network Interface Card) from the host system(s) to the client machines are considered part of the SUT.

Comment 2 : A basic configuration consisting of one or more switches between the Driver and the SUT is not considered part of the SUT. However, if any software/hardware is used to influence the flow of traffic beyond basic IP routing and switching, it is considered part of the SUT. For example, if DNS Round Robin is used to implement load balancing, the DNS server is considered part of the SUT and therefore it must not run on a driver client.

2.12.2 Database Services

The SUT services HTTP requests and remote method calls from the Driver and returns results generated by the SPECjEnterprise2010 Application which may involve information retrieval from a RDBMS. The database must be accessible via JDBC.

2.12.3 Storage

The SUT must have sufficient on-line disk storage to support any expanding system files and the durable database population resulting from executing the SPECjEnterprise2010 Business Transaction mix for 24 (twenty four) hours at the reported "SPECjEnterprise2010 EjOPS".

2.13 Auditing

To ensure that SPECjEnterprise2010 results are correctly obtained and requirements are met, the driver makes explicit audit checks by calling auditing components on the SUT. All audit checks must pass; however, passing them is necessary but not sufficient to ensure that the benchmark run complies with these run and reporting rules.

These tests are designed to run both immediately prior to ramp-up and immediately following the end of the ramp-down period; no mechanism may be used to delay the start or end of the auditing checks. The driver includes the audit results with the run results. The table below lists the individual auditing activities the driver performs, the pass/fail criteria, and the specific purpose of each audit:

TABLE 6 : Audit List

Audit test # Description Purpose Criteria
1 Check initial database cardinalities Ensures proper database loading Database is loaded according to section 2.3.1, Database Scaling Rules
2 Check work orders from planned line transaction count against database Ensures that all successful work orders from the planned line have been persisted to the database WorkOrder planned line driver count <= WorkOrder planned line DB Count
3 Check component replenishment Ensures depleted components are timely replenished, limits the number of non-replenished components depleted component count <= 36 * IR
4 Check that deliveries are being made Ensures that the supplier emulator is working and keeping up by ensuring that purchase orders without deliveries are limited PurchaseOrder lines without deliveries < 10% of PurchaseOrderLines added to DB
5 Check new order driver transaction count against database Ensures that all successful new orders have been persisted to the database NewOrder driver count <= NewOrders added to DB during run
6 Check LargeOrder Processing Ensures LargeOrders are processed without excessive backlog pending LargeOrder Count <= IR * 25
7 Check run properties Ensures specified run time properties are set correctly itemsPerTxRate=100, maxItemsPerLoc=75000
8 Perform atomicity tests Ensures atomicity requirements as documented in section 2.10.1.1 are fulfilled See section 2.10.1.1
9 Perform cache timeout test Ensures cached JPA Entity beans are reloaded from the database See section 2.10.2.2.1
10 Perform external update test Ensures external updates are immediately visible to the Java EE Application server See section 2.10.2.2.2
11 Perform JPA Write-through test Ensures updates to JPA Entities are persisted to the database See section 2.10.2.2.3
12 Perform Phantom Delete test Ensures that the Database and Java enterprise server prevent phantom deletes See section 2.10.2.2.4
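Two of the numeric criteria in Table 6 can be sketched directly. The class and method names are illustrative; IR is the injection rate, and the counts come from the driver and the database:

```java
// Sketch of the numeric pass/fail criteria of audits 3 and 6 in Table 6.
public class AuditCriteria {

    /** Audit 3: depleted component count must not exceed 36 * IR. */
    public static boolean replenishmentOk(long depletedComponents, long injectionRate) {
        return depletedComponents <= 36L * injectionRate;
    }

    /** Audit 6: pending large order count must not exceed 25 * IR. */
    public static boolean largeOrderBacklogOk(long pendingLargeOrders, long injectionRate) {
        return pendingLargeOrders <= 25L * injectionRate;
    }
}
```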


Section 3 - Reporting Results

3.1 SPECjEnterprise2010 Performance Metric

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS").

The primary metric for the SPECjEnterprise2010 benchmark is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain as:

SPECjEnterprise2010 EjOPS = Dealer Transactions/sec + WorkOrders/sec

All reported "SPECjEnterprise2010 EjOPS" must be measured, rather than estimated, and expressed to exactly two decimal places, rounded up to the hundredth place. For example, if a measurement yielded 123.456 "SPECjEnterprise2010 EjOPS", this must be reported as 123.46 SPECjEnterprise2010 EjOPS.
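For illustration, the metric computation and its two-decimal formatting can be sketched as below. The rounding mode follows the "rounded up" wording of this section, and the class and method names are illustrative:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch of computing and expressing the primary metric:
// EjOPS = Dealer Transactions/sec + WorkOrders/sec, two decimal places.
public class EjopsMetric {

    /** Returns the metric formatted to exactly two decimal places, rounded up. */
    public static String report(double dealerTxPerSec, double workOrdersPerSec) {
        return BigDecimal.valueOf(dealerTxPerSec + workOrdersPerSec)
                .setScale(2, RoundingMode.UP)
                .toPlainString();
    }
}
```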

3.2 Required Reporting

Graphs of the transaction throughput versus elapsed time must be reported for the Dealer and Manufacturing applications for the entire test run. The x-axis represents the elapsed time from the start of the run. The y-axis represents the throughput in Business Transactions. At least 60 different intervals must be used with a maximum interval size of 30 seconds. The start and end of the Measurement Interval must also be reported. An example of such graphs is shown below.
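The interval sizing implied by these requirements (at least 60 intervals, each at most 30 seconds) can be sketched as a small helper. The class and method names are illustrative:

```java
// Sketch of choosing a reporting interval for the throughput graphs:
// the interval is capped at 30 seconds and shrunk further if 30 seconds
// would yield fewer than 60 intervals for the run.
public class ThroughputGraph {

    /** Interval size in seconds for a run of the given total length. */
    public static int intervalSeconds(int runSeconds) {
        return Math.max(1, Math.min(30, runSeconds / 60));
    }
}
```

For a one-hour run this gives 30-second intervals (120 intervals); shorter runs get proportionally smaller intervals so that at least 60 remain.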

3.3 Benchmark Optimization Rules

Benchmark-specific optimization is not allowed. Any optimization of either the configuration or the products used on the SUT must improve performance for a larger class of workloads than that defined by this benchmark, must be supported and recommended by the provider, and must be suitable for production use in an environment comparable to the one represented by the benchmark. Optimizations that take advantage of the benchmark's specific features are forbidden.

An example of an inappropriate optimization is one that requires access to the source code of the application.

Comment : The intent of this section is to encourage optimizations to be done automatically by the products.

3.4 General Availability

All hardware and software used must be orderable by customers. For any product not already generally released, the Submission File must include a committed general delivery date. That date must be within 3 months of the result's publication date. However, if Java and/or Java EE related licensing issues cause a change in software availability date after publication date, the change will be allowed to be made without penalty, subject to subcommittee review.

Comment 1 : The purpose of including general availability requirements is to ensure that the systems and their hardware and software components actually represent real products that solve real business and computational problems. Detailed Guidelines for General Availability are provided in SPEC Open Systems Group Policy Appendix C.

If a new or updated version of any software product is released causing earlier versions of said product to no longer be supported or encouraged by the providing vendor(s), new publications or submissions occurring after four complete review cycles have elapsed must use a version of the product encouraged by the providing vendor(s).

For example, with result review cycles ending April 16, April 30th, May 14th, May 28th, June 11th, and June 25th, if a new JDK version released between April 16th and April 29th contains critical fixes causing earlier versions of the JDK to no longer be supported or encouraged by the providing vendor(s), results submitted or published on June 25th must use the new JDK version.

All products used must be the proposed final versions and not prototypes. When the product is finally released, the product performance must not decrease by more than 5% of the published "SPECjEnterprise2010 EjOPS". If the submitter later finds the performance of the released system to be 5% lower than that reported for the pre-release system, then the submitter is required to report a corrected test result.

Comment 2 : The intent is to test products that customers will use, not prototypes. Beta versions of products can be used, provided that General Availability (GA) of the final product is within 3 months. If a beta version is used, the date reported in the results must be the GA date.

Comment 3 : The 5% degradation limit only applies to a difference in performance between the tested product and the GA product. Subsequent GA releases (to fix bugs, etc.) are not subject to this restriction.

3.5 Fair Use of Results

In order to publicly disclose SPECjEnterprise2010 results, the submitter must adhere to these fair use and reporting rules in addition to having followed the run rules described in this document.

SPECjEnterprise2010 Results must be reviewed and accepted by SPEC prior to public disclosure.

Any public use of results from this benchmark must also follow the SPEC Fair Use Policy, SPEC OSG Fair Use Policy and those specific to this benchmark.

3.5.1 Result Disclosure and Submission

Compliant runs need to be submitted to SPEC for review and must be accepted prior to public disclosure. Submissions must include the Submission File, a Configuration Diagram, and the Full Disclosure Archive for the run (see section 4).

The goal of the reporting rules is to ensure the system under test is sufficiently documented such that someone could reproduce the test and its results.

See section 5.3 of the SPECjEnterprise2010 User Guide for details on submitting results to SPEC.

Public disclosure of compliant SPECjEnterprise2010 results reviewed and accepted for publication by SPEC must always be quoted using the performance metric "SPECjEnterprise2010 EjOPS".

Test results that have not been accepted and published by SPEC must not be publicly disclosed except as noted in Section 3.6, Research and Academic Usage. Research and academic usage test results that have not been accepted and published by SPEC must not use the SPECjEnterprise2010 metric ("SPECjEnterprise2010 EjOPS").

3.5.2 Estimates

Estimates are not allowed.

3.5.3 Comparison to Other Benchmarks

SPECjEnterprise2010 results must not be publicly compared to results from any other benchmark.

3.6 Research and Academic Usage

SPEC encourages use of the SPECjEnterprise2010 benchmark in academic and research environments. The researcher is responsible for compliance with the terms of any underlying licenses (Application Server, DB Server, hardware, etc.).

3.6.1 Research and Academic Usage - Restrictions

It is understood that experiments in such environments may be conducted in a less formal fashion than that demanded of licensees submitting to the SPEC web site. SPEC encourages researchers to obey as many of the run rules as practical, even for informal research.

If research results are being published, SPEC requires:

  • The publication must not use the SPECjEnterprise2010 metrics ("SPECjEnterprise2010 EjOPS") unless the result has been reviewed and accepted by SPEC. Renaming or re-computation of the primary metrics is also not allowed.
  • The publication must clearly document any deviations from the SPECjEnterprise2010 Run and Reporting Rules.
  • The research results must not be compared with the results published on the SPEC web site.
  • The research results must not be compared with any other benchmark results.
  • In any publication where results will be compared between two competing products, SPEC expects that the Fair Use Guidelines be followed with regard to any claims made (see section 3.5).
  • If this project is sponsored/paid for by a third party, this must be disclosed.

SPEC reserves the right to require a full disclosure of any published results.

3.6.2 Research and Academic Usage - Disclosure

Public use of SPECjEnterprise2010 benchmark results are bound by the SPEC OSSC Fair Use Guidelines and the SPECjEnterprise2010 specific Run and Reporting Rules (this document). All publications must clearly state that these results have not been reviewed or accepted by SPEC using text equivalent to this:

SPECjEnterprise is a trademark of the Standard Performance Evaluation Corp. (SPEC). The SPECjEnterprise2010 results or findings in this publication have not been reviewed or accepted by SPEC, therefore no comparison nor performance inference can be made against any published SPEC result. The official web site for SPECjEnterprise2010 is located at http://www.spec.org/osg/Enterprise2010.

This disclosure must precede any results quoted from the tests. It must be displayed in the same font as the results being quoted.


Section 4 - Full Disclosure

A Full Disclosure is required in order for results to be compliant with the SPECjEnterprise2010 Run and Reporting Rules. For SPECjEnterprise2010, Full Disclosure includes the Submission File, Full Disclosure Archive ( FDA ), and Configuration Diagram. The requirements for each of these are described in this section.

A Full Disclosure Report ( FDR ) is generated from the Submission File. The FDR is the format used to display results on the SPEC web site. The Full Disclosure Report ( FDR ) in HTML format includes links to the Configuration Diagram and the Full Disclosure Archive.

Comment 1 : The intent of this disclosure is to be able to replicate the results of a submission of this benchmark given the equivalent hardware, software, and documentation.

Comment 2 : In the sections below, when there is no specific reference to where the disclosure must occur, it must occur in the Submission File. Disclosures in the Archive are explicitly called out.

4.1 Submission File

The Submission File is an ASCII file containing the information specified in this section.

An example of the Submission File is included in the kit, see "reporter/sample/Submission.txt"

4.1.1 Software

4.1.1.1 Software Products

All commercially available software products must be identified in the system.sw.* sections of the Submission File, along with the product name and version, vendor, and availability date. These products include, but are not limited to:

  • Java EE Servers
  • Web Servers used for the emulator, if not a Java EE server
  • J2SE/JVM products
  • Database management systems
  • JDBC driver products

Additional information required for specific product types is specified in the subsections below.

4.1.1.1.1 Java EE Servers

In addition to the standard information required for all software products, additional information on the Java EE servers is required in the system.sw.JEE[#].* sections as follows:

  • Date the Java EE server passed the CTS certification must be disclosed in the system.sw.JEE[#].date_passed_CTS field.
  • The CTS version this product is certified on must be disclosed in the system.sw.JEE[#].CTS_version field. Allowed versions for SPECjEnterprise2010 are 5.0 or later. Minor versions of these CTS tests are allowed.
  • The network protocol this Java EE product uses for communication between the Java EE application clients and the EJB container inside the Java EE server must be disclosed in the system.sw.JEE[#].EJB_protocol field.

4.1.1.1.2 J2SE/JVMs

The submitter must disclose the Operating System and hardware architecture this JVM is built for. In addition, 64-bit or 32-bit version for the JVM must be indicated. If the same binary image is used for many Operating Systems, one product may be declared with the system.sw.JVM[#].os field set to the architecture the binary image is built for.

4.1.1.1.3 Other software

All other software must be accompanied by a description of how it is used in the benchmark. The description must be provided in the system.sw.other[#].description field.

4.1.1.2 Software Instances

All information on software instance configurations must be disclosed in the system.sw sections of the Submission File. If multiple instances use exactly the same configuration and are used for exactly the same purpose, they may be listed as one single configuration. The hardware type running this instance configuration and the total number of instances must be disclosed in the system.sw.<type>.config[#].hw_type and system.sw.<type>.config[#].instances sections, respectively.

4.1.1.2.1 Java EE Server Instances in the SUT

Product references and configuration and tuning information must be disclosed in the system.sw.JEE[#] sections of the Submission File. These include references to the product sections for the Java EE Server itself and all other products used inside the Java EE Server, including, but not limited to the JVM, the JDBC driver(s), and any other products such as external CMP persistence managers that are not part of the Java EE Server product.

Moreover, the modules that this instance configuration deploys and runs must be disclosed in the system.sw.JEE[#].web.* and system.sw.JEE[#].EJB.* sections by marking the modules as true or false.

4.1.1.2.2 Emulator Instances

Product references and configuration and tuning information must be disclosed in the system.sw.Emulator[#] sections of the Submission File. These include references to the product sections, either a Java EE Server or a Web Server product and all other products used inside the Emulator instance.

4.1.1.2.3 Database Server Instances

The database server products, instance configurations, and database tuning information necessary to recreate the results must be disclosed in the system.sw.DB[#] sections of the Submission File.

4.1.1.2.4 Driver Instances

Each driver agent must be listed in the system.sw.driver[#] sections of the Submission File and all tuning on each agent must be documented in the tuning fields of this section.

JVM instances and tuning used for the launchers, the RMI registry, the controller, and the driver must be disclosed in the system.hw.notes section of the Submission File.

4.1.1.2.5 Other Software Instances

Software not used as part of the Java EE Server instances in the SUT or the Emulator instances but needed on the system, except the operating system itself, must be listed in the sections system.sw.other[#].* . A reference to the software product and instance tuning information must be disclosed as part of these sections.

4.1.1.3 Miscellaneous Disclosures

In addition to product information and instance configuration information, certain information about the software deployment, configuration, and tuning must be provided as listed in the following subsections.

4.1.1.3.1 Database Character Set

The character set used to store character strings in the database must be disclosed. For example: ASCII, ISOLATIN1 (ISO 8859-1), UTF-8, or UTF-16.

4.1.1.3.2 Load Orders Injection Rate

The Dealer Injection Rate used to load the database(s) must be disclosed in the benchmark.load.injection_rate section of the Submission File.

4.1.1.3.3 Schema Modifications

If the schema was changed from the reference one provided in the Kit (see the Database Requirements section), the reason for the modifications must be disclosed in the benchmark.schema_modifications section of the Submission File.

4.1.1.3.4 Load Program Modifications

If the load program was changed from the reference one provided in the Kit, the reason for the modifications must be disclosed in the benchmark.load_program_modifications section of the Submission File.

4.1.1.3.5 Isolation Requirements

The method used to meet the isolation requirements in section 2.10.2 must be disclosed in the benchmark.isolation_requirement_info section of the Submission File.

4.1.1.3.6 Durability Requirements

The method used to meet the durability requirements in section 2.10.3 must be disclosed in the benchmark.durability_requirement_info section of the Submission File.

4.1.1.3.7 Explanation of Errors

Any errors that appear in the Driver error logs must be explained in the notes section of the Submission File.

4.1.2 Hardware

4.1.2.1 Hardware Description

The number and types of systems used must be disclosed in the system.hw[#] section of the Submission File. The following information is required for each system configuration:

  • Label (a free text description of the purpose of the hardware)
  • Whether this hardware configuration is part of the SUT or not; driver and emulator configurations must have this field set to "false."
  • Vendor and model number
  • System availability date
  • Operating system (product name, vendor, and availability date)
  • CPU (processor type, number and speed (MHz/GHz) of the CPUs)
  • Cache (L1, L2, and "other")
  • Memory Amount (GB)
  • # and size of DIMMs
  • Memory Details - any configuration details that may affect performance, e.g. interleaving and access time.
  • Disks and file system used
  • Network interface
  • All software configurations and instances running on this hardware
  • Number of systems with this exact same configuration

Note: the system availability date, # and size of DIMMs, and memory details are not required for systems which are not part of the SUT.

4.1.2.2 Storage Requirements

The method used to meet the storage requirements of section 2.12.3 must be disclosed in the benchmark.storage_requirement_info section of the Submission File.

4.1.3 Network

4.1.3.1 Network Optimization

If any software/hardware is used to influence the flow of network traffic beyond basic IP routing and switching, the additional software/hardware and settings (see section 2.12) must be disclosed in the benchmark.other section of the Submission File.

4.1.3.2 Network Bandwidth

The bandwidth of the network used in the tested configuration must be disclosed in the benchmark.other section of the Submission File.

4.1.3.3 Network Protocol

The protocol used by the Driver to communicate with the Manufacturing domain on the SUT (the driver communicates directly with the Manufacturing domain's EJBs, it does not use the web interface) must be disclosed in the system.sw.JEE_Server.protocol section of the Submission File.

4.1.3.4 Load Balancing

The hardware and software used to perform load balancing must be disclosed in the benchmark.other section of the Submission File. If the driver systems perform any load-balancing functions as defined in the Driver Rules section, the details of these functions must also be disclosed.

4.1.4 Benchmark Run Results

4.1.4.1 Benchmark Version

The version number of the SPECjEnterprise2010 Kit used to run the benchmark must be included in the Submission File.

4.1.4.2 Reproducibility Run "SPECjEnterprise2010 EjOPS"

The "SPECjEnterprise2010 EjOPS" from the reproducibility run (see Reproducibility section) must be disclosed in the result.reproducibility_run.ejops field of the Submission File.

4.1.5 Bill of Materials (BOM)

The Bill of Materials, which contains the hardware and software used in the SUT, must be disclosed in the bom.* section of the Submission File.

The intent of the BOM is to enable a reviewer to confirm that the tested configuration satisfies the run rule requirements and to document the components used with sufficient detail to enable a customer to reproduce the tested configuration and obtain pricing information from the supplying vendors for each component of the SUT.

4.1.5.1 BOM Suppliers

The suppliers for all components must be disclosed. All items supplied by a third party (i.e. not the Test Submitter) must be explicitly stated. Each third party supplier's items must be listed separately.

4.1.5.2 BOM Level of Detail

The Bill of Materials must reflect the level of detail a customer would see on an itemized bill (that is, it should list individual items in the SUT that are not part of a standard package).

For each item, the BOM should include the item's supplier, description, ID (the code used by the vendor when ordering the item), and the quantity of that item in the SUT.

4.1.5.3 BOM Hardware Component Substitution

For ease of benchmarking, the BOM may include hardware components that are different from the tested system, as long as the substituted components perform equivalently or better in the benchmark. Any substitutions must be disclosed in the BOM. For example, disk drives with lower capacity or speed in the tested system can be replaced by faster ones in the BOM. However, it is not permissible to replace key components such as CPU, memory or any software.

4.1.5.4 BOM SUT

All components of the SUT (see section 2.12.1) must be included, including all hardware, software, and support for a three-year period.

All hardware components included must be new and not reconditioned or previously owned. The software may use term-limited licenses (i.e., software leasing), provided there are no additional conditions associated with the term-limited licensing. If term-limited licensing is used, the licensing must be for a minimum of three years. The three-year support period must cover both hardware maintenance and software support.

The number of users for SPECjEnterprise2010 is 13 * IR (where 10 * IR are Internet users and 3 * IR are Intranet users). Any usage based licensing for the above number of users should be based on the licensing policy of the company supplying the licensed component.
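The user-count arithmetic above can be sketched as follows; the class and method names are illustrative only and are not part of the benchmark kit.

```java
public class UserLicenseCount {
    // SPECjEnterprise2010 user counts as a function of the Injection Rate (IR):
    // 10 * IR Internet users and 3 * IR Intranet users, i.e. 13 * IR in total.
    static int internetUsers(int ir) { return 10 * ir; }
    static int intranetUsers(int ir) { return 3 * ir; }
    static int totalUsers(int ir)    { return internetUsers(ir) + intranetUsers(ir); }

    public static void main(String[] args) {
        int ir = 150; // example Injection Rate
        System.out.println("Internet users: " + internetUsers(ir)); // 1500
        System.out.println("Intranet users: " + intranetUsers(ir)); // 450
        System.out.println("Total users:    " + totalUsers(ir));    // 1950
    }
}
```

Usage-based license quantities would then be derived from these totals according to each supplier's licensing policy.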

4.1.5.5 BOM Additional Components

Additional components such as operator consoles and backup devices must also be included, if explicitly required for the installation, operation, administration, or maintenance of the SUT.

If software needs to be loaded from a particular device either during installation or for updates, the device must be included.

4.1.5.6 BOM Support

Hardware maintenance and software support must include 7 days/week, 24 hours/day coverage, either on-site or, if available as a standard offering, via a central support facility.

If a central support facility is utilized, then all hardware and software required to connect to the central support must be installed and functional on the SUT during the measurement run and included.

The response time for hardware maintenance requests must not exceed 4 hours on any component whose replacement is necessary for the SUT to return to the tested configuration.

The use of spares in lieu of the hardware maintenance requirements is allowed if the part to be replaced can be identified as having failed by the customer within 4 hours. An additional 10% of the quantity of the designated part, with a minimum of 2, must be included. A support service for the spares, which provides on-site replacement within 7 days, must also be included for the support period.
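The spares sizing rule can be expressed as a small calculation. Note one assumption in this sketch: the 10% figure is rounded up to a whole unit, since the rule text does not state a rounding direction and partial spares are not meaningful.

```java
public class SparesCalculator {
    // Spares required for a designated part: an additional 10% of the part's
    // quantity, with a minimum of 2. Integer arithmetic computes the ceiling
    // of quantity/10 exactly, avoiding floating-point rounding.
    static int sparesRequired(int partQuantity) {
        int tenPercentRoundedUp = (partQuantity + 9) / 10;
        return Math.max(2, tenPercentRoundedUp);
    }

    public static void main(String[] args) {
        System.out.println(sparesRequired(8));   // 10% = 0.8 -> minimum of 2 applies: 2
        System.out.println(sparesRequired(25));  // 10% = 2.5 -> rounded up: 3
        System.out.println(sparesRequired(100)); // 10% = 10
    }
}
```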

Comment: The use of spares is intended to assist in complying with the 4-hour maximum hardware maintenance response requirement. It cannot be a substitute for maintenance support, as the configuration documented in the BOM must maintain the same quantities of components, including spares, for 3 years.

Software support requests must include problem acknowledgment within 4 hours.

Comment: Customers may have to flag the request with the appropriate severity to ensure acknowledgement within 4 hours.

No additional charges will be incurred for the resolution of software defects. Problem resolution for more than 10 non-defect problems per year is permitted to incur additional charges. Software support must include all available maintenance updates over the support period.

4.2 Full Disclosure Archive

The Full Disclosure Archive (FDA) must be in ZIP, TAR or JAR format.

4.2.1 Database Configuration

4.2.1.1 Database Logical Volume Configuration Files and Scripts

All scripts/programs and configuration files used to create any logical volumes for the database devices must be included as part of the FDA. The distribution of tables and logs across all media must be explicitly depicted.

4.2.1.2 Database Table/Index Configuration Files and Scripts

All table definition statements and all other statements used to set up the database must be disclosed as part of the FDA. The configuration files or scripts used to create the database should be included in the "Schema" sub-directory.

4.2.1.3 Database Load Program

If the load programs in the SPECjEnterprise2010 kit were modified (see the Database Requirements section), all such modifications must be disclosed in the benchmark.load_program_modifications section of the Submission File and the modified programs must be included in the FDA.

4.2.2 Benchmark Configuration

4.2.2.1 Deployment Descriptors

All deployment descriptors used must be included in the FDA under the "Deploy" sub-directory. The deployed EAR file must also be in the "Deploy" directory.

Any vendor-specific tools, flags or properties used to perform ejbStore optimizations that are not transparent to the user must be disclosed in the system.sw.JEE_Server.tuning section of the Submission File.

4.2.2.2 EJB Deployment

All steps used to build and deploy the SPECjEnterprise2010 EJBs must be disclosed in a file called "deployCmds.txt" within the "Deploy" sub-directory of the FDA.

4.2.2.3 Driver/Launcher

The input parameters to the Driver must be disclosed by including the following files used to run the benchmark in the FDA:

  • config/run.properties,
  • config/<appserver>.env files, and
  • bin/driver.sh (or bin/driver.bat) script

If the Launcher package was modified, its source must be included in the FDA.

4.2.3 Benchmark Run Results

4.2.3.1 Final Run Output Directory

The entire output directory from the run must be included in the FDA under the "FinalRun" sub-directory.

4.2.3.2 Reproducibility Run Output Directory

The entire output directory from the reproducibility run (see Reproducibility section) must be included in the FDA under the "RepeatRun" sub-directory.

4.2.3.3 WorkOrder Throughput Graph

A graph, in PNG, JPEG or GIF format, of the workorder throughput versus elapsed time (see Required Reporting section) must be included in the FDA under the "FinalRun" sub-directory.

4.3 Configuration Diagram

A Configuration Diagram of the entire configuration (including the SUT, Supplier Emulator, and load drivers) must be provided in PNG, JPEG or GIF format. The diagram should include, but is not limited to:

  • Number and type of processors.
  • Number of LAN (e.g., Ethernet) connections, including routers, etc., that were physically used in the benchmark run.
  • The type and the run-time execution location of software components (e.g., Java EE Server, DBMS, benchmark driver processes, software load balancers, etc.).
  • A clear indication of which components are part of the SUT.


Appendix A - Isolation Level Definitions

A.1 Isolation Level Phenomena

The various isolation levels are described in ANSI SQL and J2SE documentation for java.sql.Connection. ANSI SQL defines isolation levels in terms of phenomena (P1, P2, P3) that are or are not possible under a specific isolation level. The interpretations of P1, P2, P3 and PD used in this benchmark are as follows:

P1 (Dirty read)
Transaction T1 modifies a row. Another transaction T2 then reads that row and obtains the modified value, before T1 has completed a COMMIT or ROLLBACK. Transaction T2 eventually commits successfully; it does not matter whether T1 commits or rolls back and whether it does so before or after T2 commits.

Note: a transaction "obtains the modified value" of a row if the modified value is used inside the database server during "selection" (predicate evaluation, i.e. evaluation of the "where" clause) or if the value is used during "projection" (i.e. determining the column list to be returned to the application).

P2 (Non-repeatable read)
Transaction T1 reads a row. Another transaction T2 then modifies or deletes that row, before T1 has completed a COMMIT. Both transactions eventually commit successfully.

P3 (Phantom read)
Transaction T1 reads the set of rows N that satisfy some search condition. Transaction T2 then generates one or more rows that satisfy the search condition used by T1, before T1 has completed a COMMIT. Both transactions eventually commit successfully.

PD (Phantom delete)
Transaction T1 deletes one or more rows. Before T1 commits or rolls back, another transaction T2 then reads the set of rows N that satisfy some search condition, but the deleted rows are excluded from N even if they would have satisfied the search condition. Transaction T2 eventually commits successfully; it does not matter whether T1 commits or rolls back and whether it does so before or after T2 commits.
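For reference, the standard JDBC isolation levels are exposed as constants on java.sql.Connection, as mentioned above; the sketch below prints them and shows how a level would be requested on a live connection. Note that the benchmark's Strict levels additionally disallow PD, which the plain JDBC levels do not by themselves guarantee.

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // The standard JDBC isolation-level constants from java.sql.Connection,
        // ordered from weakest to strongest guarantees:
        System.out.println("READ_UNCOMMITTED = " + Connection.TRANSACTION_READ_UNCOMMITTED); // allows P1, P2, P3
        System.out.println("READ_COMMITTED   = " + Connection.TRANSACTION_READ_COMMITTED);   // disallows P1
        System.out.println("REPEATABLE_READ  = " + Connection.TRANSACTION_REPEATABLE_READ);  // disallows P1, P2
        System.out.println("SERIALIZABLE     = " + Connection.TRANSACTION_SERIALIZABLE);     // disallows P1, P2, P3

        // On a live connection the level would be selected like this
        // (not executed here, since no database is available):
        // conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }
}
```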

A.2 Strict READ_COMMITTED

An isolation level of Strict READ_COMMITTED is defined to disallow P1 and PD but allow P2 and P3.

A.3 Strict REPEATABLE_READ

An isolation level of Strict REPEATABLE_READ is defined to disallow P1, PD and P2 but allow P3.

The ANSI SQL definition for REPEATABLE_READ disallows P2. Disallowing P2 also disallows the anomaly known as Read Skew. Read Skew arises in situations such as the following:

  • there is an integrity constraint between X and Y
  • transaction T1 reads X
  • transaction T2 modifies X and Y to new values and commits successfully before T1 has completed
  • T1 now reads Y and sees a state that violates the constraint between X and Y
  • T1 eventually commits successfully

If P2 is disallowed, transaction T2, upon trying to modify X, would either be blocked until T1 completes, or it would be allowed to proceed but transaction T1 would eventually have to be rolled back.
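The Read Skew interleaving above can be traced in a short sketch. This is a plain in-memory simulation of the anomaly, not actual database behavior, and all names and values are illustrative.

```java
public class ReadSkewDemo {
    // Invariant maintained by the application: x + y == 100.
    static int x = 40, y = 60;

    // Simulates the Read Skew interleaving: T1 reads X, then T2 updates
    // X and Y (preserving the invariant) and commits, then T1 reads Y.
    static int sumObservedByT1() {
        int t1ReadsX = x;      // T1 reads X (sees 40)
        x = 70; y = 30;        // T2 modifies X and Y and commits (70 + 30 == 100)
        int t1ReadsY = y;      // T1 reads Y (sees 30, the post-T2 value)
        return t1ReadsX + t1ReadsY;
    }

    public static void main(String[] args) {
        // T1 observes x + y == 70, violating the invariant x + y == 100,
        // even though every committed state individually satisfied it.
        System.out.println("T1 observes x + y = " + sumObservedByT1());
    }
}
```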

Disallowing P2 also disallows the anomaly known as Write Skew. Write Skew arises in situations such as the following:

  • there is an integrity constraint between X and Y
  • transaction T1 reads X and Y
  • transaction T2 reads X and Y, writes X, and commits successfully
  • transaction T1 then writes Y and also commits successfully

If P2 is disallowed, either T1 or T2 would have to be rolled back.


Product and service names mentioned herein may be the trademarks of their respective owners.

Copyright © 2001-2012 Standard Performance Evaluation Corporation
All Rights Reserved