
SPECjAppServer2001 Run and Reporting Rules

Version 1.23
October 18, 2002

Table of Contents

Section 1 - Introduction

Section 2 - Running SPECjAppServer2001

Section 3 - Reporting Results

Section 4 - Pricing

Section 5 - Full Disclosure

Appendix A - SPECjAppServer2001 Transactions

Appendix B - Centralized Workload Category Examples

Appendix C - Challenging Results Based on Supplied Pricing Information

Appendix D - Run Rules Document Change Log



Section 1 - Introduction

This document specifies how the SPECjAppServer2001 benchmark is to be run for measuring and publicly reporting performance results. These rules abide by the norms laid down by SPEC. This ensures that results generated with this benchmark are meaningful, comparable to other generated results, and are repeatable (with documentation covering factors pertinent to duplicating the results).

Per the SPEC license agreement, all results publicly disclosed must adhere to these Run and Reporting Rules.

The general philosophy behind the rules for running the SPECjAppServer2001 benchmark is to ensure that an independent party can reproduce the reported results.

For results to be publishable, SPEC expects:

SPEC is aware of the importance of optimizations in producing the best system performance. SPEC is also aware that it is sometimes hard to draw an exact line between legitimate optimizations that happen to benefit SPEC benchmarks and optimizations that specifically target the SPEC benchmarks. However, with the rules below, SPEC wants to increase the awareness by implementers and end users of issues of unwanted benchmark-specific optimizations that would be incompatible with SPEC's goal of fair benchmarking.

Results must be reviewed and approved by SPEC prior to public disclosure. The submitter must have a valid SPEC license for this benchmark to submit results. Furthermore, SPEC expects that any public use of results from this benchmark shall follow the SPEC OSG Fair Use Policy and those specific to this benchmark (see section 3.7.3 below). In the case where it appears that these guidelines have been violated, SPEC may investigate and request that the offense be corrected or the results resubmitted.

SPEC reserves the right to modify the benchmark codes, workloads, and rules of SPECjAppServer2001 as deemed necessary to preserve the goal of fair benchmarking. SPEC will notify members and licensees whenever it makes changes to the benchmark and may rename the metrics. In the event that the workload or metric is changed, SPEC reserves the right to republish in summary form adapted results for previously published systems.


Section 2 - Running SPECjAppServer2001

2.1 Definition of Terms

The term Deployment Unit refers to an Enterprise Java Bean (EJB) Container or set of Containers in which the Beans from a particular domain are deployed.

The term ECclient refers to a thread or process that holds references to EJBs in a Deployment Unit. An ECclient does not necessarily map to a TCP connection to the Container.

The term SPECjAppServer2001 Reference Beans refers to the implementation of the EJBs provided for the SPECjAppServer2001 workload.

The term SPECjAppServer2001 Kit refers to the complete kit provided for SPECjAppServer2001. This includes the SPECjAppServer2001 Reference Beans, the Driver and load programs.

The term BOPS is the primary SPECjAppServer2001 metric and denotes the average number of successful Business Operations Per Second completed during the Measurement Interval. BOPS is composed of the total number of business transactions completed in the Customer Domain, added to the total number of workorders completed in the Manufacturing Domain, normalized per second.

The term Resource Manager is the software that manages a database and is the same as a Database Manager.

The following terms are defined in the Workload Description of the SPECjAppServer2001 Design Document:

2.2 Software Product Requirements

2.2.1 Component Availability

All software components required to run the benchmark in the System Under Test (SUT) must be implemented by products that are generally available, supported and documented (see section 3.4 for general availability rules). These include but are not limited to:

2.2.2 J2EE Compliance

The SUT must provide all application components with a runtime environment that meets the requirements of the Java 2 Platform, Enterprise Edition (J2EE), Version 1.2 or Version 1.3 specification during the benchmark run. The SUT must meet the J2EE compatibility requirements and must be branded Java Compatible, Enterprise Edition.

Comment: A new version of a J2EE compatible product must have passed the J2EE Compatibility Test Suite (CTS) by the product's availability date. See section 3.4 for availability requirements.

2.2.3 Benchmark Kit

The class files provided in the driver.jar file of the SPECjAppServer2001 kit must be used as is. No source code recompilation is allowed.

The source files provided for the SPECjAppServer2001 Reference Beans that require isolation level READ_COMMITTED (see section 2.10.4.3) in the SPECjAppServer2001 kit must be used as is (i.e., no source code modification is allowed). The remaining SPECjAppServer2001 Reference Bean source files may be updated to support DBMS or JDBC differences. These updates must be documented (see section 2.5).

2.3 Scaling the Benchmark

The throughput of the SPECjAppServer2001 benchmark is driven by the activity of the OrderEntry and Manufacturing applications. The throughput of both applications is directly related to the chosen Injection Rate. To increase the throughput, the Injection Rate needs to be increased. The benchmark also requires a number of rows to be populated in the various tables. The scaling requirements are used to maintain the ratio between the ECtransaction load presented to the SUT, the cardinality of the tables accessed by the ECtransactions, the Injection Rate and the number of ECclients generating the load.

2.3.1 Scaling Requirements

Database scaling is defined by the Orders Injection Rate Ir. The scaling is done as a step function and does not cause the database to increase linearly in size.

The cardinality (the number of rows in the table) of the Site and Supplier tables is fixed. The cardinality of the customer related tables, namely, customer, orders, orderline, and workorder will increase as a step function C, which is defined as:

C = the smallest multiple of 10 >= Ir

For example, if Ir = 72 then C = 80.

The cardinality of the item related tables, namely, Parts, BOM, Inventory and Item will increase as a step function P, which is defined as:

P = the smallest multiple of 100 >= Ir

For example, if Ir = 72 then P = 100.
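Expressed in code, the two step functions are straightforward. The following is a minimal sketch in Java; the class and method names are illustrative and are not part of the SPECjAppServer2001 Kit:

    // Illustrative sketch of the scaling step functions; not part of the Kit.
    public final class Scaling {

        // C = the smallest multiple of 10 >= Ir (customer-related tables)
        static int customerScale(int ir) {
            return ((ir + 9) / 10) * 10;
        }

        // P = the smallest multiple of 100 >= Ir (item-related tables)
        static int itemScale(int ir) {
            return ((ir + 99) / 100) * 100;
        }

        public static void main(String[] args) {
            int ir = 72;
            System.out.println("C = " + customerScale(ir)); // prints C = 80
            System.out.println("P = " + itemScale(ir));     // prints P = 100
        }
    }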

2.3.2 Database Scaling Rules

The following scaling requirements represent the initial configuration of the tables in the various domains, where C and P are defined above.

TABLE 1 : Database Scaling Rules

Domain          Table Name            Cardinality (in rows)   Comments
Corporate       C_Site                1
                C_Supplier            10
                C_Customer            75 * C
                C_Rule                1
                C_Discount            6
                C_Parts               (11 * P) [1]            P Assemblies + 10 * P Components
Orders          O_Customer            75 * C                  NUM_CUSTOMERS
                O_Item                P                       NUM_ITEMS
                O_Orders              75 * C
                O_Orderline           (225 * C) [1]           Avg. of 3 per order
Manufacturing   M_Parts               (11 * P) [1]
                M_BOM                 (10 * P) [1]
                M_Workorder           P
                M_Inventory           (11 * P) [1]
Supplier        S_Site                1
                S_Supplier            10
                S_Component           (10 * P) [1]            Avg. of 10 components per assembly
                S_Supp_Component      (100 * P) [1]
                S_PurchaseOrder       (.2 * P) [1]            2% of components
                S_PurchaseOrderLine   P [1]                   Avg. of 5 per purchase order

[1] These sizes may vary depending on actual random numbers generated.

2.3.3 Centralized vs. Distributed

To satisfy the requirements of a wide variety of customers, the SPECjAppServer2001 benchmark can be run in Centralized or Distributed mode. The SUT consists of one or more nodes; the number of nodes is freely chosen by the implementer. Databases and EJB Containers can be mapped to nodes as required. The implementation must not, however, take special advantage of the co-location of databases and EJB Containers, other than the inherent elimination of WAN/LAN traffic.

In the Centralized version of the workload, all four domains may be combined. This means that the benchmark implementer can choose to run a single Deployment Unit that accesses a single database containing the tables of all the domains. However, a benchmark implementer is free to separate the domains into their own Deployment Units and still run a single database. There are no requirements for XA 2-phase commits in the Centralized workload.

The Distributed version of the workload is intended to model application performance where the world-wide enterprise that SPECjAppServer2001 models performs transactions across business domains employing heterogeneous resource managers. In this model, the workload requires a separate Deployment Unit and a separate DBMS instance for each domain. XA-compliant recoverable 2-phase commits (see The Open Group XA Specification: http://www.opengroup.org/public/pubs/catalog/c193.htm) are required in ECtransactions that span multiple domains. The configuration for this 2-phase commit is required to be done in a way that would support heterogeneous systems. Even though implementations are likely to use the same Resource Manager for all the domains, the EJB Servers/Containers and Resource Managers cannot take advantage of the knowledge of homogeneous Resource Managers to optimize the 2-phase commits.

2.3.4 Scaling the OrderEntry Application

To stress the ability of the Container to handle concurrent sessions, the benchmark requires a minimum number of ECclients equal to 5 * Ir, where Ir is the chosen Injection Rate. This number does not change over the course of a benchmark run.

For each new order, the customer to use is defined in the SPECjAppServer2001 Design Document, section 3.7, where nCust = 100 * C. For example, if Ir = 100, the database is initially populated with 7500 customers (NUM_CUSTOMERS), and nCust = 10000.

2.3.5 Scaling the Manufacturing Application

The Manufacturing Application scales in a similar manner to the OrderEntry Application. Since the goal is just-in-time manufacturing, as the number of orders increases, a corresponding increase in the rate at which widgets are manufactured is required. This is achieved by increasing the number of Planned Lines p proportionally to Ir as

p = 3 * Ir

Since the arrival of large orders automatically determines the LargeOrder Lines, nothing special needs to be done about these.

2.4 Database Requirements

All tables must have the properly scaled number of rows as defined by the database population requirements (see section 2.3).

Additional database objects or DDL modifications made to the reference schema scripts in the schema/sql directory in the SPECjAppServer2001 Kit must be disclosed along with the specific reason for the modifications. The base tables and indexes in the reference scripts cannot be replaced or deleted. Views are not allowed. The data types of fields can be modified provided they are semantically equivalent to the standard types specified in the scripts.

Comment 1: Replacing char with varchar would be considered semantically equivalent. Changing the size of a field (for example: increasing the size of a char field from 8 to 10) would not be considered semantically equivalent. Replacing char with integer (for example: zip code) would not be considered semantically equivalent.

Modifications that a customer may make for compatibility with a particular database server are allowed. Changes may also be necessary to allow the benchmark to run without the database becoming a bottleneck, subject to approval by SPEC. Examples of such changes include:

In any committed state the primary key values must be unique within each table. For example, in the case of a horizontally partitioned table, primary key values of rows across all partitions must be unique.

The databases must be populated using the load programs provided as part of the SPECjAppServer2001 kit. The load programs use standard SQL INSERT statements and load all the tables via JDBC, so they should work unchanged across all DBMSs. However, modifications are permitted for porting purposes. All such modifications made must be disclosed in the Submission File.
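For illustration only, the population logic amounts to plain JDBC inserts along the following lines. The connection URL, table, and column names here are assumptions standing in for the reference schema; the actual load programs ship with the Kit:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // Illustrative sketch only; not the actual load program from the Kit.
    public class LoadSketch {
        public static void main(String[] args) throws Exception {
            // The JDBC URL is an assumption; substitute the DBMS-specific one.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:yourdb://host/specdb", "user", "pass")) {
                con.setAutoCommit(false);
                // Standard SQL INSERT, so the statement should port across DBMSs.
                String sql = "INSERT INTO O_Item (i_id, i_name, i_price) VALUES (?, ?, ?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    int numItems = 100; // P, for Ir = 72
                    for (int i = 1; i <= numItems; i++) {
                        ps.setString(1, "ITEM" + i);
                        ps.setString(2, "widget-" + i);
                        ps.setBigDecimal(3, new java.math.BigDecimal("9.99"));
                        ps.executeUpdate();
                    }
                }
                con.commit();
            }
        }
    }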

2.5 Bean Deployment Requirements

The Test Sponsor must run the SPECjAppServer2001 Reference Beans. The SPECjAppServer2001 Reference Beans come in both CMP and BMP versions. The Sponsor can choose to deploy either CMP or BMP or a mix of both. See section 2.2 for Container requirements.

The only changes allowed to the SPECjAppServer2001 Reference Beans are in the classes that implement the bean-managed persistence (BMP). The only changes allowed to the BMP code are for porting changes, similar to section 2.4. All code modifications must appear in the Submission File (see section 2.10.4.3), along with an explanation for the changes.

The deployment descriptors supplied with the SPECjAppServer2001 Reference Beans must be used without any modifications. If deploying in CMP mode, all the finder methods implemented in the BMP code must be specified in the deployment descriptors with the same SQL semantics and implemented transparently by the Container.

Comment: Transparent implementation of the finder methods implies that the Container must generate the code for these methods automatically.

Commit Option A, specified in section 9.1.10 of the EJB 1.1 specification, is not allowed. It is assumed that the database(s) could be modified by external applications.

Optimizations used to avoid ejbStore operations on entity beans are allowed only if the deployer does not need knowledge of the internal implementation of the SPECjAppServer2001 Reference Beans. If such optimizations are not transparent to the deployer, they must be disclosed.

Comment: The intent of this section is to encourage ejbStore optimizations to be done automatically by the container.

2.6 OrderEntry Driver Requirements

2.6.1 Transaction Mix Requirements

The OrderEntry Driver repeatedly performs business transactions in the Customer Domain. Business transactions are selected by the Driver based on the mix shown in Table 2. Since the benchmark is intended to test the transaction handling capabilities of EJB Containers, the mix is update intensive. In the real world, there may be more readers than writers.

The actual mix achieved in the benchmark must be within 5% of the targeted mix for each type of transaction. For example, the newOrder transactions can vary between 47.5% and 52.5% of the total mix. The Driver checks and reports on whether the mix requirement was met.

TABLE 2 : Transaction Mix Requirements

Transaction Type   Percent Mix
newOrder           50%
getOrderStatus     20%
changeOrder        20%
getCustStatus      10%

Transaction Mix Requirements for the OrderEntry Application
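For illustration, the 5% tolerance described above amounts to a check like the following sketch (not the actual Driver code):

    // Illustrative sketch of the Driver's mix check; not the actual Driver code.
    public final class MixCheck {
        // The actual mix must lie within 5% (relative) of the target:
        // e.g. a 50% target allows 47.5% to 52.5%.
        static boolean mixOk(long countOfType, long totalCount, double targetFraction) {
            double actual = (double) countOfType / totalCount;
            return Math.abs(actual - targetFraction) <= 0.05 * targetFraction;
        }

        public static void main(String[] args) {
            System.out.println(mixOk(495, 1000, 0.50)); // true: 49.5% is within [47.5%, 52.5%]
        }
    }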

2.6.2 Response Time Requirements

The OrderEntry Driver measures and records the Response Time of the different types of business transactions. Only successfully completed business transactions in the Measurement Interval are included. At least 90% of the business transactions of each type must have a Response Time of less than the constraint specified in Table 3 below. The average Response Time of each transaction type must not be greater than 0.1 seconds more than the 90% Response Time. This requirement ensures that all users will see reasonable response times. For example, if the 90% Response Time of newOrder transactions is 1 second, then the average cannot be greater than 1.1 seconds. The Driver checks and reports on whether the response time requirements were met.

TABLE 3 : Response Time Requirements

Transaction Type   90% RT (in seconds)
newOrder           2
getOrderStatus     2
changeOrder        2
getCustStatus      2

Response Time Requirements for the OrderEntry Application
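A minimal sketch of the checks described in section 2.6.2, assuming response times collected in seconds (not the actual Driver code):

    import java.util.Arrays;

    // Illustrative sketch of the Response Time checks in section 2.6.2.
    public final class ResponseTimeCheck {
        static boolean ok(double[] secs, double limit90) {
            double[] sorted = secs.clone();
            Arrays.sort(sorted);
            // Value at or below which 90% of the transactions completed.
            double p90 = sorted[(int) Math.ceil(0.9 * sorted.length) - 1];
            double avg = Arrays.stream(sorted).average().orElse(0.0);
            // e.g. if the 90% RT of newOrder is 1.0s, the average may not exceed 1.1s.
            return p90 < limit90 && avg <= p90 + 0.1;
        }
    }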

2.6.3 Cycle Time Requirements

For each business transaction, the OrderEntry Driver selects cycle times from a negative exponential distribution, computed from the following equation, so that the chosen mean Injection Rate is achieved as closely as possible.

Tc = -ln(x) / Ir

where:

ln = natural log (base e)
x  = random number with at least 31 bits of precision, 
     from a uniform distribution such that (0 < x <= 1)
Ir = mean Injection Rate

The distribution is truncated at 5 times the mean. For each business transaction, the Driver measures the Response Time Tr and computes the Delay Time Td as Td = Tc - Tr. If Td > 0, the Driver will sleep for this time before beginning the next transaction. If the chosen cycle time Tc is smaller than Tr, then the actual cycle time (Ta) is larger than the chosen one. The average actual cycle time is allowed to deviate from the targeted one by at most 5%. The Driver checks and reports on whether the cycle time requirements were met.
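A minimal sketch of this selection and delay logic, assuming java.util.Random as the source of uniform random numbers (the Driver's actual implementation may differ):

    import java.util.Random;

    // Illustrative sketch of the cycle-time selection in section 2.6.3.
    public final class CycleTime {
        private static final Random rng = new Random();

        // Tc = -ln(x) / Ir with x uniform in (0, 1], truncated at 5 times the mean (5 / Ir).
        static double chooseCycleTime(double ir) {
            double x = 1.0 - rng.nextDouble(); // nextDouble() returns [0, 1); this maps it to (0, 1]
            return Math.min(-Math.log(x) / ir, 5.0 / ir);
        }

        // Sleep for the Delay Time Td = Tc - Tr, if positive, before the next transaction.
        static void delayBeforeNext(double tc, double trSeconds) throws InterruptedException {
            double td = tc - trSeconds;
            if (td > 0) Thread.sleep(Math.round(td * 1000));
        }
    }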

2.6.4 Miscellaneous Requirements

The table below shows the range of values allowed for various quantities in the OrderEntry application. The Driver will check and report on whether these requirements were met.

TABLE 4 : Miscellaneous OrderEntry Requirements

Quantity                                Targeted Value   Min. Allowed   Max. Allowed
Widget Ordering Rate/sec                14.25 * Ir       13.54 * Ir     14.96 * Ir
LargeOrder Widget Ordering Rate/sec     7.5 * Ir         7.13 * Ir      7.88 * Ir
RegularOrder Widget Ordering Rate/sec   6.75 * Ir        6.41 * Ir      7.09 * Ir
% Large Orders                          10               9.5            10.5
% Orders thru Cart                      50               47.5           52.5
% ChgOrders that were deletes           10               9.0            11.0

2.6.5 Performance Metric

The Metric for the Customer Domain is Transactions/sec, composed of the total count of all business transaction types successfully completed during the Measurement Interval divided by the length of the Measurement Interval in seconds.

2.7 Manufacturing Driver Requirements

2.7.1 Response Time Requirements

The Manufacturing Driver measures and records the time taken for a workorder to complete. Only successfully completed workorders in the Measurement Interval are included. At least 90% of the workorders must have a Response Time of less than 5 seconds. The average Response Time must not be greater than 0.1 seconds more than the 90% Response Time.

2.7.2 Miscellaneous Requirements

The table below shows the range of values allowed for various quantities in the Manufacturing Application. The Manufacturing Driver will check and report on whether the run meets these requirements.

TABLE 5 : Miscellaneous Manufacturing Requirements

Quantity                         Targeted Value   Min. Allowed   Max. Allowed
LargeOrderline Widget Rate/sec   6.75 * Ir        6.075 * Ir     7.425 * Ir
Planned Line Widget Rate/sec     6.75 * Ir        6.075 * Ir     7.425 * Ir

2.7.3 Performance Metric

The metric for the Manufacturing Domain is Workorders/sec, whether produced on the Planned lines or on the LargeOrder lines.

2.8 Driver Rules

The Driver is provided as part of the SPECjAppServer2001 kit. Sponsors are required to use this Driver to run the SPECjAppServer2001 benchmark.

The Driver communicates with the SUT using the RMI interface over a protocol supported by the EJB Container, such as RMI/JRMP, RMI/IIOP, RMI/T3, etc.

The Driver must reside on system(s) that are not part of the SUT.

Comment: The intent of this section is that the communication between the Driver and the SUT be accomplished over the network.

The Driver system(s) must use a single URL to establish communication with the Container in the case of the Centralized Workload and 4 URLs (one per Domain) in the case of the Distributed Workload.

EJB object stubs invoked by the Driver on the Driver system(s) are limited to data marshalling functions, load-balancing and failover capabilities. Pre-configured decisions based on specific knowledge of SPECjAppServer2001 and/or the benchmark configuration are disallowed.

The Driver system(s) may not perform any processing ordinarily performed by the SUT, as defined in section 2.12. This includes, but is not limited to:

The Driver records all exceptions in error logs. The only expected errors are those related to transaction consistency, when a transaction may occasionally roll back due to conflicts. Any other errors that appear in the logs must be explained in the Submission File.

2.9 Measurement Interval Requirements

The Orders and Manufacturing Applications must be started simultaneously at the start of a benchmark run. The Measurement Interval must be preceded by a ramp-up period of at least 10 minutes at the end of which a steady state throughput level must be reached. At the end of the Measurement Interval, the steady state throughput level must be maintained for at least 5 minutes, after which the run can terminate.

2.9.1 Steady State

The reported metric must be computed over a Measurement Interval during which the throughput level is in a steady state condition that represents the true sustainable performance of the SUT. Each Measurement Interval must be at least 30 minutes long and should be representative of an 8-hour run.

Comment: The intent is that any periodic fluctuations in the throughput or any cyclical activities, e.g. JVM garbage collection, database checkpoints, etc. be included as part of the Measurement Interval.

2.9.2 Reproducibility

To demonstrate the reproducibility of the steady state condition during the Measurement Interval, a minimum of one additional (and non-overlapping) Measurement Interval of the same duration as the reported Measurement Interval must be measured, and its BOPS must be within 5% of the reported BOPS.

2.10 Transaction Property Requirements

The Atomicity, Consistency, Isolation and Durability (ACID) properties of transaction processing systems must be supported by the system under test during the running of this benchmark.

2.10.1 Atomicity Requirements

The system under test must guarantee that database transactions are atomic; the system will either perform all individual operations on the data, or will assure that no partially-completed operations leave any effects on the data.

2.10.2 Atomicity Tests for the Centralized Workload

2.10.2.1 Atomicity Test 1

a. Choose a customer who has bad credit by looking in the C_CUSTOMER table for a customer with the C_CREDIT field equal to 'BC'.

b. Modify the debug level in the OrderEnt bean deployment to 4 so the code will print the order ID as soon as it generates it.

c. Enter a new order for this customer using the web client application, distributed as part of the SPECjAppServer2001 Kit. Note the order ID printed by the bean code. The transaction should fail, generating an InsufficientCreditException.

d. Retrieve the status of the order ID noted in step c. The order should not exist.

e. Query the database table O_ORDERLINE for rows where OL_O_ID matches the order ID printed in step c. There should be no rows returned.

2.10.2.2 Atomicity Test 2

a. Choose a customer with good credit by looking in the C_CUSTOMER table for a customer with the C_CREDIT field equal to 'GC'.

b. Enter a new order for this customer using the web client application. The transaction should succeed. Note the order ID returned.

c. Retrieve the status of the order ID noted in step b. The order, along with the orderlines entered in step b, should be displayed.

2.10.3 Atomicity Tests for the Distributed Workload

In addition to performing atomicity tests 1 and 2 above, a third test must be performed as described below.

2.10.3.1 Atomicity Test 3

a. Modify the debug level in the OrderEnt bean deployment to 4 so the code will print the order ID and orderLine IDs as soon as they are generated.

b. Do the same for the LargeOrderEnt bean in the Manufacturing Domain.

c. Change the LargeOrderEnt bean in the Manufacturing Domain to add the following to the ejbStore method: entityContext.setRollbackOnly(); (a sketch of this change appears after step e).

d. Enter a new order for any customer, ensuring that it is a largeorder. Note the values of the order ID, orderLine IDs and largeOrder ID displayed.

e. The transaction should roll back. Verify that rows with the IDs noted in step d do not exist in the O_orders, O_orderline and M_largeorder tables.
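For illustration, the change in step c amounts to the following sketch; the elided body stands in for whatever persistence logic the LargeOrderEnt BMP implementation actually performs:

    // Sketch of the Atomicity Test 3 modification to LargeOrderEnt (step c above).
    public void ejbStore() {
        // ... LargeOrderEnt's normal persistence logic runs here ...

        // Mark the distributed transaction rollback-only so that no order,
        // orderline or largeorder rows from this transaction survive in any
        // domain's database.
        entityContext.setRollbackOnly();
    }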

2.10.4 Consistency Requirements

This section describes the transaction isolation and consistency requirements. One can choose to implement the requirements in this section by any mechanism supported by the Container.

2.10.4.1 Database Isolation

All ECtransactions must take the database from one consistent state to another. The various isolation levels are described in ANSI SQL and J2SE documentation for java.sql.Connection. The isolation levels are also described in the TPC-C specification available from http://www.tpc.org. All database transactions must have an isolation level of READ_COMMITTED or higher; i.e. dirty reads are not allowed.

2.10.4.2 Logical Isolation Definitions

For the purposes of specifying consistency, we use the following logical isolation levels:

These isolation levels are semantically equivalent to the ANSI SQL isolation levels but are defined on a per-entity basis. The logical isolation levels do not imply the use of the corresponding database isolation level. For example, it is possible to use the READ_COMMITTED database isolation level and optimistic techniques such as verified finders, reads, updates and deletes, or pessimistic locking using SELECT FOR UPDATE type semantics to implement these logical isolation levels.

Comment 1: If an entity is deployed with a logical isolation of REPEATABLE_READ (or higher), it must be ensured that in any transaction where this entity is read, updated or deleted, the transaction will never be committed if the entity was updated or deleted in the database (by another committed transaction) since it was first read by the transaction. Note that optimizations to avoid database updates to entities that have not been changed in a given transaction are not valid if the suppression of updates results in an effective isolation level lower than REPEATABLE_READ. Additionally, if the container pre-loads entity state while executing finder methods (to avoid re-selecting the data at ejbLoad time), the mechanism normally used to ensure REPEATABLE_READ must still be effective, unless another mechanism is provided to ensure REPEATABLE_READ in this case. For example, if SELECT FOR UPDATE would normally be used at ejbLoad time, then SELECT FOR UPDATE should be used when executing those finder methods which pre-load entity state.

Comment 2: If database isolation level is used to implement the logical isolation level, it should be set to the highest logical isolation level of all the entities participating in the transaction. See Appendix A for a description of the entities accessed in each of the transactions.

Comment 3: If an entity is deployed with a logical isolation of READ_COMMITTED, and if that entity is not changed in a given transaction, then the container must not issue a database update that would have the effect of losing any external updates that are applied while the transaction is executing. If the container does not have the ability to suppress unnecessary updates that could interfere with external updates, then all entities must be deployed using the REPEATABLE_READ isolation level (or higher).
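As one concrete illustration of the optimistic techniques mentioned above, a verified update enforces logical REPEATABLE_READ over a READ_COMMITTED connection by re-checking, in the UPDATE itself, that the row still carries the value first read. The table and column names below are assumptions modeled on the reference schema; this is one possible mechanism, not the only compliant one:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Illustrative verified-update sketch for logical REPEATABLE_READ on top of
    // a READ_COMMITTED connection. Table and column names are assumptions.
    public final class VerifiedUpdate {
        static void updateQuantity(Connection con, String partId,
                                   int qtyFirstRead, int newQty) throws SQLException {
            String sql = "UPDATE M_Inventory SET in_qty = ? WHERE in_p_id = ? AND in_qty = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, newQty);
                ps.setString(2, partId);
                ps.setInt(3, qtyFirstRead); // value observed when the entity was first read
                if (ps.executeUpdate() == 0) {
                    // Another committed transaction changed the row since it was read;
                    // the caller must abort, e.g. by marking the transaction
                    // rollback-only (conflict-induced rollbacks are permitted,
                    // see Comment 2 in section 2.10.4.3).
                    throw new SQLException("Optimistic conflict on M_Inventory row " + partId);
                }
            }
        }
    }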

2.10.4.3 Logical Isolation Requirements

In all cases where a logical isolation level is specified, this is the minimum required. Use of a higher logical isolation level is permitted.

The following entities are infrequently updated with no concurrent updates and can be configured to run with a logical isolation level of READ_COMMITTED:

All other entities must run with a logical isolation level of REPEATABLE_READ.

Comment: In order to preserve referential integrity between OrderEnt and OrderLineEnt, all access to order lines within a given transaction is preceded by access to the corresponding order.

The method used to achieve the requirements in this section must be disclosed.

Comment 1: The BMP implementation of the SPECjAppServer2001 Reference Beans uses optimistic techniques for all entities that must be run with the REPEATABLE_READ isolation level.
Comment 2: Transaction rollbacks caused by conflicts when using concurrency control techniques are permitted.

2.10.5 Durability Requirements

Transactions must be durable from any single point of failure on the SUT. In particular, distributed 2-Phase Commit transactions must be durable. Durability implies that all committed transactions before the failure must be recoverable.

Comment: Durability from a single point of failure can be achieved by ensuring that there is a backup device (disk or tape) for the database and the logs can withstand a single point of failure. This is typically implemented by mirroring the logs onto a separate set of disks.

2.11 Supplier Emulator Rules

The Supplier Emulator is provided as part of the SPECjAppServer2001 Kit and can be deployed on any Web Server that supports Servlets 2.1.

The Supplier Emulator must reside on system(s) that are not part of the SUT. The Supplier Emulator may reside on one of the Driver systems.

Comment: The intent of this section is that the communication between the Supplier Emulator and the SUT be accomplished over the network.

2.12 System Under Test (SUT) Requirements

The SUT comprises all components which are being tested. This includes network connections, Application Servers/Containers, Database Servers, etc.

2.12.1 SUT Components

The SUT consists of:

Comment 1: Any components which are required to form the physical TCP/IP connections (commonly known as the NIC, Network Interface Card) from the host system(s) to the client machines are considered part of the SUT.
Comment 2: A basic configuration consisting of one or more switches between the Driver and the SUT is not considered part of the SUT. However, if any software/hardware is used to influence the flow of traffic beyond basic IP routing and switching, it is considered part of the SUT. For example, if DNS Round Robin is used to implement load balancing, the DNS server is considered part of the SUT and so cannot run on a driver client.

2.12.2 Database Services

The SUT services remote method calls from the Driver and returns results generated by the SPECjAppServer2001 Reference Beans which may involve information retrieval from a RDBMS. The database must be accessed only from the SPECjAppServer2001 Reference Beans (or the Container acting on behalf of a bean), using JDBC.

The SUT must not perform any caching operations beyond those normally performed by the servers (EJB Containers, Database Servers etc.) which are being used.

Comment: The intention is to allow EJB Container and Database Server caching to work normally but not to allow the implementation to take advantage of the limited nature of the benchmark and to cache information which would normally be retrieved from the Servers.

Any software that is required to build and deploy the SPECjAppServer2001 Reference Beans is considered part of the SUT.

2.12.3 Storage

The SUT must have sufficient on-line disk storage to support any expanding system files and the durable database population resulting from executing the SPECjAppServer2001 transaction mix for 8 hours at the reported BOPS.


Section 3 - Reporting Results

3.1 SPECjAppServer2001 Performance Metric

The primary metric of the SPECjAppServer2001 benchmark is Business Operations Per Second (BOPS).

The overall metric for the SPECjAppServer2001 benchmark is calculated by adding the metrics of the OrderEntry Application in the Customer Domain and the Manufacturing Application in the Manufacturing Domain as:

BOPS = Transactions/sec + Workorders/sec

All reported BOPS must be measured, rather than estimated, and expressed to exactly two decimal places, rounded to the hundredth place.

The performance metric must be reported with the category of the SUT that was used to generate the result (i.e., @SingleNode, @DualNode, @MultipleNode, @Distributed). See section 3.5 for a description of the categories. For example, if a measurement yielded 123.45 BOPS on a Single Node, this must be reported as 123.45 BOPS@SingleNode.

3.2 Required Reporting

The frequency distribution of the Response Times of all business transactions in the Customer and Manufacturing Domains, started and completed during the Measurement Interval must be reported in a graphical format. In each graph, the x-axis represents the Response Time and must range from 0 to five times the required 90th percentile Response Time (N). This 0 to 5N range must be divided into 100 equal length intervals. One additional interval will include the Response Time range from 5N to infinity. All 101 intervals must be reported. The y-axis represents the frequency of each type of business transaction at a given Response Time range with a granularity of at least 10 intervals. An example of such a graph is shown below.

FIGURE 1: Sample Response Time Graph
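For illustration, the interval bookkeeping this requirement implies can be sketched as follows, where n is the required 90th percentile Response Time from Table 3 (a sketch, not the reporter's actual code):

    // Illustrative binning for the frequency distribution graphs: 100 equal
    // intervals over [0, 5N) plus one overflow interval for [5N, infinity).
    public final class Histogram {
        static int[] bins(double[] responseTimes, double n) {
            int[] bins = new int[101];
            double width = 5.0 * n / 100.0;
            for (double rt : responseTimes) {
                int idx = (int) (rt / width);
                bins[Math.min(idx, 100)]++; // Response Times >= 5N land in the 101st interval
            }
            return bins;
        }
    }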

A graph of the workorder throughput versus elapsed time (i.e., wall clock time) must be reported for the Manufacturing Application for the entire Test Run. The x-axis represents the elapsed time from the start of the run. The y-axis represents the throughput in workorders. At least 60 different intervals must be used with a maximum interval size of 30 seconds. The opening and the closing of the Measurement Interval must also be reported. An example of such a graph is shown below.

FIGURE 2: Sample Throughput Graph

3.3 Benchmark Optimization Rules

Benchmark specific optimization is not allowed. Any optimization of either the configuration or products used on the SUT must improve performance for a larger class of workloads than that defined by this benchmark and must be supported and recommended by the provider. Optimizations must have the vendor's endorsement and should be suitable for production use in an environment comparable to the one represented by the benchmark. Optimizations that take advantage of the benchmark's specific features are forbidden. Examples of inappropriate optimization include, but are not limited to, taking advantage of the specific SQL code used, the sizes of various fields or tables, or the number of beans deployed in the benchmark.

3.4 General Availability

All hardware and software used must be orderable by customers. For any product not already generally released, the Submission File must include a committed general delivery date. That date must not exceed 3 months beyond the Full Disclosure submittal date. However, if Java and/or J2EE related licensing issues cause a change in software availability date after publication date, the change will be allowed to be made without penalty, subject to subcommittee review.

All products used must be the proposed final versions and not prototypes. When the product is finally released, the product performance must not decrease by more than 2% of the published BOPS. If the submitter later finds the performance of the released system to be more than 2% lower than that reported for the pre-release system, then the submitter is requested to report a corrected test result.

Comment 1: The intent is to test products that customers will use, not prototypes. Beta versions of products can be used, provided that General Availability (GA) is within 3 months.

Comment 2: The 2% degradation limit only applies to a difference in performance between the tested product and the GA product. Subsequent GA releases (to fix bugs, etc.) are not subject to this restriction.

3.5 Categorization of Results

This section describes how the SPECjAppServer2001 results are categorized. Any given configuration on which results are reported will belong to a single category. In the event of ambiguity as to which category a particular result belongs to, the SPEC Java Review committee will determine the category that most closely reflects the intent of the rules in this section.

Comparison across different categories is a violation of SPEC Fair Use Rules, see section 3.7.3.

3.5.1 Definition of Node

Categories are defined in terms of 'Nodes'. A Node is defined as a system running a single OS image.

3.5.2 Definition of Coherent Memory System

System CPUs running a coherent memory system can read or write to the same memory subsystem as other processors in the configuration.

3.5.3 Centralized Workload Categories

The Centralized workload is defined in section 2.3.3. Results reported using this workload fall into one of the three categories as defined below:

3.5.3.1 Single Node System

Configurations in which a single Node with a coherent memory system runs both the Application Server and Database instances fall into the Single Node System category. Distributed operating system clusters are not allowed in this category.

3.5.3.2 Dual Node System

Configurations in which two Nodes with coherent memory systems, one running the Application Server and the other running the Database, fall into the Dual Node System category. Distributed operating system clusters are not allowed in this category.

3.5.3.3 Multiple Node System

Configurations containing three or more Nodes fall into the Multiple Node System category. Also, systems running a distributed operating system cluster or non-coherent memory system fall into this category.

3.5.4 Distributed Workload Category

The Distributed workload is defined in section 2.3.3. All distributed workload configurations fall into a 'Distributed System' category.

3.6 Result Disclosure and Submission

In order to publicly disclose SPECjAppServer2001 results, the submitter must adhere to these reporting rules in addition to having followed the run rules described in this document. The goal of the reporting rules is to ensure the system under test is sufficiently documented such that someone could reproduce the test and its results.

Compliant runs need to be submitted to SPEC for review and approval prior to public disclosure. Submissions must include the Submission File, a Configuration Diagram, and the Full Disclosure Archive for the run (see section 5.1). See section 5.3 of the SPECjAppServer2001 User Guide for details on submitting results to SPEC.

Test results that have not been approved and published by SPEC must not use the SPECjAppServer metrics (BOPS and Price/BOPS) in public disclosures.

3.7 Result Usage

SPECjAppServer2001 results must always be quoted using the performance metric, the price/performance metric, and the category in which the results were generated.

3.7.1 Estimates

Estimates are not allowed.

3.7.2 Comparison to Other Benchmarks

SPECjAppServer2001 results must not be publicly compared to results from any other benchmark. This would be a violation of the SPECjAppServer2001 Run and Reporting Rules and, in the case of the TPC benchmarks, a serious violation of the TPC "fair use policy."

Results between different categories (see section 3.5) within SPECjAppServer2001 may not be compared; any attempt to do so will be considered a violation of SPEC Fair Use Rules.

3.7.3 Fair Use

Performance comparisons may be based only upon the SPEC defined metrics (SPECjAppServer2001 BOPS@Category or Price/BOPS@Category). Other information from the result page may be used to differentiate systems, i.e., used to define a basis for comparing a subset of systems based on some attribute like number of CPUs or memory size.

Conversion of the Price/BOPS@Category to other currencies for purposes of competitive comparison, when intended for public view or use, is strictly prohibited.

When competitive comparisons are made using SPECjAppServer2001 benchmark results, SPEC expects that the following template be used:

SPECjAppServer is a trademark of the Standard Performance Evaluation Corp. (SPEC). Competitive numbers shown reflect results published on www.spec.org as of (date). [The comparison presented is based on (basis for comparison).] For the latest SPECjAppServer2001 results visit http://www.spec.org/jAppServer2001.

(Note: [...] above required only if selective comparisons are used.)

Example:

SPECjAppServer is a trademark of the Standard Performance Evaluation Corp. (SPEC). Competitive numbers shown reflect results published on www.spec.org as of August 12, 2002. The comparison presented is based on best performing 4-CPU servers currently shipping by Vendor 1, Vendor 2 and Vendor 3. For the latest SPECjAppServer2001 results visit http://www.spec.org/jAppServer2001.

The rationale for the template is to provide fair comparisons, by ensuring that:

3.8 Research and Academic Usage

SPEC encourages use of the SPECjAppServer2001 benchmark in academic and research environments. The researcher is responsible for compliance with the terms of any underlying licenses (Application Server, DB Server, hardware, etc.).

3.8.1 Restrictions

It is understood that experiments in such environments may be conducted in a less formal fashion than that demanded of licensees submitting to the SPEC web site. SPEC encourages researchers to obey as many of the run rules as practical, even for informal research. If research results are being published, SPEC requires:

SPEC reserves the right to ask for a full disclosure of any published results.

3.8.2 Disclosure

Public use of SPECjAppServer benchmark results is bound by the SPEC OSSC Fair Use Guidelines and the SPECjAppServer specific Run and Reporting Rules (this document). All publications must clearly state that these results have not been reviewed or approved by SPEC, using text equivalent to this:

SPECjAppServer is a trademark of the Standard Performance Evaluation Corp. (SPEC). The SPECjAppServer results or findings in this publication have not been reviewed or approved by SPEC, therefore no comparison nor performance inference can be made against any published SPEC result. The official web site for SPECjAppServer2001 is located at http://www.spec.org/osg/jAppServer2001.

This disclosure must precede any results quoted from the tests. It must be displayed in the same font as the results being quoted.


Section 4 - Pricing

4.1 Price/Performance Metric

In addition to the performance metric defined in section 3.1, SPECjAppServer2001 includes a price/performance metric defined as Price/BOPS which is the total price of the SUT in the local currency divided by the reported BOPS.

The price/performance metric is rounded up (ceiling function) to the next available currency unit such that fractions of the lowest denomination of local currency are not allowed (e.g., in the US it would be cents, and fractions of a cent are not to be included in the metric). For example, if the total price is US$ 5,734,417 and the reported throughput is 105.12 BOPS, then the price/performance is US$ 54,551.16/BOPS (54,551.151 rounded up). The SPECjAppServer2001 reporter uses the total SUT cost and the BOPS to compute the metric for the report page.
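For illustration, the ceiling arithmetic in the example above can be reproduced as follows (a sketch, not the reporter's actual code):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    // Illustrative computation of Price/BOPS with ceiling rounding to cents.
    public final class PricePerf {
        public static void main(String[] args) {
            BigDecimal totalPrice = new BigDecimal("5734417"); // US$
            BigDecimal bops = new BigDecimal("105.12");
            // Round up to the least significant unit of the currency (cents in the US).
            System.out.println(totalPrice.divide(bops, 2, RoundingMode.CEILING)); // 54551.16
        }
    }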

In addition, for pricing challenge procedures see Appendix C.

4.2 Priced Components

4.2.1 SUT

The entire price of the SUT (see section 2.12) must be included, including all hardware, software, and support for a three year period.

All hardware components priced must be new and not reconditioned or previously owned, and the price must be the purchase price (no leasing). The software may use term limited pricing (i.e., software leasing), provided there are no additional conditions associated with the term limited pricing. If term limited pricing is used, the price must be for a minimum of three years. The three year support period must cover both hardware maintenance and software support.

The number of users for SPECjAppServer2001 is 8 * Ir (where 5 * Ir are Internet users and 3 * Ir are Intranet users). Any usage pricing for the above number of users should be based on the pricing policy of the company supplying the priced component.

4.2.2 Additional Components

Additional components such as operator consoles and backup devices must also be priced, if explicitly required for the operation, administration, or maintenance of the SUT.

If software needs to be loaded from a particular device either during installation or for updates, the device must be priced.

4.2.3 Support

Hardware maintenance and software support must be priced for 7 days/week, 24 hours/day coverage, either on-site, or if available as standard offering, via a central support facility.

If a central support facility is priced, then all hardware and software required to connect to the central support must be installed and functional on the SUT during the measurement run and priced.

The response time for hardware maintenance requests must not exceed 4 hours on any component whose replacement is necessary for the SUT to return to the tested configuration.

Pricing of spares in lieu of the hardware maintenance requirements is allowed if the part to be replaced can be identified as having failed by the customer within 4 hours. An additional 10% of the designated part, with a minimum of 2, must be priced. A support service for the spares which provides replacement on-site within 7 days must also be priced for the support period.

Software support requests must include problem acknowledgement within 4 hours. No additional charges will be incurred for the resolution of software defects. Problem resolution for more than 10 non-defect problems per year is permitted to incur additional charges. Software support must include all available maintenance updates over the support period.

4.3 Pricing Rules

The intent of the pricing rules is to price the tested system at the full price a customer would pay. Assumptions of other purchases made by this customer are not allowed. This is a one time, stand-alone purchase. For ease of benchmarking, the priced system may include hardware components that are different from the tested system, as long as the substituted components perform equivalently or better in the benchmark. Any substitutions must be disclosed in the price sheet. For example, disk drives with lower capacity or speed in the tested system can be replaced by faster ones in the priced configuration. However, it is not permissible to replace key components such as CPU, memory or any software.

4.3.1 General Availability

See section 3.4.

4.3.2 Discounts

All pricing used must be generally available (list) pricing. No form of special purpose discounts or promotional pricing is allowed.

It is permissible to use package pricing as long as this package is a standard offering. The entire package must be priced in the SUT.

Comment: The intent is to include a price that is available to any customer without the need to request a lower price.

4.3.3 Currency

All pricing should be in the currency unit which a customer in their country would use to pay for the system and reflect local retail pricing. Prices should be rounded up (ceiling function) to the least significant digit of the currency unit being used. For example, all U.S. pricing should be rounded up to the next cent.

4.3.4 Sources and Effective Dates

All pricing sources and the effective date of the prices must be disclosed. For currently available components, the effective date is the submission date. For components not yet available, the price reported must be the initial General Availability (GA) price and the effective date is the GA date. Changes in component prices would require a resubmission (published results cannot be updated).

Pricing must be guaranteed for 60 days from the later of the component's GA date or the result's publication date. In addition, revised pricing data must be provided to SPEC for the first 90 days should any component incur a price increase that alters price/performance standings (see section 5.6.6).

To facilitate the evaluation of pricing in SPEC benchmarks, the submitter agrees to provide a current pricing disclosure on March 14, 2003 to the Open Systems Steering Committee (email info@spec.org for contact info) if the price has increased by 2% or more. Otherwise on that date, the submitter need only send email to the OSSC stating that the current pricing is still valid. If a performance critical component is no longer available so that the pricing can not be reconfirmed, that will be conveyed to the OSSC.

Comment: Typographical changes necessary during review would not require a resubmission.

4.3.5 Level of Detail

All items supplied by a third party (i.e., not the Test Submitter) must be explicitly stated. Each third party supplier's items and prices must be listed separately.

Pricing shown in the Price Sheet must reflect the level of detail a customer would see on an itemized billing.


Section 5 - Full Disclosure

A Full Disclosure is required in order for results to be considered compliant with the SPECjAppServer2001 benchmark specification.

Comment 1: The intent of this disclosure is to be able to replicate the results of a submission of this benchmark given the equivalent hardware, software, and documentation.

Comment 2: In the sections below, when there is no specific reference to where the disclosure must occur, it must occur in the Submission File. Disclosures in the Archive are explicitly called out.

5.1 Definition of Terms

The term Full Disclosure refers to the information that must be provided when a benchmark result is reported.

The term Configuration Diagram refers to the picture in a common graphics format that depicts the configuration of the SUT. The Configuration Diagram is part of a Full Disclosure.

The term Full Disclosure Archive (or "Archive" for short) refers to the soft-copy archive of files that is part of a Full Disclosure.

The term Submission File refers to the ASCII file that contains the information specified in this section, to which the "result.props" file from the run must be appended. The Submission File is part of a Full Disclosure.

The term Benchmark Results Page refers to the report in HTML or ASCII format that is generated from the Submission File. The Benchmark Results Page is the format used when displaying results on the SPEC web site. The Benchmark Results Page in HTML format will provide a link to the Configuration Diagram and the Full Disclosure Archive.

5.2 Configuration Diagram

A Configuration Diagram of the entire configuration (including the SUT, Supplier Emulator, and load drivers) must be provided in PNG, JPEG or GIF format. The diagram should include, but is not limited to:

5.3 Full Disclosure Archive

The Full Disclosure Archive contains the following items:

The Archive must be in ZIP, TAR or JAR format.

5.4 Software Configuration

5.4.1 Software Products

All commercially available software products used must be identified in the Submission File. Settings, along with a brief description, must be provided for all customer-tunable parameters and options which have been changed from the defaults found in actual products, including but not limited to:

5.4.1.1 Instances of the EJB Container/JVMs

The number of instances of the EJB Container and/or the number of JVMs used must be disclosed in the system.sw.EJB_Container.instances section of the Submission File.

5.4.1.2 Argument Passing Semantics

The method by which adherence to section 18.2.3 of the EJB 1.1 specification ("Argument passing semantics") is assured must be disclosed in the benchmark.argument_passing_semantics section of the Submission File.

5.4.2 Benchmark Version

The version number of the SPECjAppServer2001 Kit used to run the benchmark must be included in the Submission File. The version number is written to the result.props file with the configuration and result information.

5.4.3 Date J2EE Server Passed CTS

The date that the J2EE Compatible Product passed (or is expected to pass) the J2EE Compatibility Test Suite (CTS) must be disclosed in the system.sw.EJB_Container.date_passed_CTS section of the Submission File.

5.4.4 Load Orders Injection Rate

The Orders Injection Rate used to load the database(s) must be disclosed in the benchmark.load.injection_rate section of the Submission File.

5.4.5 Database Setup

The Full Disclosure Archive must include all table definition statements and all other statements used to set up the database. The scripts used to create the database should be included in the Full Disclosure Archive under the "Schema" sub-directory.

5.4.6 Schema Modifications

If the schema was changed from the reference one provided in the Kit (see section 2.4), the reason for the modifications must be disclosed in the benchmark.schema_modifications section of the Submission File.

5.4.7 Load Program Modifications

If the Load Programs in the SPECjAppServer2001 kit were modified (see section 2.4), all such modifications must be disclosed in the benchmark.load_program_modifications section of the Submission File and the modified programs must be included in the Full Disclosure Archive.

5.4.8 Database Logical Volumes

All scripts/programs used to create any logical volumes for the database devices must be included as part of the Full Disclosure Archive. The distribution of tables and logs across all media must be explicitly depicted.

5.4.9 Bean Persistence

The type of persistence, whether CMP, BMP or mixed mode used by the EJB Containers must be disclosed in the benchmark.persistence_mode_used section of the Submission File. If mixed mode is used, the list of beans deployed using CMP and BMP must be enumerated.

5.4.10 Reference Bean Modifications

If the SPECjAppServer2001 Reference Beans were modified (see section 2.5), a statement describing the modifications must appear in the benchmark.reference_bean_modifications section of the Submission File and the modified code must be included in the Archive.

5.4.11 Deployment Descriptors

All Deployment Descriptors used must be included in the Full Disclosure Archive under the "Deploy" sub-directory. Any vendor-specific tools, flags or properties used to perform ejbStore optimizations that are not transparent to the user must be disclosed (see section 2.5) in the system.sw.EJB_Container.tuning section of the Submission File.

5.4.12 Reproducibility Run BOPS

The BOPS from the reproducibility run must be disclosed (see section 2.9.2) in the result.reproducibility_run.bops section of the Submission File. The entire output directory from the reproducibility run must be included in the Full Disclosure Archive in a directory named "RepeatRun".

5.4.13 Response Times Frequency Distribution Graph

A graph, in PNG, JPEG or GIF format, of the frequency distribution of response times for all the transactions (see section 3.2) must be included in the Full Disclosure Archive.

5.4.14 WorkOrder Throughput Graph

A graph, in PNG, JPEG or GIF format, of the workorder throughput versus elapsed time (see section 3.2) must be included in the Full Disclosure Archive.

5.4.15 Atomicity Test Scripts

The scripts/programs used to run the Atomicity tests and their outputs must be included in the Full Disclosure Archive in a directory named "Atomicity".

5.4.16 Isolation Requirements

The method used to meet the isolation requirements in section 2.10.4 must be disclosed in the benchmark.isolation_requirement_info section of the Submission File.

5.4.17 Durability Requirements

The method used to meet the durability requirements in section 2.10.5 must be disclosed in the benchmark.durability_requirement_info section of the Submission File.

5.4.18 Reference Bean Deployment

All steps used to build and deploy the SPECjAppServer2001 Reference Beans must be disclosed in a file called "deployCmds.txt" within the "Deploy" sub-directory of the Full Disclosure Archive.

5.4.19 xerces.jar

If the xerces.jar package in the jars sub-directory of the SPECjAppServer2001 Kit was not used, the reason for this should be disclosed in the benchmark.other section of the Submission File. The version and source of the actual package used should also be disclosed.

5.5 SUT and Driver

5.5.1 Network Optimization

If any software/hardware is used to influence the flow of network traffic beyond basic IP routing and switching, the additional software/hardware and settings (see section 2.12) must be disclosed in the benchmark.other section of the Submission File.

5.5.2 Driver Input Parameters

The input parameters to the Driver must be disclosed by including the following files used to run the benchmark in the Full Disclosure Archive:

If the Launcher package was modified, its source must be included in the Full Disclosure Archive.

5.5.3 Network Bandwidth

The bandwidth of the network(s) used in the tested configuration must be disclosed in the benchmark.other section of the Submission File.

5.5.4 Network Protocol

The protocol used by the Driver to communicate with the SUT (e.g., RMI/IIOP) must be disclosed in the system.sw.EJB_Container.protocol section of the Submission File.

5.5.5 Load Balancing

The hardware and software used to perform load balancing must be disclosed in the benchmark.other section of the Submission File. If the driver systems perform any load-balancing functions as defined in section 2.8, the details of these functions must also be disclosed.

5.5.6 Driver System(s) Description

The number and types of driver systems used, along with the number and types of processors, memory and network configuration must be disclosed in the system.hw section of the Submission File.

5.5.7 Driver JDK Version

The version of the JDK used on the Driver system(s) must be disclosed in the system.hw.notes section of the Submission File.

5.5.8 Errors in the Driver Error Logs

Any errors that appear in the Driver error logs must be explained in the notes section of the Submission File.

5.5.9 Storage Requirements

The method used to meet the storage requirements of section 2.12.3 must be disclosed in the benchmark.storage_requirement_info section of the Submission File.

5.6 Price Sheet

A detailed list of the hardware and software used in the priced system must be reported in the Price Sheet, along with a Statement of Responsibility for those prices. Each separately orderable item must have a vendor part number (or unique identifier), a description, a release/revision level, and either a general availability status or a committed delivery date. If package pricing is used, the unique identifier of the package and a description identifying each component of the package must be disclosed. Pricing sources and the effective date of the prices must also be reported.

5.6.1 Hardware and Software

The total price of the entire configuration must be reported, including hardware, software, and maintenance charges. Separate component pricing is recommended.

5.6.2 General Availability

The committed delivery date for general availability (availability date) of products used in the price calculations must be reported. When the priced system includes products with different availability dates, the reported availability date for the priced system must be the date at which all components are committed to be available.

5.6.3 Usage Level Pricing

For any usage pricing, the sponsor must disclose:

Comment: Usage pricing may include, but is not limited to, the operating system, EJB server and database server software.

5.6.4 Subtotals

System pricing should include subtotals for the following components: Server Hardware, Server Software, and Network Components used.

5.6.5 Third Party Prices

System pricing must include line-item indication where non-submitting companies' brands are used, as well as line-item indication of third-party pricing.

5.6.6 Statement of Responsibility

The quoted statement below must be added to the disclosure:

"The following submitter: [name of submitter] guarantees the accuracy of the supplied pricing data for a minimum of 60 days from [date (later of general availability or publication dates)], and accepts the responsibility to provide revised pricing data to SPEC during the first 90 days should the price increase by 2% or more."

Comment: Typographical changes necessary during review would not require a resubmission.


Appendix A - SPECjAppServer2001 Transactions

This appendix lists all of the ECtransactions that begin a new transaction and identifies the entities that are accessed in each transaction. This information can be used to implement the isolation and consistency requirements in section 2.10.4. In the case of any discrepancy between this section and the actual code in the SPECjAppServer2001 Kit, the transaction behavior in the Kit prevails.

A.1 Transaction Definitions

The following grammar defines a declarative assertion language that is used to define the transactions in this section:

transaction ::= transaction method-name
[ calls finder-list ]
[ reads entity-list ]
[ creates entity-list ]
[ updates entity-list ]
[ deletes entity-list ]
end transaction

method-name ::= ejb-jar-display-name . ejb-name . method-suffix
entity-name ::= ejb-jar-display-name . ejb-name
finder-name ::= entity-name . method-suffix
finder-list ::= finder-name [, finder-name ]*
entity-list ::= entity-name [, entity-name ]*
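
To make the grammar concrete, the sketch below (an illustration only; the type TransactionAssertion and its field names are hypothetical, not part of the SPECjAppServer2001 Kit) models one parsed assertion in Java and constructs the first entry of section A.3:

    import java.util.List;

    // Hypothetical model of one parsed transaction assertion (illustration only).
    // Each string is a fully qualified name per the grammar above, e.g.
    // "Mfg.LargeOrderEnt.ejbFindAll" for a finder, "Mfg.LargeOrderEnt" for an entity.
    public record TransactionAssertion(
            String methodName,      // ejb-jar-display-name . ejb-name . method-suffix
            List<String> calls,     // finder-list
            List<String> reads,     // entity-list
            List<String> creates,   // entity-list
            List<String> updates,   // entity-list
            List<String> deletes) { // entity-list

        // Example: transaction Mfg.LargeOrderSes.findLargeOrders from section A.3.
        public static TransactionAssertion findLargeOrders() {
            return new TransactionAssertion(
                    "Mfg.LargeOrderSes.findLargeOrders",
                    List.of("Mfg.LargeOrderEnt.ejbFindAll"), // calls
                    List.of("Mfg.LargeOrderEnt"),            // reads
                    List.of(), List.of(), List.of());        // no creates, updates, deletes
        }
    }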

A.2 Transaction Semantics

transaction method-name

Specifies a session bean or entity bean method that is used to initiate a transaction. The method either has the RequiresNew transaction attribute, or it has the Required transaction attribute and is sometimes (or always) invoked by its callers with no transaction context.

The method name may include a suffix to differentiate overloaded methods (e.g., scheduleWorkOrder__String__int__Date in section A.3, where the parameter types form the suffix).
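
As a hedged sketch (not part of the Kit; the names TransAttribute and beginsNewTransaction are hypothetical), the rule above for when an invocation initiates a new transaction can be expressed in Java as:

    enum TransAttribute { NOT_SUPPORTED, SUPPORTS, REQUIRED, REQUIRES_NEW, MANDATORY, NEVER }

    // Illustration only: whether a bean method invocation begins a new transaction.
    // attr is the method's transaction attribute from its deployment descriptor;
    // callerHasTransaction reflects the caller's transaction context.
    static boolean beginsNewTransaction(TransAttribute attr, boolean callerHasTransaction) {
        if (attr == TransAttribute.REQUIRES_NEW) {
            return true;                  // always runs in its own new transaction
        }
        if (attr == TransAttribute.REQUIRED) {
            return !callerHasTransaction; // starts one only if the caller supplied none
        }
        return false;                     // other attributes never initiate one
    }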

calls finder-list

The calls clause specifies one or more entity bean finder methods that may be called by the transaction.

reads entity-list

The reads clause specifies one or more entities that may be read by the transaction. If an entity is not listed in a reads clause, it will definitely not be read by the transaction. The reads clause corresponds to the ejbLoad entity bean callback method.

creates entity-list

The creates clause specifies one or more entities that may be created by the transaction. If an entity is not listed in a creates clause, it will definitely not be created by the transaction. The creates clause corresponds to the ejbCreate entity bean methods.

updates entity-list

The updates clause specifies one or more entities that may be updated by the transaction. If an entity is not listed in an updates clause, it will definitely not be updated by the transaction.

The updates clause corresponds to the ejbStore entity bean callback method. The EJB 1.1 specification requires that if ejbCreate is called in a transaction to create an entity, ejbStore must also be called by the container before the transaction completes. The updates clause is nevertheless omitted if the entity's cmp-fields are not modified outside of ejbCreate. The container may use this information to avoid unnecessary database updates, although ejbStore must still be called on the bean class.
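
The analogous optimization in a bean-managed entity (a sketch only; the field and method names below are hypothetical) is a dirty flag that suppresses the SQL UPDATE while ejbStore is still invoked:

    // Illustration only: skip the database write when no cmp-like field
    // changed outside ejbCreate; the container still calls ejbStore.
    private boolean dirty = false;
    private String status;

    public void setStatus(String status) {    // hypothetical business method
        this.status = status;
        this.dirty = true;                    // state modified outside ejbCreate
    }

    public void ejbStore() {
        if (!dirty) {
            return;                           // invoked, but no UPDATE issued
        }
        // ... issue the SQL UPDATE for this entity's row ...
        dirty = false;
    }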

deletes entity-list

The deletes clause specifies one or more entities that may be deleted by the transaction. If an entity is not listed in a deletes clause, it will definitely not be deleted by the transaction. The deletes clause corresponds to the ejbRemove entity bean callback method.
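
Pulling the four clauses together, the following minimal EJB 1.1 entity bean skeleton (an illustration, not one of the Reference Beans) marks the container callback that each clause corresponds to:

    import javax.ejb.EntityBean;
    import javax.ejb.EntityContext;

    // Illustration only: the callbacks that the reads/creates/updates/deletes
    // clauses of the assertion language correspond to.
    public class ExampleEnt implements EntityBean {
        private EntityContext ctx;

        public String ejbCreate(String pk) { return pk; } // "creates" clause (BMP returns the key)
        public void ejbPostCreate(String pk) {}
        public void ejbLoad()   {}                        // "reads" clause
        public void ejbStore()  {}                        // "updates" clause
        public void ejbRemove() {}                        // "deletes" clause

        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void setEntityContext(EntityContext ctx) { this.ctx = ctx; }
        public void unsetEntityContext() { this.ctx = null; }
    }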

A.3 List of Transactions

transaction Mfg.LargeOrderSes.findLargeOrders
    calls 
          Mfg.LargeOrderEnt.ejbFindAll
    reads 
          Mfg.LargeOrderEnt
    end transaction
transaction Mfg.WorkOrderSes.completeWorkOrder
    calls 
          Mfg.AssemblyEnt.ejbFindByPrimaryKey,
          Mfg.InventoryEnt.ejbFindByPrimaryKey,
          Mfg.WorkOrderEnt.ejbFindByPrimaryKey
    reads 
          Mfg.AssemblyEnt,
          Mfg.InventoryEnt,
          Mfg.WorkOrderEnt
    updates
          Mfg.InventoryEnt,
          Mfg.WorkOrderEnt
    end transaction
transaction Mfg.WorkOrderSes.scheduleWorkOrder__String__int__Date
    calls 
          Mfg.AssemblyEnt.ejbFindByPrimaryKey,
          Mfg.BomEnt.ejbFindBomForAssembly,
          Mfg.ComponentEnt.ejbFindByPrimaryKey,
          Mfg.InventoryEnt.ejbFindByPrimaryKey,
          Supplier.POLineEnt.ejbFindByPO,
          Supplier.POLineEnt.ejbFindByPrimaryKey,
          Supplier.SComponentEnt.ejbFindByPrimaryKey,
          Supplier.SupplierCompEnt.ejbFindByPrimaryKey,
          Supplier.SupplierEnt.ejbFindAll
    reads
          Mfg.AssemblyEnt,
          Mfg.BomEnt,
          Mfg.ComponentEnt,
          Mfg.InventoryEnt,
          Mfg.WorkOrderEnt,
          Supplier.POEnt,
          Supplier.POLineEnt,
          Supplier.SComponentEnt,
          Supplier.SupplierCompEnt,
          Supplier.SupplierEnt
    creates
          Mfg.WorkOrderEnt,
          Supplier.POEnt,
          Supplier.POLineEnt
    updates
          Mfg.InventoryEnt,
          Mfg.WorkOrderEnt,
          Supplier.SComponentEnt
    end transaction
transaction Mfg.WorkOrderSes.scheduleWorkOrder__int__int__String__int__Date
    calls
          Mfg.AssemblyEnt.ejbFindByPrimaryKey,
          Mfg.BomEnt.ejbFindBomForAssembly,
          Mfg.ComponentEnt.ejbFindByPrimaryKey,
          Mfg.InventoryEnt.ejbFindByPrimaryKey,
          Mfg.LargeOrderEnt.ejbFindByOrderLine,
          Supplier.POLineEnt.ejbFindByPO,
          Supplier.POLineEnt.ejbFindByPrimaryKey,
          Supplier.SComponentEnt.ejbFindByPrimaryKey,
          Supplier.SupplierCompEnt.ejbFindByPrimaryKey,
          Supplier.SupplierEnt.ejbFindAll
    reads 
          Mfg.AssemblyEnt,
          Mfg.BomEnt,
          Mfg.ComponentEnt,
          Mfg.InventoryEnt,
          Mfg.LargeOrderEnt,
          Mfg.WorkOrderEnt,
          Supplier.POEnt,
          Supplier.POLineEnt,
          Supplier.SComponentEnt,
          Supplier.SupplierCompEnt,
          Supplier.SupplierEnt
    creates
          Mfg.WorkOrderEnt,
          Supplier.POEnt,
          Supplier.POLineEnt
    updates
          Mfg.InventoryEnt,
          Mfg.WorkOrderEnt,
          Supplier.SComponentEnt
    deletes
          Mfg.LargeOrderEnt
    end transaction
transaction Mfg.WorkOrderSes.updateWorkOrder
    calls 
          Mfg.WorkOrderEnt.ejbFindByPrimaryKey
    reads
          Mfg.WorkOrderEnt
    updates 
          Mfg.WorkOrderEnt
    end transaction
transaction Supplier.ReceiverSes.deliverPO
    calls 
          Supplier.SComponentEnt.ejbFindByPrimaryKey,
          Supplier.POEnt.ejbFindByPrimaryKey,
          Supplier.POLineEnt.ejbFindByPrimaryKey,
          Mfg.ComponentEnt.ejbFindByPrimaryKey,
          Mfg.InventoryEnt.ejbFindByPrimaryKey
    reads 
          Mfg.ComponentEnt,
          Mfg.InventoryEnt,
          Supplier.SComponentEnt,
          Supplier.POEnt,
          Supplier.POLineEnt
    creates 
          Mfg.InventoryEnt
    updates 
          Supplier.SComponentEnt,
          Supplier.POEnt,
          Supplier.POLineEnt,
          Mfg.InventoryEnt
    end transaction
transaction Orders.CartSes.buy
    calls 
          Corp.CustomerEnt.ejbFindByPrimaryKey,
          Corp.DiscountEnt.ejbFindByPrimaryKey,
          Corp.RuleEnt.ejbFindByPrimaryKey,
          Orders.ItemEnt.ejbFindByPrimaryKey
    reads 
          Corp.CustomerEnt,
          Corp.DiscountEnt,
          Corp.RuleEnt,
          Mfg.LargeOrderEnt,
          Orders.ItemEnt
    creates 
          Mfg.LargeOrderEnt,
          Orders.OrderEnt,
          Orders.OrderLineEnt
    end transaction
transaction Orders.OrderCustomerSes.addCustomer
    calls 
          Corp.RuleEnt.ejbFindByPrimaryKey,
          Orders.OrderCustomerEnt.ejbFindByPrimaryKey
    creates 
          Corp.CustomerEnt,
          Orders.OrderCustomerEnt
    end transaction
transaction Orders.OrderCustomerSes.validateCustomer
    calls 
          Orders.OrderCustomerEnt.ejbFindByPrimaryKey 
    end transaction
transaction Orders.OrderSes.cancelOrder
    calls 
          Orders.OrderEnt.ejbFindByPrimaryKey,
          Orders.OrderLineEnt.ejbFindByOrder
    reads 
          Orders.OrderEnt,
          Orders.OrderLineEnt
    deletes 
          Orders.OrderEnt,
          Orders.OrderLineEnt
    end transaction
transaction Orders.OrderSes.changeOrder
    calls
          Corp.CustomerEnt.ejbFindByPrimaryKey,
          Corp.DiscountEnt.ejbFindByPrimaryKey,
          Corp.RuleEnt.ejbFindByPrimaryKey,
          Orders.ItemEnt.ejbFindByPrimaryKey,
          Orders.OrderEnt.ejbFindByPrimaryKey,
          Orders.OrderLineEnt.ejbFindByOrderAndItem
    reads 
          Corp.CustomerEnt,
          Corp.DiscountEnt,
          Corp.RuleEnt,
          Orders.ItemEnt,
          Orders.OrderEnt,
          Orders.OrderLineEnt
    creates 
          Orders.OrderLineEnt
    updates
          Orders.OrderEnt,
          Orders.OrderLineEnt
    deletes 
          Orders.OrderLineEnt
    end transaction
transaction Orders.OrderSes.getCustomerStatus
    calls 
          Orders.OrderEnt.ejbFindByCustomer,
          Orders.OrderLineEnt.ejbFindByOrder
    reads
          Orders.OrderEnt,
          Orders.OrderLineEnt
    end transaction
transaction Orders.OrderSes.getOrderStatus
    calls 
          Orders.OrderEnt.ejbFindByPrimaryKey,
          Orders.OrderLineEnt.ejbFindByOrder
    reads 
          Orders.OrderEnt,
          Orders.OrderLineEnt
    end transaction
transaction Orders.OrderSes.newOrder
    calls 
          Corp.CustomerEnt.ejbFindByPrimaryKey,
          Corp.DiscountEnt.ejbFindByPrimaryKey,
          Corp.RuleEnt.ejbFindByPrimaryKey,
          Orders.ItemEnt.ejbFindByPrimaryKey
    reads 
          Corp.CustomerEnt,
          Corp.DiscountEnt,
          Corp.RuleEnt,
          Mfg.LargeOrderEnt,
          Orders.ItemEnt
    creates 
          Mfg.LargeOrderEnt,
          Orders.OrderEnt,
          Orders.OrderLineEnt
    end transaction
transaction Supplier.POEnt.ejbCreate
    creates 
            Supplier.POEnt,
            Supplier.POLineEnt
    end transaction
transaction Util.SequenceEnt.nextSequenceBlock
    reads 
          Util.SequenceEnt
    updates
          Util.SequenceEnt
    end transaction
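
The read, update, and delete sets above feed directly into the isolation requirements of section 2.10.4. As one hedged illustration (not a requirement of the benchmark; the table and column names M_INVENTORY, in_qty, and in_p_id are hypothetical), a resource manager operating at READ COMMITTED might protect the rows a transaction's updates set says it will modify by locking them at read time:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class IsolationSketch {
        // Illustration only: lock a row the transaction intends to update.
        static void lockInventoryRow(Connection conn, String partId) throws SQLException {
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT in_qty FROM M_INVENTORY WHERE in_p_id = ? FOR UPDATE")) {
                ps.setString(1, partId);
                try (ResultSet rs = ps.executeQuery()) {
                    // the row is now locked until commit or rollback
                }
            }
        }
    }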

Appendix B - Centralized Workload Category Examples

This appendix contains examples of systems for the different SPECjAppServer2001 centralized workload categories, which are defined in section 3.5.3 of this document. This is not an all-inclusive list, but rather a set of examples of the sorts of systems that fall into each category.

B.1 Single Node Systems

Configurations allowed:

Configurations not allowed:

B.2 Dual Node Systems

Configurations allowed:

Configurations not allowed:

B.3 Multiple Node Systems

Configurations allowed:

Configurations not allowed:


Appendix C - Challenging Results Based on Supplied Pricing Information

The subcommittee will use the following process for handling challenges related to pricing.

Anyone may challenge the supplied pricing information for any result published on SPEC's website for which all components are listed as generally available. A challenge may be made during the first 90 days after the later of the result's publication date or its general availability date.

The burden of proof that the supplied pricing information is inaccurate rests on the member challenging the result. This proof must be documented in writing or by email and sent to the result's submitter and the subcommittee. Note: To facilitate resolution of the challenge, the challenger may, as a courtesy, inform the submitter of the challenge prior to notifying the subcommittee.

The submitter is expected to acknowledge receipt of the challenge within 5 working days of the submitter and the subcommittee being notified.

The subcommittee will review materials from the challenging and challenged parties and vote on a proposal to undertake a re-review of the challenged pricing information. If the proposal passes, the subcommittee will undertake the re-review and determine whether the original pricing is inaccurate.

Prior to the start of the re-review, the submitter may acknowledge the validity of the challenge by re-submitting with corrected pricing. The re-submission will be handled in the next available review cycle.

If the original pricing is declared inaccurate, either by the admission of the submitter or by the decision of the subcommittee, the original result will be marked Not-Available (refer to Section 2.3.3 of the OSG Policy Document). The reason for the change will be noted on the result page, and the submitter may request that a note on the resolution be added as well.

The supplied pricing information may also be challenged during the review phase if all components are listed as generally available at that time. The process and requirements are the same as shown above. The result in review will not be published until the subcommittee has resolved any open issues.

The submitter of the result in question may appeal the subcommittee's decision to the OSSC. The OSSC may choose to employ an independent third party to verify the current pricing against the submitted pricing disclosure. This third party may be selected from independent public accountants, consulting firms specializing in benchmark audits, or other parties not directly affiliated with the submitter or their competitors. The OSSC, at its discretion, may require the submitter to pay SPEC for the costs involved in the audit.


Appendix D - Run Rules Document Change Log

2002/09/20 - Updated section 3.4 and section 4.3.1 to correct an inconsistency in the general availability information.

2002/10/18 - Updated section 4.2.3 to provide more detail on software support requirements.



Copyright (c) 2002 Standard Performance Evaluation Corporation