SPECjAppServer2002 Run and Reporting Rules Version 1.01
Section 2 - Running SPECjAppServer2002
Appendix A - SPECjAppServer2002 Transactions
Appendix B - Centralized Workload Category Examples
Appendix C - Challenging Results Based on Supplied Pricing Information
This document specifies how the SPECjAppServer2002 benchmark is to be run for measuring and publicly reporting performance results. These rules abide by the norms laid down by SPEC. This ensures that results generated with this benchmark are meaningful, comparable to other generated results, and are repeatable (with documentation covering factors pertinent to duplicating the results).
Per the SPEC license agreement, all results publicly disclosed must adhere to these Run and Reporting Rules.
The general philosophy behind the rules for running the SPECjAppServer2002 benchmark is to ensure that an independent party can reproduce the reported results.
For results to be publishable, SPEC expects:
SPEC is aware of the importance of optimizations in producing the best system performance. SPEC is also aware that it is sometimes hard to draw an exact line between legitimate optimizations that happen to benefit SPEC benchmarks and optimizations that specifically target the SPEC benchmarks. However, with the rules below, SPEC wants to increase the awareness by implementers and end users of issues of unwanted benchmark-specific optimizations that would be incompatible with SPEC's goal of fair benchmarking.
Results must be reviewed and approved by SPEC prior to public disclosure. The submitter must have a valid SPEC license for this benchmark to submit results. Furthermore, SPEC expects that any public use of results from this benchmark shall follow the SPEC OSG Fair Use Policy and those specific to this benchmark (see section 3.7.3 below). In the case where it appears that these guidelines have been violated, SPEC may investigate and request that the offense be corrected or the results resubmitted.
SPEC reserves the right to modify the benchmark codes, workloads, and rules of SPECjAppServer2002 as deemed necessary to preserve the goal of fair benchmarking. SPEC will notify members and licensees whenever it makes changes to the benchmark and may rename the metrics. In the event that the workload or metric is changed, SPEC reserves the right to republish in summary form adapted results for previously published systems.
The term Deployment Unit refers to an Enterprise Java Bean (EJB) Container or set of Containers in which the Beans from a particular domain are deployed.
The term ECclient refers to a thread or process that holds references to EJBs in a Deployment Unit. An ECclient does not necessarily map into a TCP connection to the Container.
The term SPECjAppServer2002 Reference Beans refers to the implementation of the EJBs provided for the SPECjAppServer2002 workload.
The term SPECjAppServer2002 Kit refers to the complete kit provided for SPECjAppServer2002. This includes the SPECjAppServer2002 Reference Beans, the Driver and load programs.
The term TOPS is the primary SPECjAppServer2002 metric and denotes the average number of successful Total Operations Per Second completed during the Measurement Interval. TOPS is composed of the total number of business transactions completed in the Customer Domain, added to the total number of workorders completed in the Manufacturing Domain, normalized per second.
The term Resource Manager is the software that manages a database and is the same as a Database Manager.
The following terms are defined in the Workload Description of the SPECjAppServer2002 Design Document:
All software components required to run the benchmark in the System Under Test (SUT) must be implemented by products that are generally available, supported and documented (see section 3.4 for general availability rules). These include but are not limited to:
The SUT must provide all application components with a runtime environment that meets the requirements of the Java 2 Platform, Enterprise Edition, (J2EE) Version 1.3 specification during the benchmark run. The SUT must meet the J2EE compatibility requirements and must be branded Java Compatible, Enterprise Edition.
Comment: A new version of a J2EE compatible product must have passed the J2EE Compatibility Test Suite (CTS) by the product's availability date. See section 3.4 for availability requirements.
The class files provided in the driver.jar file of the SPECjAppServer2002 kit must be used as is. No source code recompilation is allowed.
The source files provided for the SPECjAppServer2002 Reference Beans that require isolation level READ_COMMITTED (see section 2.10.4.3) in the SPECjAppServer2002 kit must be used as is (i.e., no source code modification is allowed). The remaining SPECjAppServer2002 Reference Bean source files may be updated to support DBMS or JDBC differences. These updates must be documented (see section 2.5).
The throughput of the SPECjAppServer2002 benchmark is driven by the activity of the OrderEntry and Manufacturing applications. The throughput of both applications is directly related to the chosen Injection Rate. To increase the throughput, the Injection Rate needs to be increased. The benchmark also requires a number of rows to be populated in the various tables. The scaling requirements are used to maintain the ratio between the ECtransaction load presented to the SUT, the cardinality of the tables accessed by the ECtransactions, the Injection Rate and the number of ECclients generating the load.
Database scaling is defined by the Orders Injection Rate Ir. The scaling is done as a step function and does not cause the database to increase linearly in size.
The cardinality (the number of rows in the table) of the Site and Supplier tables is fixed. The cardinality of the customer related tables, namely, customer, orders, orderline, and workorder will increase as a step function C, which is defined as:
C = the smallest multiple of 10 >= Ir
For example, if Ir = 72 then C = 80.
The cardinality of the item related tables, namely, Parts, BOM, Inventory and Item will increase as a step function P, which is defined as:
P = smallest multiple of 100 >= Ir
For example, if Ir = 72 then P = 100.
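As a worked illustration, here is a minimal Java sketch of the two step functions. The class and method names are ours and are not part of the SPECjAppServer2002 Kit:

```java
public final class ScalingFactors {

    /** Smallest multiple of step that is >= ir. */
    static int stepFunction(int ir, int step) {
        return ((ir + step - 1) / step) * step;
    }

    public static void main(String[] args) {
        int ir = 72;                   // chosen Orders Injection Rate
        int c = stepFunction(ir, 10);  // customer-table scaling: C = 80
        int p = stepFunction(ir, 100); // item-table scaling: P = 100
        System.out.println("C = " + c + ", P = " + p);
    }
}
```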
The following scaling requirements represent the initial configuration of the tables in the various domains, where C and P are defined above.
Domain | Table Name | Cardinality (in rows) | Comments |
---|---|---|---|
Corporate | C_Site | 1 | |
 | C_Supplier | 10 | |
 | C_Customer | 75 * C | |
 | C_Rule | 1 | |
 | C_Discount | 6 | |
 | C_Parts | (11 * P)¹ | P Assemblies + 10 * P Components |
Orders | O_Customer | 75 * C | NUM_CUSTOMERS |
 | O_Item | P | NUM_ITEMS |
 | O_Orders | 75 * C | |
 | O_Orderline | (225 * C)¹ | Avg. of 3 per order |
Manufacturing | M_Parts | (11 * P)¹ | |
 | M_BOM | (10 * P)¹ | |
 | M_Workorder | P | |
 | M_Inventory | (11 * P)¹ | |
Supplier | S_Site | 1 | |
 | S_Supplier | 10 | |
 | S_Component | (10 * P)¹ | Avg. of 10 components per assembly |
 | S_Supp_Component | (100 * P)¹ | |
 | S_PurchaseOrder | (0.2 * P)¹ | 2% of components |
 | S_PurchaseOrderLine | P¹ | Avg. of 5 per purchase order |
¹ These sizes may vary depending on the actual random numbers generated.
To satisfy the requirements of a wide variety of customers, the SPECjAppServer2002 benchmark can be run in Centralized or Distributed mode. The SUT consists of one or more nodes; the number of nodes is freely chosen by the implementer. The databases and EJB Containers can be mapped onto the nodes as required. The implementation must not, however, take special advantage of the co-location of databases and EJB Containers, other than the inherent elimination of WAN/LAN traffic.
In the Centralized version of the workload, all four domains may be combined. This means that the benchmark implementer can choose to run a single Deployment Unit that accesses a single database containing the tables of all the domains. However, a benchmark implementer is free to separate the domains into their own Deployment Units and still run a single database. There are no requirements for XA 2-phase commits in the Centralized workload.
The Distributed version of the workload is intended to model application performance where the world-wide enterprise that SPECjAppServer2002 models performs transactions across business domains employing heterogeneous resource managers. In this model, the workload requires a separate Deployment Unit and a separate DBMS instance for each domain. XA-compliant recoverable 2-phase commits (see The Open Group XA Specification: http://www.opengroup.org/public/pubs/catalog/c193.htm) are required in ECtransactions that span multiple domains. The configuration for this 2-phase commit is required to be done in a way that would support heterogeneous systems. Even though implementations are likely to use the same Resource Manager for all the domains, the EJB Servers/Containers and Resource Managers cannot take advantage of the knowledge of homogeneous Resource Managers to optimize the 2-phase commits.
To stress the ability of the Container to handle concurrent sessions, the benchmark requires a minimum number of ECclients equal to 5 * Ir where Ir is the chosen Injection Rate. The number doesn't change over the course of a benchmark run.
For each new order, the customer to use is defined in the SPECjAppServer2002 Design Document, section 3.7, where nCust = 100 * C. For example, if Ir = 100, the database is initially populated with 7500 customers (NUM_CUSTOMERS), and nCust = 10000.
The Manufacturing Application scales in a similar manner to the OrderEntry Application. Since the goal is just-in-time manufacturing, as the number of orders increases, a corresponding increase in the rate at which widgets are manufactured is required. This is achieved by increasing the number of Planned Lines p proportionally to Ir as
p = 3 * Ir
Since the arrival of large orders automatically determines the LargeOrder Lines, nothing special needs to be done about these.
All tables must have the properly scaled number of rows as defined by the database population requirements (see section 2.3).
Additional database objects or DDL modifications made to the reference schema scripts in the schema/sql directory in the SPECjAppServer2002 Kit must be disclosed along with the specific reason for the modifications. The base tables and indexes in the reference scripts cannot be replaced or deleted. Views are not allowed. The data types of fields can be modified provided they are semantically equivalent to the standard types specified in the scripts.
Comment 1: Replacing char with varchar would be considered semantically equivalent. Changing the size of a field (for example: increasing the size of a char field from 8 to 10) would not be considered semantically equivalent. Replacing char with integer (for example: zip code) would not be considered semantically equivalent.
Modifications that a customer may make for compatibility with a particular database server are allowed. Changes may also be necessary to allow the benchmark to run without the database becoming a bottleneck, subject to approval by SPEC. Examples of such changes include:
In any committed state the primary key values must be unique within each table. For example, in the case of a horizontally partitioned table, primary key values of rows across all partitions must be unique.
The databases must be populated using the load programs provided as part of the SPECjAppServer2002 kit. The load programs use standard SQL INSERT statements and load all the tables via JDBC, and so should work unchanged across all DBMSs. However, modifications are permitted for porting purposes. All such modifications must be disclosed in the Submission File.
The Test Sponsor must run the SPECjAppServer2002 Reference Beans. The SPECjAppServer2002 Reference Beans come in both CMP and BMP versions. The Sponsor can choose to deploy either CMP or BMP or a mix of both. See section 2.2 for Container requirements.
The only changes allowed to the SPECjAppServer2002 Reference Beans are in the classes that implement the bean-managed persistence (BMP). The only changes allowed to the BMP code are for porting changes, similar to section 2.4. All code modifications must appear in the Submission File (see section 2.10.4.3), along with an explanation for the changes.
The deployment descriptors supplied with the SPECjAppServer2002 Reference Beans must be used without any modifications. If deploying in CMP mode, all the finder methods implemented in the BMP code must be specified in the deployment descriptors with the same SQL semantics and implemented transparently by the Container.
Comment: Transparent implementation of the finder methods implies that the Container must generate the code for these methods automatically.
Commit Option A, specified in section 9.1.10 of the EJB 1.1 specification, is not allowed. It is assumed that the database(s) could be modified by external applications.
Optimizations used to avoid ejbStore operations on entity beans are allowed only if the deployer does not need knowledge of the internal implementation of the SPECjAppServer2002 Reference Beans. If such optimizations are not transparent to the deployer, they must be disclosed.
Comment: The intent of this section is to encourage ejbStore optimizations to be done automatically by the container.
The OrderEntry Driver repeatedly performs business transactions in the Customer Domain. Business transactions are selected by the Driver based on the mix shown in Table 2. Since the benchmark is intended to test the transaction handling capabilities of EJB Containers, the mix is update intensive. In the real world, there may be more readers than writers.
The actual mix achieved in the benchmark must be within 5% of the targeted mix for each type of transaction. For example, the newOrder transactions can vary between 47.5% and 52.5% of the total mix. The Driver checks and reports on whether the mix requirement was met, as shown in the sketch after Table 2.
Table 2: OrderEntry Transaction Mix

Transaction Type | Percent Mix |
---|---|
newOrder | 50% |
getOrderStatus | 20% |
changeOrder | 20% |
getCustStatus | 10% |
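The 5% tolerance is relative to each target percentage. A hedged sketch of the check, assuming the Driver compares achieved against targeted percentages per transaction type (the authoritative check ships in driver.jar):

```java
// Relative 5% tolerance: for the 50% newOrder target, the achieved
// mix must land between 47.5% and 52.5%.
final class MixCheck {
    static boolean withinMixTolerance(double achievedPct, double targetPct) {
        return Math.abs(achievedPct - targetPct) <= 0.05 * targetPct;
    }
}
```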
The OrderEntry Driver measures and records the Response Time of the different types of business transactions. Only successfully completed business transactions in the Measurement Interval are included. At least 90% of the business transactions of each type must have a Response Time of less than the constraint specified in Table 3 below. The average Response Time of each transaction type must not be greater than 0.1 seconds more than the 90% Response Time. This requirement ensures that all users will see reasonable response times. For example, if the 90% Response Time of newOrder transactions is 1 second, then the average cannot be greater than 1.1 seconds. The Driver checks and reports on whether the response time requirements were met.
Table 3: Response Time Requirements

Transaction Type | 90% RT (in seconds) |
---|---|
newOrder | 2 |
getOrderStatus | 2 |
changeOrder | 2 |
getCustStatus | 2 |
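A hedged sketch of how this requirement can be checked against a set of response-time samples; the supplied Driver performs the authoritative check, and the nearest-rank percentile estimate here is our assumption:

```java
import java.util.Arrays;

final class ResponseTimeCheck {
    // At least 90% of samples must beat the Table 3 limit, and the average
    // must not exceed the 90th-percentile time by more than 0.1 seconds.
    static boolean meetsRequirement(double[] timesSec, double limitSec) {
        double[] sorted = timesSec.clone();
        Arrays.sort(sorted);
        // Nearest-rank estimate of the 90th-percentile response time.
        double p90 = sorted[(int) Math.ceil(0.9 * sorted.length) - 1];
        double avg = Arrays.stream(sorted).average().orElse(0.0);
        return p90 < limitSec && avg <= p90 + 0.1;
    }
}
```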
For each business transaction, the OrderEntry Driver selects cycle times from a negative exponential distribution, computed from the following equation, so that the chosen average Injection Rate can be achieved as closely as possible.
Tc = -ln(x) / Ir
where:

ln = natural log (base e)
x = random number with at least 31 bits of precision, from a uniform distribution such that (0 < x <= 1)
Ir = mean Injection Rate
The distribution is truncated at 5 times the mean. For each business transaction, the Driver measures the Response Time Tr and computes the Delay Time Td as Td = Tc - Tr. If Td > 0, the Driver will sleep for this time before beginning the next transaction. If the chosen cycle time Tc is smaller than Tr, then the actual cycle time (Ta) is larger than the chosen one. The average actual cycle time is allowed to deviate from the targeted one by 5%. The Driver checks and reports on whether the cycle time requirements were met.
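An illustrative Java sketch of this pacing logic (the actual implementation ships in driver.jar; the names here are ours):

```java
import java.util.Random;

// Cycle times drawn from a negative exponential distribution with
// mean 1/Ir, truncated at 5 times the mean.
final class CycleTime {
    private static final Random rng = new Random();

    /** Chosen cycle time Tc, in seconds, for mean Injection Rate ir. */
    static double chooseCycleTime(double ir) {
        // x must be uniform in (0, 1]; nextDouble() yields [0, 1), so flip it.
        double x = 1.0 - rng.nextDouble();
        double tc = -Math.log(x) / ir;
        return Math.min(tc, 5.0 / ir);   // truncate at 5 times the mean
    }

    /** Sleep for the Delay Time Td = Tc - Tr when the response beat Tc. */
    static void pace(double tc, double tr) throws InterruptedException {
        double td = tc - tr;
        if (td > 0) {
            Thread.sleep((long) (td * 1000.0));
        }
    }
}
```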
The table below shows the range of values allowed for various quantities in the OrderEntry application. The Driver will check and report on whether these requirements were met.
Quantity | Targeted Value | Min. Allowed | Max. Allowed |
---|---|---|---|
Widget Ordering Rate/sec | 14.25 * Ir | 13.54 * Ir | 14.96 * Ir |
LargeOrder Widget Ordering Rate/sec | 7.5 * Ir | 7.13 * Ir | 7.88 * Ir |
RegularOrder Widget Ordering Rate/sec | 6.75 * Ir | 6.41 * Ir | 7.09 * Ir |
% Large Orders | 10 | 9.5 | 10.5 |
% Orders thru Cart | 50 | 47.5 | 52.5 |
% ChgOrders that were delete | 10 | 9.0 | 11.0 |
The Metric for the Customer Domain is Transactions/sec, composed of the total count of all business transaction types successfully completed during the measurement interval divided by the length of the measurement interval in seconds.
The Manufacturing Driver measures and records the time taken for a workorder to complete. Only successfully completed workorders in the Measurement Interval are included. At least 90% of the workorders must have a Response Time of less than 5 seconds. The average Response Time must not be greater than 0.1 seconds more than the 90% Response Time.
The table below shows the range of values allowed for various quantities in the Manufacturing Application. The Manufacturing Driver will check and report on whether the run meets these requirements.
Quantity | Targeted Value | Min. Allowed | Max. Allowed |
---|---|---|---|
LargeOrderline Widget Rate/sec | 6.75 * Ir | 6.075 * Ir | 7.425 * Ir |
Planned Line Widget Rate/sec | 6.75 * Ir | 6.075 * Ir | 7.425 * Ir |
The metric for the Manufacturing Domain is Workorders/sec, counting workorders completed on either the Planned lines or the LargeOrder lines.
The Driver is provided as part of the SPECjAppServer2002 kit. Sponsors are required to use this Driver to run the SPECjAppServer2002 benchmark.
The Driver communicates with the SUT using the RMI interface over a protocol supported by the EJB Container, such as RMI/JRMP, RMI/IIOP, RMI/T3, etc.
The Driver must reside on system(s) that are not part of the SUT.
Comment: The intent of this section is that the communication between the Driver and the SUT be accomplished over the network.
The Driver system(s) must use a single URL to establish communication with the Container in the case of the Centralized Workload and 4 URLs (one per Domain) in the case of the Distributed Workload.
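A hedged sketch of what establishing that single URL might look like on a Driver system; the initial-context factory class and provider URL below are placeholders, since both are vendor-specific and depend on the protocol in use (JRMP, IIOP, T3, etc.):

```java
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

final class DriverConnection {
    static Context connect() throws NamingException {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.vendor.jndi.InitialContextFactoryImpl"); // placeholder, vendor-specific
        env.put(Context.PROVIDER_URL, "iiop://appserver:900"); // the single URL
        return new InitialContext(env);
    }
}
```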
EJB object stubs invoked by the Driver on the Driver system(s) are limited to data marshalling functions, load-balancing and failover capabilities. Pre-configured decisions, based on specific knowledge of SPECjAppServer2002 and/or the benchmark configuration are disallowed.
The Driver system(s) may not perform any processing ordinarily performed by the SUT, as defined in section 2.12. This includes, but is not limited to:
The Driver records all exceptions in error logs. The only expected errors are those related to transaction consistency when a transaction may occasionally rollback due to conflicts. Any other errors that appear in the logs must be explained in the Submission File.
The Orders and Manufacturing Applications must be started simultaneously at the start of a benchmark run. The Measurement Interval must be preceded by a ramp-up period of at least 10 minutes at the end of which a steady state throughput level must be reached. At the end of the Measurement Interval, the steady state throughput level must be maintained for at least 5 minutes, after which the run can terminate.
The reported metric must be computed over a Measurement Interval during which the throughput level is in a steady state condition that represents the true sustainable performance of the SUT. Each Measurement Interval must be at least 30 minutes long and should be representative of an 8 hour run.
Comment: The intent is that any periodic fluctuations in the throughput or any cyclical activities, e.g. JVM garbage collection, database checkpoints, etc. be included as part of the Measurement Interval.
To demonstrate the reproducibility of the steady state condition during the Measurement Interval, a minimum of one additional (and non-overlapping) Measurement Interval of the same duration as the reported Measurement Interval must be measured, and its TOPS must be within 5% of the reported TOPS.
The Atomicity, Consistency, Isolation and Durability (ACID) properties of transaction processing systems must be supported by the system under test during the running of this benchmark.
The system under test must guarantee that database transactions are atomic; the system will either perform all individual operations on the data, or will assure that no partially-completed operations leave any effects on the data.
Atomicity Test 1:

a. Choose a customer who has bad credit by looking in the C_CUSTOMER table for a customer with the C_CREDIT field equal to 'BC'.
b. Modify the debug level in the OrderEnt bean deployment to 4 so the code will print the order ID as soon as it generates it.
c. Enter a new order for this customer using the web client application, distributed as part of the SPECjAppServer2002 Kit. Note the order ID printed by the bean code. The transaction should fail, generating an InsufficientCreditException.
d. Retrieve the status of the order ID noted in step c. The order should not exist.
e. Query the database table O_ORDERLINE for rows where OL_O_ID match the order ID printed in step c. There should be no rows returned.
Atomicity Test 2:

a. Choose a customer with good credit by looking in the C_CUSTOMER table for a customer with the C_CREDIT field equal to 'GC'.
b. Enter a new order for this customer using the web client application. The transaction should succeed. Note the order ID returned.
c. Retrieve the status of the noted order ID above. The order along with the orderlines entered in step b. should be displayed.
In addition to performing atomicity tests 1 and 2 above, a third test must be performed as described below.
a. Modify the debug level in the OrderEnt bean deployment to 4 so the code will print the order ID and orderLine IDs as soon as it generates them.
b. Do the same for the LargeOrderEnt bean in the Manufacturing Domain.
c. Change the LargeOrderEnt bean in the Manufacturing Domain to add the following to the ejbStore method: entityContext.setRollbackOnly();
d. Enter a new order for any customer, ensuring that it is a largeorder. Note the values of the order ID, orderLine IDs and largeOrder ID displayed.
e. The transaction should roll back. Verify that the rows in the O_orders, O_orderline and M_largeorder tables with the IDs noted in step d do not exist.
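For clarity, the step-c change amounts to something like the following inside the LargeOrderEnt bean class; this sketch assumes the bean keeps its EntityContext in a field named entityContext, which may differ in the actual Reference Bean source:

```java
public void ejbStore() {
    // Force the enclosing transaction to roll back so the test can verify
    // that no partially committed order/largeorder rows survive.
    entityContext.setRollbackOnly();
    // ... the bean's original ejbStore persistence logic follows ...
}
```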
This section describes the transaction isolation and consistency requirements. One can choose to implement the requirements in this section by any mechanism supported by the Container.
All ECtransactions must take the database from one consistent state to another. The various isolation levels are described in ANSI SQL and J2SE documentation for java.sql.Connection. The isolation levels are also described in the TPC-C specification available from http://www.tpc.org. All database transactions must have an isolation level of READ_COMMITTED or higher; i.e., dirty reads are not allowed.
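In JDBC terms, the minimum corresponds to the following standard isolation constant (a sketch; containers may configure this declaratively instead):

```java
import java.sql.Connection;
import java.sql.SQLException;

final class IsolationSetup {
    static void setMinimumIsolation(Connection conn) throws SQLException {
        // READ_COMMITTED is the floor; TRANSACTION_READ_UNCOMMITTED
        // (dirty reads) is disallowed.
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }
}
```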
For the purposes of specifying consistency, we use the following logical isolation levels:
These isolation levels are semantically equivalent to the ANSI SQL isolation levels but are defined on a per-entity basis. The logical isolation levels do not imply the use of the corresponding database isolation level. For example, it is possible to use the READ_COMMITTED database isolation level and optimistic techniques such as verified finders, reads, updates and deletes, or pessimistic locking using SELECT FOR UPDATE type semantics to implement these logical isolation levels.
Comment 1: If an entity is deployed with a logical isolation of REPEATABLE_READ (or higher), it must be ensured that in any transaction where this entity is read, updated or deleted, the transaction will never be committed if the entity was updated or deleted in the database (by another committed transaction) since it was first read by the transaction. Note that optimizations to avoid database updates to entities that have not been changed in a given transaction are not valid if the suppression of updates results in an effective isolation level lower than REPEATABLE_READ. Additionally, if the container pre-loads entity state while executing finder methods (to avoid re-selecting the data at ejbLoad time), the mechanism normally used to ensure REPEATABLE_READ must still be effective, unless another mechanism is provided to ensure REPEATABLE_READ in this case. For example, if SELECT FOR UPDATE would normally be used at ejbLoad time, then SELECT FOR UPDATE should be used when executing those finder methods which pre-load entity state.
Comment 2: If database isolation level is used to implement the logical isolation level, it should be set to the highest logical isolation level of all the entities participating in the transaction. See Appendix A for a description of the entities accessed in each of the transactions.
Comment 3: If an entity is deployed with a logical isolation of READ_COMMITTED, and if that entity is not changed in a given transaction, then the container must not issue a database update that would have the effect of losing any external updates that are applied while the transaction is executing. If the container does not have the ability to suppress unnecessary updates that could interfere with external updates, then all entities must be deployed using the REPEATABLE_READ isolation level (or higher).
In all cases where a logical isolation level is specified, this is the minimum required. Use of a higher logical isolation level is permitted.
The following entities are infrequently updated with no concurrent updates and can be configured to run with a logical isolation level of READ_COMMITTED:
All other entities must run with a logical isolation level of REPEATABLE_READ.
Comment: In order to preserve referential integrity between OrderEnt and OrderLineEnt, all access to order lines within a given transaction is preceded by access to the corresponding order.
The method used to achieve the requirements in this section must be disclosed.
Comment 1: The BMP implementation of the SPECjAppServer2002 Reference Beans uses optimistic techniques for all entities that must be run with the REPEATABLE_READ isolation level.
Comment 2: Transaction rollbacks caused by conflicts when using concurrency control techniques are permitted.
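To make the optimistic approach concrete, here is a hedged JDBC sketch of a "verified update" that yields REPEATABLE_READ semantics on top of READ_COMMITTED. The version column is illustrative only; the actual Reference Beans verify against the field values originally read rather than a version counter:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

final class VerifiedUpdate {
    static void updateOrderStatus(Connection conn, int orderId, int newStatus,
                                  int versionReadAtLoad) throws SQLException {
        // The UPDATE succeeds only if the row still matches the state the
        // transaction first read (here approximated by a version counter).
        String sql = "UPDATE O_orders SET o_status = ?, o_version = o_version + 1 "
                   + "WHERE o_id = ? AND o_version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, newStatus);
            ps.setInt(2, orderId);
            ps.setInt(3, versionReadAtLoad);
            if (ps.executeUpdate() == 0) {
                // Another committed transaction changed the row since it was
                // first read; roll back (conflict-induced rollbacks are
                // permitted, per Comment 2 above).
                throw new SQLException("Optimistic conflict; roll back transaction");
            }
        }
    }
}
```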
Transactions must be durable from any single point of failure on the SUT. In particular, distributed 2-Phase Commit transactions must be durable. Durability implies that all committed transactions before the failure must be recoverable.
Comment: Durability from a single point of failure can be achieved by ensuring that there is a backup device (disk or tape) for the database and that the logs can withstand a single point of failure. This is typically implemented by mirroring the logs onto a separate set of disks.
The Supplier Emulator is provided as part of the SPECjAppServer2002 Kit and can be deployed on any Web Server that supports Servlets 2.1.
The Supplier Emulator must reside on system(s) that are not part of the SUT. The Supplier Emulator may reside on one of the Driver systems.
Comment: The intent of this section is that the communication between the Supplier Emulator and the SUT be accomplished over the network.
The SUT comprises all components which are being tested. This includes network connections, Application Servers/Containers, Database Servers, etc.
The SUT consists of:
Comment 1: Any components which are required to form the physical TCP/IP connections (commonly known as the NIC, Network Interface Card) from the host system(s) to the client machines are considered part of the SUT.
Comment 2: A basic configuration consisting of one or more switches between the Driver and the SUT is not considered part of the SUT. However, if any software/hardware is used to influence the flow of traffic beyond basic IP routing and switching, it is considered part of the SUT. For example, if DNS Round Robin is used to implement load balancing, the DNS server is considered part of the SUT and so cannot run on a driver client.
The SUT services remote method calls from the Driver and returns results generated by the SPECjAppServer2002 Reference Beans which may involve information retrieval from a RDBMS. The database must be accessed only from the SPECjAppServer2002 Reference Beans (or the Container acting on behalf of a bean), using JDBC.
The SUT must not perform any caching operations beyond those normally performed by the servers (EJB Containers, Database Servers etc.) which are being used.
Comment: The intention is to allow EJB Container and Database Server caching to work normally but not to allow the implementation to take advantage of the limited nature of the benchmark and to cache information which would normally be retrieved from the Servers.
Any software that is required to build and deploy the SPECjAppServer2002 Reference Beans is considered part of the SUT.
The SUT must have sufficient on-line disk storage to support any expanding system files and the durable database population resulting from executing the SPECjAppServer2002 transaction mix for 8 hours at the reported TOPS.
The primary metric of the SPECjAppServer2002 benchmark is Total Operations Per Second (TOPS).
The overall metric for the SPECjAppServer2002 benchmark is calculated by adding the metrics of the OrderEntry Application in the Customer Domain and the Manufacturing Application in the Manufacturing Domain as:
TOPS = Transactions/sec + Workorders/sec
All reported TOPS must be measured, rather than estimated, and expressed to exactly two decimal places, rounded to the hundredth place.
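A sketch of the computation; note the rules state only "rounded to the hundredth place", so the HALF_UP rounding mode below is our assumption:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

final class TopsMetric {
    static BigDecimal tops(long customerTxns, long workorders, long intervalSec) {
        // TOPS = Transactions/sec + Workorders/sec
        double raw = (double) customerTxns / intervalSec
                   + (double) workorders / intervalSec;
        return BigDecimal.valueOf(raw).setScale(2, RoundingMode.HALF_UP);
    }
}
```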
The performance metric must be reported with the category of the SUT that was used to generate the result (i.e., @SingleNode, @DualNode, @MultipleNode, @Distributed). See section 3.5 for a description of the categories. For example, if a measurement yielded 123.45 TOPS on a Single Node, this must be reported as 123.45 TOPS@SingleNode.
The frequency distribution of the Response Times of all business transactions in the Customer and Manufacturing Domains, started and completed during the Measurement Interval must be reported in a graphical format. In each graph, the x-axis represents the Response Time and must range from 0 to five times the required 90th percentile Response Time (N). This 0 to 5N range must be divided into 100 equal length intervals. One additional interval will include the Response Time range from 5N to infinity. All 101 intervals must be reported. The y-axis represents the frequency of each type of business transaction at a given Response Time range with a granularity of at least 10 intervals. An example of such a graph is shown below.
FIGURE 1: Sample Response Time Graph
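A sketch of the 101-bucket layout described above (the names are ours; the reporter tooling in the Kit produces the actual graphs):

```java
final class RtHistogram {
    private final long[] buckets = new long[101];
    private final double bucketWidth;   // (5 * N) / 100

    /** n is the required 90th-percentile Response Time N for this type. */
    RtHistogram(double n) {
        this.bucketWidth = 5.0 * n / 100.0;
    }

    void record(double responseTimeSec) {
        // 100 equal intervals cover [0, 5N); index 100 holds [5N, infinity).
        int idx = (int) (responseTimeSec / bucketWidth);
        buckets[Math.min(idx, 100)]++;
    }
}
```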
A graph of the workorder throughput versus elapsed time (i.e., wall clock time) must be reported for the Manufacturing Application for the entire Test Run. The x-axis represents the elapsed time from the start of the run. The y-axis represents the throughput in workorders. At least 60 different intervals must be used with a maximum interval size of 30 seconds. The opening and the closing of the Measurement Interval must also be reported. An example of such a graph is shown below.
FIGURE 2: Sample Throughput Graph
Benchmark specific optimization is not allowed. Any optimization of either the configuration or products used on the SUT must improve performance for a larger class of workloads than that defined by this benchmark and must be supported and recommended by the provider. Optimizations must have the vendor's endorsement and should be suitable for production use in an environment comparable to the one represented by the benchmark. Optimizations that take advantage of the benchmark's specific features are forbidden. Examples of inappropriate optimization include, but are not limited to, taking advantage of the specific SQL code used, the sizes of various fields or tables, or the number of beans deployed in the benchmark.
All hardware and software used must be orderable by customers. For any product not already generally released, the Submission File must include a committed general delivery date. That date must not exceed 3 months beyond the Full Disclosure submittal date. However, if Java and/or J2EE related licensing issues cause a change in software availability date after publication date, the change will be allowed to be made without penalty, subject to subcommittee review.
All products used must be the proposed final versions and not prototypes. When the product is finally released, its performance must not decrease by more than 2% from the published TOPS. If the submitter later finds the performance of the released system to be more than 2% lower than that reported for the pre-release system, then the submitter is requested to report a corrected test result.
Comment 1: The intent is to test products that customers will use, not prototypes. Beta versions of products can be used, provided that General Availability (GA) is within 3 months.
Comment 2: The 2% degradation limit only applies to a difference in performance between the tested product and the GA product. Subsequent GA releases (to fix bugs, etc.) are not subject to this restriction.
This section describes how the SPECjAppServer2002 results are categorized. Any given configuration on which results are reported will belong to a single category. In the event of ambiguity as to which category a particular result belongs to, the SPEC Java Review committee will determine the category that most closely reflects the intent of the rules in this section.
Comparison across different categories is a violation of SPEC Fair Use Rules, see section 3.7.3.
Categories are defined in terms of 'Nodes'. A Node is defined as a system running a single OS image.
In a coherent memory system, any CPU can read or write the same memory subsystem as the other processors in the configuration.
The Centralized workload is defined in section 2.3.3. Results reported using this workload fall into one of the three categories as defined below:
Configurations in which a single Node with a coherent memory system runs both the Application Server and Database instances fall into the Single Node System category. Distributed operating system clusters are not allowed in this category.
Configurations in which two Nodes with coherent memory systems are used, one running the Application Server and the other running the Database, fall into the Dual Node System category. Distributed operating system clusters are not allowed in this category.
Configurations containing three or more Nodes fall into the Multiple Node System category. Also, systems running a distributed operating system cluster or non-coherent memory system fall into this category.
The Distributed workload is defined in section 2.3.3. All distributed workload configurations fall into a 'Distributed System' category.
In order to publicly disclose SPECjAppServer2002 results, the submitter must adhere to these reporting rules in addition to having followed the run rules described in this document. The goal of the reporting rules is to ensure the system under test is sufficiently documented such that someone could reproduce the test and its results.
Compliant runs need to be submitted to SPEC for review and approval prior to public disclosure. Submissions must include the Submission File, a Configuration Diagram, and the Full Disclosure Archive for the run (see section 5.1). See section 5.3 of the SPECjAppServer2002 User Guide for details on submitting results to SPEC.
Test results that have not been approved and published by SPEC must not use the SPECjAppServer metrics (TOPS and Price/TOPS) in public disclosures.
SPECjAppServer2002 results must always be quoted using the performance metric, the price/performance metric, and the category in which the results were generated.
Estimates are not allowed.
SPECjAppServer2002 results must not be publicly compared to results from any other benchmark. This would be a violation of the SPECjAppServer2002 Run and Reporting Rules and, in the case of the TPC benchmarks, a serious violation of the TPC "fair use policy."
Results between different categories (see section 3.5) within SPECjAppServer2002 may not be compared; any attempt to do so will be considered a violation of SPEC Fair Use Rules.
Performance comparisons may be based only upon the SPEC defined metrics (SPECjAppServer2002 TOPS@Category or Price/TOPS@Category). Other information from the result page may be used to differentiate systems, i.e., used to define a basis for comparing a subset of systems based on some attribute like number of CPUs or memory size.
Conversions of the Price/TOPS@Category to other currencies for the purpose of competitive comparison, when intended for public view or use, are strictly prohibited.
When competitive comparisons are made using SPECjAppServer2002 benchmark results, SPEC expects that the following template be used:
SPECjAppServer is a trademark of the Standard Performance Evaluation Corp. (SPEC). Competitive numbers shown reflect results published on www.spec.org as of (date). [The comparison presented is based on (basis for comparison).] For the latest SPECjAppServer2002 results visit http://www.spec.org/osg/jAppServer2002.

(Note: the bracketed sentence above is required only if selective comparisons are used.)
Example:
SPECjAppServer is a trademark of the Standard Performance Evaluation Corp. (SPEC). Competitive numbers shown reflect results published on www.spec.org as of August 12, 2002. The comparison presented is based on best performing 4-CPU servers currently shipping by Vendor 1, Vendor 2 and Vendor 3. For the latest SPECjAppServer2002 results visit http://www.spec.org/osg/jAppServer2002.
The rationale for the template is to provide fair comparisons, by ensuring that:
SPEC encourages use of the SPECjAppServer2002 benchmark in academic and research environments. The researcher is responsible for compliance with the terms of any underlying licenses (Application Server, DB Server, hardware, etc.).
It is understood that experiments in such environments may be conducted in a less formal fashion than that demanded of licensees submitting to the SPEC web site. SPEC encourages researchers to obey as many of the run rules as practical, even for informal research. If research results are being published, SPEC requires:
SPEC reserves the right to ask for a full disclosure of any published results.
Public use of SPECjAppServer benchmark results is bound by the SPEC OSSC Fair Use Guidelines and the SPECjAppServer-specific Run and Reporting Rules (this document). All publications must clearly state that these results have not been reviewed or approved by SPEC, using text equivalent to this:
SPECjAppServer is a trademark of the Standard Performance Evaluation Corp. (SPEC). The SPECjAppServer results or findings in this publication have not been reviewed or approved by SPEC, therefore no comparison nor performance inference can be made against any published SPEC result. The official web site for SPECjAppServer2002 is located at http://www.spec.org/osg/jAppServer2002.
This disclosure must precede any results quoted from the tests. It must be displayed in the same font as the results being quoted.
In addition to the performance metric defined in section 3.1, SPECjAppServer2002 includes a price/performance metric defined as Price/TOPS which is the total price of the SUT in the local currency divided by the reported TOPS.
The price/performance metric is rounded up (ceiling function) to the next available currency unit such that fractions of the lowest denomination of local currency are not allowed (e.g., in the US this would be cents, and fractions of a cent are not to be included in the metric). For example, if the total price is US$ 5,734,417 and the reported throughput is 105.12 TOPS, then the price/performance is US$ 54,551.16/TOPS (54,551.151 rounded up). The SPECjAppServer2002 reporter uses the total SUT cost and the TOPS to compute the metric for the report page.
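The worked example above, expressed as code; CEILING at scale 2 (cents) implements the round-up rule for US pricing:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

final class PricePerformance {
    static BigDecimal pricePerTops(BigDecimal totalPrice, BigDecimal tops) {
        // Scale 2 = cents; CEILING rounds up to the next currency unit.
        return totalPrice.divide(tops, 2, RoundingMode.CEILING);
    }

    public static void main(String[] args) {
        // US$ 5,734,417 at 105.12 TOPS -> 54551.16 (54,551.151... rounded up)
        System.out.println(pricePerTops(new BigDecimal("5734417"),
                                        new BigDecimal("105.12")));
    }
}
```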
In addition, for pricing challenge procedures see Appendix C.
The entire price of the SUT (see section 2.12) must be included, including all hardware, software, and support for a three year period.
All hardware components priced must be new and not reconditioned or previously owned, and the price must be the purchase price (no leasing). The software may use term limited pricing (i.e., software leasing), provided there are no additional conditions associated with the term limited pricing. If term limited pricing is used, the price must be for a minimum of three years. The three year support period must cover both hardware maintenance and software support.
The number of users for SPECjAppServer2002 is 8 * Ir (where 5 * Ir are Internet users and 3 * Ir are Intranet users). Any usage pricing for the above number of users should be based on the pricing policy of the company supplying the priced component.
Additional components such as operator consoles and backup devices must also be priced, if explicitly required for the operation, administration, or maintenance, of the SUT.
If software needs to be loaded from a particular device either during installation or for updates, the device must be priced.
Hardware maintenance and software support must be priced for 7 days/week, 24 hours/day coverage, either on-site, or if available as standard offering, via a central support facility.
If a central support facility is priced, then all hardware and software required to connect to the central support must be installed and functional on the SUT during the measurement run and priced.
The response time for hardware maintenance requests must not exceed 4 hours on any component whose replacement is necessary for the SUT to return to the tested configuration.
Pricing of spares in lieu of the hardware maintenance requirements is allowed if the part to be replaced can be identified as having failed by the customer within 4 hours. An additional 10% of the designated part, with a minimum of 2, must be priced. A support service for the spares which provides replacement on-site within 7 days must also be priced for the support period.
Software support requests must include problem acknowledgement within 4 hours. No additional charges will be incurred for the resolution of software defects. Problem resolution for more than 10 non-defect problems per year is permitted to incur additional charges. Software support must include all available maintenance updates over the support period.
The intent of the pricing rules is to price the tested system at the full price a customer would pay. Assumptions of other purchases made by this customer are not allowed. This is a one time, stand-alone purchase. For ease of benchmarking, the priced system may include hardware components that are different from the tested system, as long as the substituted components perform equivalently or better in the benchmark. Any substitutions must be disclosed in the price sheet. For example, disk drives with lower capacity or speed in the tested system can be replaced by faster ones in the priced configuration. However, it is not permissible to replace key components such as CPU, memory or any software.
See section 3.4.
All pricing used must be generally available (list) pricing. No form of special purpose discounts or promotional pricing is allowed.
It is permissible to use package pricing as long as this package is a standard offering. The entire package must be priced in the SUT.
Comment: The intent is to include a price that is available to any customer without the need to request a lower price.
All pricing should be in the currency unit which a customer in their country would use to pay for the system and reflect local retail pricing. Prices should be rounded up (ceiling function) to the least significant digit of the currency unit being used. For example, all U.S. pricing should be rounded up to the next cent.
All pricing sources and the effective date of the prices must be disclosed. For currently available components, the effective date is the submission date. For components not yet available, the price reported must be the initial General Availability (GA) price and the effective date is the GA date. Changes in component prices would require a resubmission (published results can not be updated).
Pricing must be guaranteed for 60 days from the later of the component's GA date or the result's publication date. In addition, revised pricing data must be provided to SPEC for the first 90 days should any components incur a price increase that alters the price/performance standings (see section 5.6.6).
To facilitate the evaluation of pricing in SPEC benchmarks, the submitter agrees to provide a current pricing disclosure on March 14, 2003 to the Open Systems Steering Committee (email info@spec.org for contact info) if the price has increased by 2% or more. Otherwise, on that date the submitter need only send email to the OSSC stating that the current pricing is still valid. If a performance critical component is no longer available, so that the pricing cannot be reconfirmed, that must be conveyed to the OSSC.
Comment: Typographical changes necessary during review would not require a resubmission.
All items supplied by a third party (i.e., not the Test Submitter) must be explicitly stated. Each third party supplier's items and prices must be listed separately.
Pricing shown in the Price Sheet must reflect the level of detail a customer would see on an itemized billing.
A Full Disclosure is required in order for results to be considered compliant with the SPECjAppServer2002 benchmark specification.
Comment 1: The intent of this disclosure is to be able to replicate the results of a submission of this benchmark given the equivalent hardware, software, and documentation.
Comment 2: In the sections below, when there is no specific reference to where the disclosure must occur, it must occur in the Submission File. Disclosures in the Archive are explicitly called out.
The term Full Disclosure refers to the information that must be provided when a benchmark result is reported.
The term Configuration Diagram refers to the picture in a common graphics format that depicts the configuration of the SUT. The Configuration Diagram is part of a Full Disclosure.
The term Full Disclosure Archive (or "Archive" for short) refers to the soft-copy archive of files that is part of a Full Disclosure.
The term Submission File refers to the ASCII file that contains the information specified in this section, to which the "result.props" file from the run must be appended. The Submission File is part of a Full Disclosure.
The term Benchmark Results Page refers to the report in HTML or ASCII format that is generated from the Submission File. The Benchmark Results Page is the format used when displaying results on the SPEC web site. The Benchmark Results Page in HTML format will provide a link to the Configuration Diagram and the Full Disclosure Archive.
A Configuration Diagram of the entire configuration (including the SUT, Supplier Emulator, and load drivers) must be provided in PNG, JPEG or GIF format. The diagram should include, but is not limited to:
The Full Disclosure Archive contains the following items:
The Archive must be in ZIP, TAR or JAR format.
All commercially available software products used must be identified in the Submission File. Settings, along with a brief description, must be provided for all customer-tunable parameters and options which have been changed from the defaults found in actual products, including but not limited to:
The number of instances of the EJB Container and/or the number of JVMs used must be disclosed in the system.sw.EJB_Container.instances section of the Submission File.
The method by which adherence to section 18.2.3 of the EJB 1.1 specification ("Argument passing semantics") is assured must be disclosed in the benchmark.argument_passing_semantics section of the Submission File.
The version number of the SPECjAppServer2002 Kit used to run the benchmark must be included in the Submission File. The version number is written to the result.props file with the configuration and result information.
The date that the J2EE Compatible Product passed (or is expected to pass) the J2EE Compatibility Test Suite (CTS) must be disclosed in the system.sw.EJB_Container.date_passed_CTS section of the Submission File.
The Orders Injection Rate used to load the database(s) must be disclosed in the benchmark.load.injection_rate section of the Submission File.
The Full Disclosure Archive must include all table definition statements and all other statements used to set-up the database. The scripts used to create the database should be included in the Full Disclosure Archive under the "Schema" sub-directory.
If the schema was changed from the reference one provided in the Kit (see section 2.4), the reason for the modifications must be disclosed in the benchmark.schema_modifications section of the Submission File.
If the Load Programs in the SPECjAppServer2002 kit were modified (see section 2.4), all such modifications must be disclosed in the benchmark.load_program_modifications section of the Submission File and the modified programs must be included in the Full Disclosure Archive.
All scripts/programs used to create any logical volumes for the database devices must be included as part of the Full Disclosure Archive. The distribution of tables and logs across all media must be explicitly depicted.
The type of persistence, whether CMP, BMP or mixed mode used by the EJB Containers must be disclosed in the benchmark.persistence_mode_used section of the Submission File. If mixed mode is used, the list of beans deployed using CMP and BMP must be enumerated.
If the SPECjAppServer2002 Reference Beans were modified (see section 2.5), a statement describing the modifications must appear in the benchmark.reference_bean_modifications section of the Submission File and the modified code must be included in the Archive.
All Deployment Descriptors used must be included in the Full Disclosure Archive under the "Deploy" sub-directory. Any vendor-specific tools, flags or properties used to perform ejbStore optimizations that are not transparent to the user must be disclosed (see section 2.5) in the system.sw.EJB_Container.tuning section of the Submission File.
The TOPS from the reproducibility run must be disclosed (see section 2.9.2) in the result.reproducibility_run.tops section of the Submission File. The entire output directory from the reproducibility run must be included in the Full Disclosure Archive in a directory named "RepeatRun".
A graph, in PNG, JPEG or GIF format, of the frequency distribution of response times for all the transactions (see section 3.2) must be included in the Full Disclosure Archive.
A graph, in PNG, JPEG or GIF format, of the workorder throughput versus elapsed time (see section 3.2) must be included in the Full Disclosure Archive.
The scripts/programs used to run the Atomicity tests and their outputs must be included in the Full Disclosure Archive in a directory named "Atomicity".
The method used to meet the isolation requirements in section 2.10.4 must be disclosed in the benchmark.isolation_requirement_info section of the Submission File.
The method used to meet the durability requirements in section 2.10.5 must be disclosed in the benchmark.durability_requirement_info section of the Submission File.
All steps used to build and deploy the SPECjAppServer2002 Reference Beans must be disclosed in a file called "deployCmds.txt" within the "Deploy" sub-directory of the Full Disclosure Archive.
If the xerces.jar package in the jars sub-directory of the SPECjAppServer2002 Kit was not used, the reason for this should be disclosed in the benchmark.other section of the Submission File. The version and source of the actual package used should also be disclosed.
If any software/hardware is used to influence the flow of network traffic beyond basic IP routing and switching, the additional software/hardware and settings (see section 2.12) must be disclosed in the benchmark.other section of the Submission File.
The input parameters to the Driver must be disclosed by including the following files used to run the benchmark in the Full Disclosure Archive:
If the Launcher package was modified, its source must be included in the Full Disclosure Archive.
The bandwidth of the network(s) used in the tested configuration must be disclosed in the benchmark.other section of the Submission File.
The protocol used by the Driver to communicate with the SUT (e.g., RMI/IIOP) must be disclosed in the system.sw.EJB_Container.protocol section of the Submission File.
The hardware and software used to perform load balancing must be disclosed in the benchmark.other section of the Submission File. If the driver systems perform any load-balancing functions as defined in section 2.8, the details of these functions must also be disclosed.
The number and types of driver systems used, along with the number and types of processors, memory and network configuration must be disclosed in the system.hw section of the Submission File.
The version of the JDK used on the Driver system(s) must be disclosed in the system.hw.notes section of the Submission File.
Any errors that appear in the Driver error logs must be explained in the notes section of the Submission File.
The method used to meet the storage requirements of section 2.12.3 must be disclosed in the benchmark.storage_requirement_info section of the Submission File.
A detailed list of hardware and software used in the priced system must be reported in the Price Sheet along with a Statement of Responsibility for those prices. Each separately orderable item must have vendor part number (or unique identifier), description, release/revision level, and either general availability status or committed delivery date. If package-pricing is used, the unique identifier of the package and a description identifying each of the components of the package must be disclosed. Pricing sources and the effective date of the prices must also be reported.
The total price of the entire configuration must be reported, including: hardware, software, and maintenance charges. Separate component pricing is recommended.
The committed delivery date for general availability (availability date) of products used in the price calculations must be reported. When the priced system includes products with different availability dates, the reported availability date for the priced system must be the date at which all components are committed to be available.
For any usage pricing, the sponsor must disclose:
Comment: Usage pricing may include, but is not limited to, the operating system, EJB server and database server software.
System pricing should include subtotals for the following components: Server Hardware, Server Software, and Network Components used.
System pricing must include line item indication where non-submitting companies' brands are used. System pricing must also include line item indication of third party pricing.
The quoted statement below needs to be added to the disclosure:
"The following submitter: [name of submitter] guarantees the accuracy of the supplied pricing data for a minimum of 60 days from [date (later of general availablity or publication dates)], and accepts the responsiblity to provide revised pricing data to SPEC during the first 90 days should the price increase by 2% or more."
Comment: Typographical changes necessary during review would not require a resubmission.
This appendix lists all of the ECtransactions that begin a new transaction and identifies the entities that are accessed in each transaction. This information can be used to implement the isolation and consistency requirements in section 2.10.4. In the case of any discrepancy between this section and the actual code in the SPECjAppServer2002 Kit, the transaction behavior in the Kit prevails.
The following grammar defines a declarative assertion language that is used to define the transactions in this section:
```
transaction  ::= transaction method-name
                 [ calls finder-list ]
                 [ reads entity-list ]
                 [ creates entity-list ]
                 [ updates entity-list ]
                 [ deletes entity-list ]
                 end transaction

method-name  ::= ejb-jar-display-name . ejb-name . method-suffix
entity-name  ::= ejb-jar-display-name . ejb-name
finder-name  ::= entity-name . method-suffix
finder-list  ::= finder-name [, finder-name ]*
entity-list  ::= entity-name [, entity-name ]*
```
transaction method-name
Specifies a session bean or entity bean method that is used to initiate a transaction. The method either has the RequiresNew transaction attribute, or it has the Required transaction attribute and is sometimes (or always) invoked by its callers with no transaction context (the first sketch following these clause definitions illustrates the distinction).
The method name may include a suffix for the purpose of differentiating overloaded methods.
calls finder-list
The calls clause specifies one or more entity bean finder methods that may be called by the transaction.
reads entity-list
The reads clause specifies one or more entities that may be read by the transaction. If an entity is not named in the reads clause, it will definitely not be read by the transaction. The reads clause corresponds to the ejbLoad entity bean callback method.
creates entity-list
The creates clause specifies one or more entities that may be created by the transaction. If an entity is not named in the creates clause, it will definitely not be created by the transaction. The creates clause corresponds to the ejbCreate entity bean methods.
updates entity-list
The updates clause specifies one or more entities that may be updated by the transaction. If an entity is not named in the updates clause, it will definitely not be updated by the transaction.
The updates clause corresponds to the ejbStore entity bean callback method. The EJB 1.1 specification requires that if ejbCreate is called in a transaction to create an entity, ejbStore must also be called by the container before the transaction completes. The updates clause is nevertheless omitted when the entity's cmp-fields are not modified outside of ejbCreate. The container may use this information to avoid unnecessary database updates, although ejbStore must still be called on the bean class (see the second sketch following these clause definitions).
deletes entity-list
The deletes clause specifies one or more entities that may be deleted by the transaction. If an entity is not named in the deletes clause, it will definitely not be deleted by the transaction. The deletes clause corresponds to the ejbRemove entity bean callback method.
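Two illustrative sketches follow. Both are in Java, and both are hedged examples: the class names, interfaces, JNDI name, and method signatures are hypothetical stand-ins, not code from the SPECjAppServer2002 Kit.

The first sketch shows the transaction-initiation semantics from the caller's side. A Driver thread carries no transaction context, so a session bean method deployed with the Required attribute begins a new container transaction when invoked, while RequiresNew would begin one regardless of the caller's context.

import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

// Hypothetical remote interfaces standing in for a Kit session bean; only
// the calling pattern matters here, not the real method signatures.
interface WorkOrderSes extends EJBObject {
    void updateWorkOrder(int id) throws RemoteException;
}

interface WorkOrderSesHome extends EJBHome {
    WorkOrderSes create() throws CreateException, RemoteException;
}

public class DriverCall {
    public static void main(String[] args) throws Exception {
        // The JNDI name is assumed; real deployments bind their own names.
        Object ref = new InitialContext().lookup("ejb/WorkOrderSes");
        WorkOrderSesHome home = (WorkOrderSesHome)
                PortableRemoteObject.narrow(ref, WorkOrderSesHome.class);

        // This thread has no transaction context, so a method deployed with
        // the Required attribute begins a new container transaction on this
        // call; RequiresNew would begin one regardless of the caller.
        home.create().updateWorkOrder(1);
    }
}

The second sketch shows where each clause of the assertion language surfaces in a CMP 1.1 entity bean. The bean class and its cmp-fields are invented for illustration; only the callback signatures, which are required by javax.ejb.EntityBean, are fixed.

import javax.ejb.CreateException;
import javax.ejb.EntityBean;
import javax.ejb.EntityContext;

// ItemBean is a hypothetical CMP 1.1 entity bean; the point of interest is
// the mapping from assertion-language clauses to container callbacks.
public class ItemBean implements EntityBean {

    public String id;     // cmp-fields are public in CMP 1.1
    public double price;

    private EntityContext ctx;

    // creates clause: the container invokes ejbCreate when a transaction
    // creates this entity, then performs the database insert itself.
    public String ejbCreate(String id, double price) throws CreateException {
        this.id = id;
        this.price = price;
        return null; // for CMP, the container derives the primary key
    }

    public void ejbPostCreate(String id, double price) {}

    // reads clause: the container invokes ejbLoad after refreshing the
    // cmp-fields from the database.
    public void ejbLoad() {}

    // updates clause: the container invokes ejbStore before writing the
    // cmp-fields back; it may skip the database update when no cmp-field
    // changed outside ejbCreate, but it must still make this call.
    public void ejbStore() {}

    // deletes clause: the container invokes ejbRemove before deleting the
    // underlying row.
    public void ejbRemove() {}

    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void setEntityContext(EntityContext ctx) { this.ctx = ctx; }
    public void unsetEntityContext() { this.ctx = null; }
}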
transaction Mfg.LargeOrderSes.findLargeOrders
    calls Mfg.LargeOrderEnt.ejbFindAll
    reads Mfg.LargeOrderEnt
end transaction

transaction Mfg.WorkOrderSes.completeWorkOrder
    calls Mfg.AssemblyEnt.ejbFindByPrimaryKey, Mfg.InventoryEnt.ejbFindByPrimaryKey,
          Mfg.WorkOrderEnt.ejbFindByPrimaryKey
    reads Mfg.AssemblyEnt, Mfg.InventoryEnt, Mfg.WorkOrderEnt
    updates Mfg.InventoryEnt, Mfg.WorkOrderEnt
end transaction

transaction Mfg.WorkOrderSes.scheduleWorkOrder__String__int__Date
    calls Mfg.AssemblyEnt.ejbFindByPrimaryKey, Mfg.BomEnt.ejbFindBomForAssembly,
          Mfg.ComponentEnt.ejbFindByPrimaryKey, Mfg.InventoryEnt.ejbFindByPrimaryKey,
          Supplier.POLineEnt.ejbFindByPO, Supplier.POLineEnt.ejbFindByPrimaryKey,
          Supplier.SComponentEnt.ejbFindByPrimaryKey, Supplier.SupplierCompEnt.ejbFindByPrimaryKey,
          Supplier.SupplierEnt.ejbFindAll
    reads Mfg.AssemblyEnt, Mfg.BomEnt, Mfg.ComponentEnt, Mfg.InventoryEnt, Mfg.WorkOrderEnt,
          Supplier.POEnt, Supplier.POLineEnt, Supplier.SComponentEnt, Supplier.SupplierCompEnt,
          Supplier.SupplierEnt
    creates Mfg.WorkOrderEnt, Supplier.POEnt, Supplier.POLineEnt
    updates Mfg.InventoryEnt, Mfg.WorkOrderEnt, Supplier.SComponentEnt
end transaction
transaction Mfg.WorkOrderSes.scheduleWorkOrder__int__int__String__int__Date
    calls Mfg.AssemblyEnt.ejbFindByPrimaryKey, Mfg.BomEnt.ejbFindBomForAssembly,
          Mfg.ComponentEnt.ejbFindByPrimaryKey, Mfg.InventoryEnt.ejbFindByPrimaryKey,
          Mfg.LargeOrderEnt.ejbFindByOrderLine, Supplier.POLineEnt.ejbFindByPO,
          Supplier.POLineEnt.ejbFindByPrimaryKey, Supplier.SComponentEnt.ejbFindByPrimaryKey,
          Supplier.SupplierCompEnt.ejbFindByPrimaryKey, Supplier.SupplierEnt.ejbFindAll
    reads Mfg.AssemblyEnt, Mfg.BomEnt, Mfg.ComponentEnt, Mfg.InventoryEnt, Mfg.LargeOrderEnt,
          Mfg.WorkOrderEnt, Supplier.POEnt, Supplier.POLineEnt, Supplier.SComponentEnt,
          Supplier.SupplierCompEnt, Supplier.SupplierEnt
    creates Mfg.WorkOrderEnt, Supplier.POEnt, Supplier.POLineEnt
    updates Mfg.InventoryEnt, Mfg.WorkOrderEnt, Supplier.SComponentEnt
    deletes Mfg.LargeOrderEnt
end transaction
transaction Mfg.WorkOrderSes.updateWorkOrder
    calls Mfg.WorkOrderEnt.ejbFindByPrimaryKey
    reads Mfg.WorkOrderEnt
    updates Mfg.WorkOrderEnt
end transaction
transaction Supplier.ReceiverSes.deliverPO
    calls Supplier.SComponentEnt.ejbFindByPrimaryKey, Supplier.POEnt.ejbFindByPrimaryKey,
          Supplier.POLineEnt.ejbFindByPrimaryKey, Mfg.ComponentEnt.ejbFindByPrimaryKey,
          Mfg.InventoryEnt.ejbFindByPrimaryKey
    reads Mfg.ComponentEnt, Mfg.InventoryEnt, Supplier.SComponentEnt, Supplier.POEnt,
          Supplier.POLineEnt
    creates Mfg.InventoryEnt
    updates Supplier.SComponentEnt, Supplier.POEnt, Supplier.POLineEnt, Mfg.InventoryEnt
end transaction
transaction Orders.CartSes.buy
    calls Corp.CustomerEnt.ejbFindByPrimaryKey, Corp.DiscountEnt.ejbFindByPrimaryKey,
          Corp.RuleEnt.ejbFindByPrimaryKey, Orders.ItemEnt.ejbFindByPrimaryKey
    reads Corp.CustomerEnt, Corp.DiscountEnt, Corp.RuleEnt, Mfg.LargeOrderEnt, Orders.ItemEnt
    creates Mfg.LargeOrderEnt, Orders.OrderEnt, Orders.OrderLineEnt
end transaction

transaction Orders.OrderCustomerSes.addCustomer
    calls Corp.RuleEnt.ejbFindByPrimaryKey, Orders.OrderCustomerEnt.ejbFindByPrimaryKey
    creates Corp.CustomerEnt, Orders.OrderCustomerEnt
end transaction

transaction Orders.OrderCustomerSes.validateCustomer
    calls Orders.OrderCustomerEnt.ejbFindByPrimaryKey
end transaction

transaction Orders.OrderSes.cancelOrder
    calls Orders.OrderEnt.ejbFindByPrimaryKey, Orders.OrderLineEnt.ejbFindByOrder
    reads Orders.OrderEnt, Orders.OrderLineEnt
    deletes Orders.OrderEnt, Orders.OrderLineEnt
end transaction

transaction Orders.OrderSes.changeOrder
    calls Corp.CustomerEnt.ejbFindByPrimaryKey, Corp.DiscountEnt.ejbFindByPrimaryKey,
          Corp.RuleEnt.ejbFindByPrimaryKey, Orders.ItemEnt.ejbFindByPrimaryKey,
          Orders.OrderEnt.ejbFindByPrimaryKey, Orders.OrderLineEnt.ejbFindByOrderAndItem
    reads Corp.CustomerEnt, Corp.DiscountEnt, Corp.RuleEnt, Orders.ItemEnt, Orders.OrderEnt,
          Orders.OrderLineEnt
    creates Orders.OrderLineEnt
    updates Orders.OrderEnt, Orders.OrderLineEnt
    deletes Orders.OrderLineEnt
end transaction

transaction Orders.OrderSes.getCustomerStatus
    calls Orders.OrderEnt.ejbFindByCustomer, Orders.OrderLineEnt.ejbFindByOrder
    reads Orders.OrderEnt, Orders.OrderLineEnt
end transaction

transaction Orders.OrderSes.getOrderStatus
    calls Orders.OrderEnt.ejbFindByPrimaryKey, Orders.OrderLineEnt.ejbFindByOrder
    reads Orders.OrderEnt, Orders.OrderLineEnt
end transaction

transaction Orders.OrderSes.newOrder
    calls Corp.CustomerEnt.ejbFindByPrimaryKey, Corp.DiscountEnt.ejbFindByPrimaryKey,
          Corp.RuleEnt.ejbFindByPrimaryKey, Orders.ItemEnt.ejbFindByPrimaryKey
    reads Corp.CustomerEnt, Corp.DiscountEnt, Corp.RuleEnt, Mfg.LargeOrderEnt, Orders.ItemEnt
    creates Mfg.LargeOrderEnt, Orders.OrderEnt, Orders.OrderLineEnt
end transaction

transaction Supplier.POEnt.ejbCreate
    creates Supplier.POEnt, Supplier.POLineEnt
end transaction

transaction Util.SequenceEnt.nextSequenceBlock
    reads Util.SequenceEnt
    updates Util.SequenceEnt
end transaction
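The read-then-update shape of this last transaction is the usual pattern for block-at-a-time key allocation: one short transaction reserves a range of identifiers by advancing the stored sequence value, and callers then dispense identifiers from the reserved block without further database access. The sketch below shows a client-side holder for such a block; the class and method names are hypothetical and are not the Kit's implementation.

// Hypothetical holder for a block of IDs reserved by a single
// nextSequenceBlock transaction (which reads the current value from
// Util.SequenceEnt and updates it to value + blockSize).
public class SequenceBlock {
    private final int end; // one past the last reserved ID
    private int next;      // next ID to hand out

    public SequenceBlock(int firstId, int blockSize) {
        this.next = firstId;
        this.end = firstId + blockSize;
    }

    // Returns the next reserved ID, or -1 when the block is exhausted and
    // another nextSequenceBlock transaction must reserve a new block.
    public synchronized int nextId() {
        return (next < end) ? next++ : -1;
    }
}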
This appendix contains examples of systems for the different SPECjAppServer2002 centralized workload categories, which are defined in section 3.5.3 of this document. This is not an all-inclusive list, but rather a sampling of the sorts of systems that fall into each category.
Configurations allowed:
Configurations not allowed:
Configurations allowed:
Configurations not allowed:
Configurations allowed:
Configurations not allowed:
The Subcommittee will use the following process for handling challenges related to pricing.
Anyone may challenge the supplied pricing information for any result that has been published on SPEC's website and for which all components are listed as generally available. Challenges must be made during the first 90 days after the later of the result's publication date and its general availability date.
The burden of proof that the supplied pricing information is inaccurate is on the member challenging the result. This proof must be documented in writing or by email and sent to the result's submitter and to the subcommittee. Note: to facilitate resolution of the challenge, the challenger may, as a courtesy, inform the submitter of the challenge before notifying the subcommittee.
The submitter is expected to acknowledge receipt of the challenge within 5 working days of the notification of the submitter and the subcommittee.
The subcommittee will review materials from the challenging and challenged parties and vote on a proposal to undertake a re-review of the challenged pricing information. If the proposal passes, the subcommittee will undertake the re-review and determine whether the original pricing is inaccurate.
Prior to the start of the re-review, the submitter may acknowledge the validity of the challenge by re-submitting with corrected pricing. The re-submission will be handled in the next available review cycle.
If the original pricing is declared inaccurate, whether by the admission of the submitter or by the decision of the subcommittee, the original result will be marked Not-Available (refer to Section 2.3.3 of the OSG Policy Document). The reason for the change will be noted on the result page, and the submitter may request that a note on the resolution be added as well.
The supplied pricing information may also be challenged during the review phase if all components are listed as generally available at that time. The process and requirements are the same as those above. A result under review will not be published until the subcommittee has resolved any open issues.
If the submitter of the result in question wishes to appeal the subcommittee's decision to the OSSC, they may do so. The OSSC may choose to employ an independent third party to verify the current pricing against the submitted pricing disclosure. The independent third party selected by the OSSC may be drawn from available Independent Public Accountants, consulting firms specializing in benchmark audits, or other parties not directly affiliated with the submitter or the submitter's competitors. The OSSC, at its discretion, may require the submitter to pay SPEC for the costs involved in the audit.