SPECjbb2005

Frequently Asked Questions

Version 1.04
Last modified: March 15, 2006


Q1:

What is SPECjbb2005?

Q2:

What are the differences between SPECjbb2000 and SPECjbb2005?

Q3:

Does SPECjbb2005 replace SPECjbb2000?

Q4:

How does SPECjbb2005 relate to SPECjvm98 and SPECjAppServer2004?

Q5:

What are the performance metrics for SPECjbb2005?

Q6:

Why are multiple instances allowed to be run as part of the SPECjbb2005 benchmark?

Q7:

Why are there two metrics for SPECjbb2005?

Q8:

Does SPECjbb2005 have calls to System.gc()?

Q9:

Why does the calculation of the metrics depend on the expected peak rather than the real peak?

Q10:

When the number of JVM instances is greater than 1, can I use affinity to associate instances with nodes or processors on the system?

Q11:

Where can I find published results for SPECjbb2005?

Q12:

Who developed SPECjbb2005?

Q13:

How do I obtain the SPECjbb2005 benchmark?

Q14:

How much does the SPECjbb2005 benchmark cost?

Q15:

How can I publish SPECjbb2005 results?

Q16:

How much does it cost to publish results? 

Q17:

Where do I find answers to questions about running the benchmark?

Q18:

Where can I go for more information?

Q19:

Can I compare SPECjbb2005 results with SPECjbb2000 results?

Q20:

Can I compare SPECjbb2005 results with SPECjAppServer2004 or SPECjvm98 results?

Q21:

Can I compare SPECjbb2005 results with TPC-C results?

Q22:

Can I compare SPECjbb2005 results to results from other SPEC benchmarks?

Q23:

Do you permit benchmark results to be estimated or extrapolated from existing results?

Q24:

What does SPECjbb2005 test?

Q25:

What are the significant influences on the performance of the SPECjbb2005 benchmark?

Q26:

What is the benchmark workload?

Q27:

Can I use SPECjbb2005 to determine the size of the server I need?

Q28:

What hardware is required to run the benchmark?

Q29:

What is the minimum configuration necessary to test this benchmark?

Q30:

What software is required to run the benchmark?

Q31:

How many different HW, OS, and JVM configurations have you tested?

Q32:

Do you provide source code for the benchmark?

Q33:

How scalable is the benchmark?

Q34:

Can I report with vendor A hardware, vendor B OS, and vendor C JRE?

Q35:

Can I report results for public domain software?

Q36:

Are the results independently audited?

Q37:

Can I announce my results without a review by the SPEC Java subcommittee?



Q1:

What is SPECjbb2005?

A1:

SPECjbb2005 is an industry-standard benchmark designed to measure the server-side performance of Java runtime environments. It emulates a 3-tier system, the most common type of server-side Java application today. Business logic and object manipulation, the work of the middle tier, predominate; clients are replaced by user threads, and database storage by Java collections. The benchmark steps through increasing amounts of work, providing a graphical view of scalability. For further information see the SPECjbb2005 Design Document.
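
As a rough sketch of this emulation (hypothetical names; this is not the benchmark's actual driver code), each warehouse gets one user thread that invokes the business logic directly, and the run steps through increasing warehouse counts:

    public class StepLoadSketch {

        public static void main(String[] args) throws InterruptedException {
            final int transactionsPerUser = 1000; // arbitrary for this sketch
            int maxWarehouses = 4;
            for (int warehouses = 1; warehouses <= maxWarehouses; warehouses++) {
                Thread[] users = new Thread[warehouses];
                for (int i = 0; i < warehouses; i++) {
                    users[i] = new Thread(new Runnable() {
                        public void run() {
                            for (int t = 0; t < transactionsPerUser; t++) {
                                doBusinessLogic(); // stand-in for the transaction mix
                            }
                        }
                    });
                    users[i].start();
                }
                for (Thread user : users) {
                    user.join();
                }
                // A real run would record the throughput measured at this
                // warehouse count, producing the scalability curve.
            }
        }

        static void doBusinessLogic() {
            // Placeholder for in-memory order-processing work.
        }
    }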


Q2:

What are the differences between SPECjbb2000 and SPECjbb2005?

A2:

Several changes were made to SPECjbb2000 to create SPECjbb2005:

  • The database is now modeled not as a BTree structure implemented in code in the benchmark itself, but as structures implemented as HashMaps, or as TreeMaps in cases where some operations on the table require sorting. The intention is for the benchmark to reflect the practice of Java developers of using libraries where they provide appropriate functionality, rather than coding implementations of their own.

  • SPECjbb2005 now includes no System.gc() calls in the main part of the benchmark. The intention is to reflect the behavior of long-running applications that are not interrupted by periodic System.gc() calls.

  • In order to better match current application characteristics, the handling of all financial data and calculations was changed from float to BigDecimal. This matches current industry practice, and ensures that the financial amounts and calculations have the correct decimal representation and rounding that is expected, and sometimes legally mandated, for currency calculations.

  • The code was refactored in a number of places to better reflect object-oriented styles of programming.

  • The current Java level is now Java 5.0, and the benchmark includes several features from that language level: several Collection data structures have been made generic, the source code contains several uses of auto-boxing, and there are Enumeration types in the code.

  • The transaction logging is now done by building and writing DOM objects using the JAXP XML functionality of Java 5.0.

  • The benchmark may also deploy several instances of the Java Runtime Environment (JRE), each independently handling the transaction load on its own data tables.
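
To illustrate the change from float to BigDecimal, here is a minimal, self-contained sketch (illustrative only, not code taken from the benchmark): binary floating point cannot represent most decimal fractions exactly, whereas BigDecimal keeps exact decimal values and makes rounding explicit.

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class CurrencyExample {
        public static void main(String[] args) {
            // With float, repeatedly adding 0.10 accumulates binary
            // rounding error, so the total drifts away from 100.00.
            float floatTotal = 0.0f;
            for (int i = 0; i < 1000; i++) {
                floatTotal += 0.10f;
            }
            System.out.println("float total:      " + floatTotal);

            // With BigDecimal, decimal amounts are exact and rounding is
            // explicit, as expected (and sometimes legally mandated) for
            // currency calculations.
            BigDecimal total = BigDecimal.ZERO;
            BigDecimal dime = new BigDecimal("0.10");
            for (int i = 0; i < 1000; i++) {
                total = total.add(dime);
            }
            System.out.println("BigDecimal total: "
                    + total.setScale(2, RoundingMode.HALF_UP));
        }
    }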


Q3:

Does SPECjbb2005 replace SPECjbb2000?

A3:

Yes. SPEC is providing a six month transition period from the date of the SPECjbb2005 release. During this period, SPEC will accept, review and publish results from both benchmark versions. After this period, results from SPECjbb2000 will no longer be accepted by SPEC for publication. 


Q4:

How does SPECjbb2005 relate to SPECjvm98 and SPECjAppServer2004?

A4:

SPECjvm98 allows users to evaluate performance for the combined hardware and software aspects of the Java Virtual Machine (JVM) client platform. On the software side, it measures the efficiency of the JVM, the just-in-time (JIT) compiler, and operating system implementations. On the hardware side, it includes CPU (integer and floating-point), cache, memory, and other platform-specific performance. SPECjAppServer2004 is designed to measure the performance of J2EE 1.3 application servers. Like SPECjbb2000, SPECjbb2005 is a benchmark for evaluating the performance of servers running typical Java business applications. SPECjbb2005 represents an order-processing application for a wholesale supplier. The benchmark can be used to evaluate the performance of hardware and software aspects of Java Virtual Machine (JVM) servers.


Q5:

What are the performance metrics for SPECjbb2005?

A5:

The performance metrics for SPECjbb2005 are SPECjbb2005 bops (business operations per second), obtained by averaging the total transaction rate in a SPECjbb2005 run from the expected peak number of warehouses to twice that number (for details, consult RunRules.html), and SPECjbb2005 bops/JVM, obtained by dividing the SPECjbb2005 bops metric by the number of JVMs deployed in the run.
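
As a rough sketch of this arithmetic (hypothetical method and variable names; the authoritative definition is in RunRules.html), assume scores[w] holds the measured throughput at w warehouses:

    public class MetricSketch {

        // Average the throughput from the expected peak warehouse count up
        // to twice that count (assumed inclusive here; consult
        // RunRules.html for the exact definition).
        static double specjbb2005Bops(double[] scores, int expectedPeak) {
            double sum = 0.0;
            int count = 0;
            for (int w = expectedPeak; w <= 2 * expectedPeak; w++) {
                sum += scores[w];
                count++;
            }
            return sum / count;
        }

        // The per-JVM metric divides the overall metric by the number of
        // JVM instances deployed in the run.
        static double specjbb2005BopsPerJvm(double overallBops, int jvmCount) {
            return overallBops / jvmCount;
        }
    }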


Q6:

Why are multiple instances allowed to be run as part of the SPECjbb2005 benchmark?

A6:

Running multiple JVMs on a single server system is a common deployment model, and SPECjbb2005 now supports it.


Q7:

Why are there two metrics for SPECjbb2005?

A7:

The SPECjbb2005 bops metric measures the overall throughput achieved by all the JVMs in a benchmark run. SPECjbb2005 bops/JVM reflects the contribution of a single JVM to the overall metric, and is thus a measure of the performance and scaling of a single JVM.


Q8:

Does SPECjbb2005 have calls to System.gc()?

A8:

No. Java applications that run in a steady state do not typically include System.gc() calls and SPECjbb2005 reflects this behavior. The garbage collection strategy of the JVM must be suitable for a long-running application with no explicitly inserted garbage collection pauses.


Q9:

Why does the calculation of the metrics depend on the expected peak rather than the real peak?

A9:

As systems are developed on which the expected peak gets larger and larger, and especially with no System.gc() calls outside the measurement periods, there is more variability in the number of warehouses at which the peak of a SPECjbb2005 run may occur on any given system. To allow greater predictability, the SPECjbb2005 bops metric is now calculated using the expected peak.


Q10:

When the number of JVM instances is greater than 1, can I use affinity to associate instances with nodes or processors on the system?

A10:

Yes. Appropriate editing of the run_multi script will allow such adjustments to be made.


Q11:

Where can I find published results for SPECjbb2005?

A11:

SPECjbb2005 results are available on SPEC's web site: http://www.spec.org/jbb2005/results/


Q12:

Who developed SPECjbb2005?

A12:

SPECjbb2005 was developed by the Java subcommittee's core design team. BEA, Darmstadt University of Technology, HP, IBM, Intel, and Sun participated in the design, implementation, and testing phases of the product.


Q13:

How do I obtain the SPECjbb2005 benchmark?

A13:

To place an order, use the on-line order form or contact SPEC at http://www.spec.org/spec/contact.html.


Q14:

How much does the SPECjbb2005 benchmark cost?

A14:

Current pricing for all the SPEC benchmarks is available from the SPEC on-line order form. SPEC members receive the benchmark at no extra charge.


Q15:

How can I publish SPECjbb2005 results?

A15:

You need a SPECjbb2005 license to publish results. For more information about submitting results, please contact SPEC.


Q16:

How much does it cost to publish results?

A16:

Contact SPEC at http://www.spec.org/spec/contact.html to learn the current cost to publish SPECjbb2005 results. SPEC members can submit results free of charge.


Q17:

Where do I find answers to questions about running the benchmark?

A17:

The procedures for installing and running the benchmark are contained in UserGuide.html.


Q18:

Where can I go for more information?

A18:

SPECjbb2005 documentation consists mainly of four documents: the User's Guide, the Design Document, the Run and Reporting Rules, and this FAQ. These documents can be found in the benchmark kit or on the SPECjbb2005 Web site: http://www.spec.org/jbb2005/.


Q19:

Can I compare SPECjbb2005 results with SPECjbb2000 results?

A19:

No.  The benchmarks have too many differences to be comparable.


Q20:

Can I compare SPECjbb2005 results with SPECjAppServer2004 or SPECjvm98 results?

A20:

No. The benchmarks are not comparable.


Q21:

Can I compare SPECjbb2005 results with TPC-C results?

A21:

No. SPECjbb2005 uses totally different data-set sizes and workload mixes, has a different set of run and reporting rules, a different measure of throughput, and different metrics.


Q22:

Can I compare SPECjbb2005 results to results from other SPEC benchmarks?

A22:

No. There is no logical way to translate results from one benchmark to another.


Q23:

Do you permit benchmark results to be estimated or extrapolated from existing results?

A23:

No. This is an implementation benchmark, and all published results have been achieved by the submitter and reviewed by the committee. Because of the complexity of the benchmark, results cannot be accurately extrapolated.


Q24:

What does SPECjbb2005 test?

A24:

SPECjbb2005 is designed to test the performance of a representative Java server application, including all aspects of the application environment, e.g., hardware, operating system, and Java runtime environment. There is no measured network or disk I/O.


Q25:

What are the significant influences on the performance of the SPECjbb2005 benchmark?

A25:

The most significant influences on the performance of the benchmark are:

  • the number of processors and their characteristics

  • the memory subsystem

  • the operating system capabilities

  • the Java runtime environment

  • address space support (32-bit vs. 64-bit)


Q26:

What is the benchmark workload?

A26:

The benchmark simulates a wholesaling operation, receiving orders, managing deliveries, and generating reports of various sorts; the database is replaced by in-memory Java Collection objects, and transaction logging is implemented using XML.
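
For a concrete picture of how the database is replaced by in-memory Collections, here is a minimal sketch (hypothetical classes and fields, not the benchmark's actual object model); a TreeMap stands in for a table whose operations need sorted order, while a HashMap would serve for tables that do not:

    import java.util.Map;
    import java.util.TreeMap;

    public class WarehouseSketch {
        // Orders are kept sorted by order ID, standing in for a database
        // table with a sorted index.
        private final Map<Integer, Order> orderTable =
                new TreeMap<Integer, Order>();

        public void receiveOrder(Order order) {
            orderTable.put(order.getId(), order); // an "INSERT" becomes a map put
        }

        public Order lookupOrder(int orderId) {
            return orderTable.get(orderId);       // a "SELECT" becomes a map get
        }
    }

    class Order {
        private final int id;
        Order(int id) { this.id = id; }
        int getId() { return id; }
    }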


Q27:

Can I use SPECjbb2005 to determine the size of the server I need?

A27:

SPECjbb2005 should not be used to size a Java server configuration, because it is based on a specific workload. There are numerous assumptions made about the workload, which might or might not apply to other user applications. Also, all the operations that would be database operations are memory operations in SPECjbb2005.  SPECjbb2005 is a tool that provides a level playing field for comparing JVM products in a server environment.


Q28:

What hardware is required to run the benchmark?

A28:

A single system with a shared address space is required to run SPECjbb2005.


Q29:

What is the minimum configuration necessary to test this benchmark?

A29:

This benchmark has been run with up to 8 warehouses on a laptop with a 1.7 GHz Pentium M processor and 1 GB of memory, using a 512 MB heap.


Q30:

What software is required to run the benchmark?

A30:

SPECjbb2005 requires only the operating system and a Java Virtual Machine (JVM) supporting J2SE 5.0 features.


Q31:

How many different HW, OS, and JVM configurations have you tested?

A31:

All major HW systems, operating systems and JVMs have been tested.


Q32:

Do you provide source code for the benchmark?

A32:

Yes, but you are required to run the files provided with the benchmark if you are publishing results. Modifying the source code and recompiling are not allowed. Specific items (the load program, for example) can be modified to start up the JVMs. The areas where you are allowed to make changes are listed in RunRules.html. Any permitted changes must be disclosed in the submission file when submitting results.


Q33:

How scalable is the benchmark?

A33:

The application code of the benchmark was written as a highly scalable parallel application. How well it scales in a particular configuration depends largely on the capabilities of the underlying hardware and software components.


Q34:

Can I report with vendor A hardware, vendor B OS, and vendor C JRE?

A34:

The SPECjbb2005 run rules do not preclude third-party submission of benchmark results, but result submitters must abide by the licensing restrictions of all the products used in the benchmark; SPEC is not responsible for vendor (hardware or software) licensing issues. Many products include a restriction on publishing benchmark results without the express written permission of the vendor.


Q35:

Can I report results for public domain software?

A35:

Yes, as long as the product satisfies the requirements in RunRules.html.


Q36:

Are the results independently audited?

A36:

No, but they are subject to committee review. 


Q37:

Can I announce my results without a review by the SPEC Java subcommittee?

A37:

Yes, unless the input.expected_peak_warehouse property has been overridden in the SPECjbb.props file, in which case the result must be submitted to the SPEC Java Subcommittee before any public announcement of the result. In the case of an announced result that has not been reviewed, the full disclosure report for the results is subject to review by the subcommittee and must be made available on request.  However, in order to publish the results on the SPECjbb2005 site, they must be submitted to the SPEC Java subcommittee for review. See the Run Rules for details.




Java and J2SE are trademarks of Sun Microsystems, Inc.

TPC-C is a trademark of the Transaction Processing Performance Council.


Copyright (c) 2005 Standard Performance Evaluation Corporation