SPECjAppServer2004
Frequently Asked Questions

Version 1.02
Last modified: March 7, 2006


Q1: What is SPECjAppServer2004?
Q2: You just released SPECjAppServer2001 and SPECjAppServer2002 last year. Why are you releasing another SPECjAppServer benchmark so soon?
Q3: Historically, SPEC creates a new version of a benchmark every 3 to 4 years, providing a large number of published results to compare. By releasing new SPECjAppServer benchmarks so frequently, you are making it difficult to do trend studies. Can you tell us the shelf life of this benchmark?
Q4: How is SPECjAppServer2004 different from SPECjAppServer2002?
Q5: Does this benchmark replace SPECjAppServer2001 and SPECjAppServer2002?
Q6: Does this benchmark make SPECjvm98 or SPECjbb2000 obsolete?
Q7: What is the performance metric for SPECjAppServer2004?
Q8: Where can I find published results for SPECjAppServer2004?
Q9: Who developed SPECjAppServer2004?


Q10: How do I obtain the SPECjAppServer2004 benchmark?
Q11: How much does the SPECjAppServer2004 benchmark cost?
Q12: How can I publish SPECjAppServer2004 results?
Q13: How much does it cost to publish results?
Q14: Where do I find answers to questions about running the benchmark?
Q15: Where can I go for more information?


Q16: SPECjAppServer2001 and SPECjAppServer2002 both had a price/performance metric. Why doesn't SPECjAppServer2004 have one?
Q17: Although there is no price/performance metric, you provide a BOM for reproducing results. Can I create my own price/performance metric and report it alongside SPEC's published results?
Q18: Can I compare SPECjAppServer2004 results with SPECjAppServer2001 or SPECjAppServer2002 results?
Q19: Can I compare SPECjAppServer2004 results with TPC-C results or TPC-W results?
Q20: Can I compare SPECjAppServer2004 results to results from other SPEC benchmarks?
Q21: Can I compare SPECjAppServer2004 results in different categories?
Q22: Do you permit benchmark results to be estimated or extrapolated from existing results?


Q23: What does SPECjAppServer2004 test?
Q24: What are the significant influences on the performance of the SPECjAppServer2004 benchmark?
Q25: Does this benchmark aim to stress the J2EE application server or the database server?
Q26: What is the benchmark workload?
Q27: Can I use SPECjAppServer2004 to determine the size of the server I need?
Q28: What hardware is required to run the benchmark?
Q29: What is the minimum configuration necessary to test this benchmark?
Q30: What software is required to run the benchmark?
Q31: How many different OS, application server and DB configurations have you tested with medium to large configurations?
Q32: Do you provide source code for the benchmark?


Q33: Is there a web layer in the SPECjAppServer2004 benchmark?
Q34: Why didn't you address SSL (secure socket layer) in this benchmark?
Q35: Can I use J2EE 1.4 products to run this benchmark?
Q36: Why do you insist on J2EE products with CTS certification? Do you and/or any certifying body validate this?
Q37: Can I report results on a large partitioned system?
Q38: Is the benchmark cluster-scalable?
Q39: How scalable is the benchmark?
Q40: How well does the benchmark scale in both scale-up and scale-out configurations?
Q41: Can I report with vendor A hardware, vendor B J2EE Application Server, and vendor C database software?
Q42: Can I use Microsoft SQL Server for the database?
Q43: Can I report results for public domain software?
Q44: Are the results independently audited?
Q45: Can I announce my results before they are reviewed by the SPEC Java subcommittee?


Q46: How realistic is the DB size for medium and large configurations?
Q47: Can you describe the DB contents? Do you have jpegs or gifs of cars, or any dynamic content such as pop-ups or promotional items?
Q48: If the size of the DB is very small, almost all of it can be cached. Is this realistic?
Q49: What is typically the ratio of read vs. write/update operations on the DB?
Q50: Why didn't you select several DB sizes, like those in TPC-H and TPC-W?
Q51: In this benchmark, the size of the DB is a step function of the IR. This makes it difficult to compare beyond each step -- between configurations reporting with IR=50 and IR=65, for example, as each of them has a different-sized database. Wouldn't it be fairer to compare results against the same-sized DB?
Q52: I have heard that DB performance had a significant influence on previous SPECjAppServer benchmarks. What have you done to reduce this influence?
Q53: Assuming a similar hardware configuration, what would be a typical ratio of application server CPUs to DB server CPUs?


Q54: Are results sensitive to components outside of the SUT -- e.g., client driver machines? If they are, how can I report optimal performance for a) a smaller number of more powerful driver machines or b) a larger number of less powerful driver machines?
Q55: This is an end-to-end solution benchmark. How can I determine where the bottlenecks are? Can you provide a profile or some guidance on tuning issues?
Q56: Is it realistic to use a very large configuration that would eliminate typical garbage collection? How much memory is required to eliminate GC for IR=100, IR=500, and IR=1000?



Q1: What is SPECjAppServer2004?
A1: SPECjAppServer2004 is an industry-standard benchmark designed to measure the performance of application servers conforming to the J2EE 1.3 or later specifications.

Q2: You just released SPECjAppServer2001 and SPECjAppServer2002 last year. Why are you releasing another SPECjAppServer benchmark so soon?
A2: The two previous benchmarks (SPECjAppServer2001 and SPECjAppServer2002) were essentially repackaged versions of the ECperf benchmark, which was designed to meet the J2EE 1.2 specification. The design and layout were left basically unchanged in order to get the benchmarks released quickly. SPECjAppServer2004 is an enhanced version of the benchmark that includes a modified workload and more J2EE 1.3 standard capabilities.

Q3: Historically, SPEC creates a new version of a benchmark every 3 to 4 years, providing a large number of published results to compare. By releasing new SPECjAppServer benchmarks so frequently, you are making it difficult to do trend studies. Can you tell us the shelf life of this benchmark?
A3: SPEC intends to keep SPECjAppServer2004 for as long as it can before developing a new benchmark, but it also needs to move the benchmark along as new standards and technologies evolve and old standards and technologies become obsolete. The exact shelf life is not predictable and depends largely on the evolution of the J2EE platform.

Q4: How is SPECjAppServer2004 different from SPECjAppServer2002?
A4: In both SPECjAppServer2001 and SPECjAppServer2002, the load drivers access the application via a direct connection to the EJBs. In the SPECjAppServer2004 benchmark, the load drivers access the application through the web layer (for the dealer domain) and the EJBs (for the manufacturing domain) to stress more of the capabilities of the J2EE application servers. In addition, SPECjAppServer2004 adds extensive use of the JMS and MDB infrastructure.

Q5: Does this benchmark replace SPECjAppServer2001 and SPECjAppServer2002?
A5: Yes. SPEC is providing a six-month transition period from the date of the SPECjAppServer2004 release. During this period, SPEC will accept, review and publish results from all three benchmark versions. After this period, results from SPECjAppServer2001 and SPECjAppServer2002 will no longer be accepted by SPEC for publication.

Q6: Does this benchmark make SPECjvm98 or SPECjbb2000 obsolete?
A6: No. SPECjvm98 is a client JVM benchmark. SPECjbb2000 is a server JVM benchmark. SPECjAppServer2004 is a J2EE application server benchmark.

Q7: What is the performance metric for SPECjAppServer2004?
A7: The performance metric is jAppServer Operations Per Second ("SPECjAppServer2004 JOPS"). This is calculated by adding the metrics of the dealership management application in the dealer domain and the manufacturing application in the manufacturing domain as:
SPECjAppServer2004 JOPS = Dealer Transactions/sec + Workorders/sec
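
For illustration only (these throughput numbers are hypothetical, not taken from any published result): a compliant run that sustains 80.00 Dealer Transactions/sec and 20.00 Workorders/sec would be reported as 80.00 + 20.00 = 100.00 SPECjAppServer2004 JOPS.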

Q8: Where can I find published results for SPECjAppServer2004?
A8: SPECjAppServer2004 results are available on SPEC's web site: http://www.spec.org/.

Q9: Who developed SPECjAppServer2004?
A9: SPECjAppServer2004 was developed by the Java subcommittee's core design team. BEA, Borland, Darmstadt University of Technology, HP, IBM, Intel, Oracle, Pramati, Sun and Sybase participated in the design, implementation and testing phases of the product. SPECjAppServer2004 is not a refresh of the older SPECjAppServer (2001, 2002) benchmarks. In addition to the EJB tier exercised in the older SPECjAppServer benchmarks, SPECjAppServer2004 also extensively exercises the web tier and the messaging infrastructure.

Q10: How do I obtain the SPECjAppServer2004 benchmark?
A10: To place an order, use the on-line order form or contact SPEC at http://www.spec.org/spec/contact.html.

Q11: How much does the SPECjAppServer2004 benchmark cost?
A11: Current pricing for all the SPEC benchmarks is available from the SPEC on-line order form. SPEC members receive the benchmark at no extra charge.

Q12: How can I publish SPECjAppServer2004 results?
A12: You need to get a SPECjAppServer2004 license in order to publish results. All results are subject to a review by SPEC prior to publication.

For more information about submitting results, please contact SPEC.

Q13: How much does it cost to publish results?
A13: Contact SPEC at http://www.spec.org/spec/contact.html to learn the current cost to publish SPECjAppServer2004 results. SPEC members can submit results free of charge.

Q14: Where do I find answers to questions about running the benchmark?
A14: The procedures for installing and running the benchmark are contained in the SPECjAppServer2004 User's Guide, which is included in the product kit and is also available from the SPEC web site.

Q15: Where can I go for more information?
A15: SPECjAppServer2004 documentation consists mainly of four documents: the User's Guide, the Design Document, the Run and Reporting Rules, and this FAQ. The documents can be found in the benchmark kit or on SPEC's Web site: http://www.spec.org/.

Q16: SPECjAppServer2001 and SPECjAppServer2002 both had a price/performance metric. Why doesn't SPECjAppServer2004 have one?
A16: SPECjAppServer2001 and SPECjAppServer2002 were the first benchmarks released by SPEC that contained a price/performance metric. They were released for a year with the price/performance metric as an experiment so that SPEC could determine whether the benefit of this metric was worth the costs involved. The SPEC OSSC (Open Systems Steering Committee) reviewed the arguments for and against the price/performance metric and voted to remove it from new benchmarks.

Q17: Although there is no price/performance metric, you provide a BOM for reproducing results. Can I create my own price/performance metric and report it alongside SPEC's published results?
A17: SPEC does not endorse any price/performance metric for the SPECjAppServer2004 benchmark. Whether vendors or other parties can use the performance data to establish and publish their own price/performance information is beyond the scope and jurisdiction of SPEC. Note that the benchmark run rules do not prohibit the use of $/"SPECjAppServer2004 JOPS" calculated from pricing obtained using the BOM.

Q18: Can I compare SPECjAppServer2004 results with SPECjAppServer2001 or SPECjAppServer2002 results?
A18: No. The benchmarks are not comparable because the workload was changed in SPECjAppServer2004 to test more of the J2EE 1.3 capabilities of the J2EE application servers.

Q19: Can I compare SPECjAppServer2004 results with TPC-C results or TPC-W results?
A19: No. SPECjAppServer2004 uses totally different data-set sizes and workload mixes, has a different set of run and reporting rules, a different measure of throughput, and different metrics.

Q20: Can I compare SPECjAppServer2004 results to results from other SPEC benchmarks?
A20: No. There is no logical way to translate results from one benchmark to another.

Q21: Can I compare SPECjAppServer2004 results in different categories?
A21: No. Results between standard and distributed categories of SPECjAppServer2004 cannot be compared; any public claims that attempt to compare categories will be considered a violation of SPEC fair use guidelines.

Q22: Do you permit benchmark results to be estimated or extrapolated from existing results?
A22: No. This is an implementation benchmark and all the published results have been achieved by the submitter and reviewed by the committee. Extrapolations of results cannot be accurately achieved due to the complexity of the benchmark.

Q23: What does SPECjAppServer2004 test?
A23: SPECjAppServer2004 is designed to test the performance of a representative J2EE application and each of the components that make up the application environment, e.g., hardware, application server, JVM, and database.

See section 1.1 of the SPECjAppServer2004 Design Document for more information.

Q24: What are the significant influences on the performance of the SPECjAppServer2004 benchmark?
A24: The most significant influences on the performance of the benchmark are:
  • the hardware configuration
  • the J2EE application server software
  • the JVM software
  • the database software
  • JDBC drivers
  • network performance

Q25: Does this benchmark aim to stress the J2EE application server or the database server?
A25: This benchmark was designed to stress the J2EE application server. But, since this is a solutions-based benchmark, other components (such as the database server) are stressed as well.

Q26: What is the benchmark workload?
A26: The benchmark emulates an automobile dealership, manufacturing, supply chain management (SCM) and order/inventory system. For additional details see the SPECjAppServer2004 Design Document.

Q27: Can I use SPECjAppServer2004 to determine the size of the server I need?
A27: SPECjAppServer2004 should not be used to size a J2EE 1.3 application server configuration, because it is based on a specific workload. There are numerous assumptions made about the workload, which might or might not apply to other user applications. SPECjAppServer2004 is a tool that provides a level playing field for comparing J2EE 1.3-compatible application server products.

Q28: What hardware is required to run the benchmark?
A28: In addition to the hardware for the system under test (SUT), one or more client machines are required, as well as the network equipment to connect the clients to the SUT. The number and size of client machines required by the benchmark will depend on the injection rate to be applied to the workload.

Q29: What is the minimum configuration necessary to test this benchmark?
A29: A SPEC member has run the benchmark on a Pentium 4 1.6GHz laptop system with 512MB of RAM and a 30GB hard drive. The benchmark completed successfully with an injection rate of 2. This is not a valid configuration that you can use to report results, however, as it does not meet the durability requirements of the benchmark.

Q30: What software is required to run the benchmark?
A30: In addition to the operating system and the Java Virtual Machine (JVM), SPECjAppServer2004 requires a J2EE 1.3-compatible application server, a relational database server, and a JDBC driver.

Q31: How many different OS, application server and DB configurations have you tested with medium to large configurations?
A31: All major operating systems, application servers, and databases have been tested.

Q32: Do you provide source code for the benchmark?
A32: Yes, but you are required to run the files provided with the benchmark if you are publishing results. As a general rule, modifying the source code is not allowed. Specific items (the load program, for example) can be modified to port the application to your environment. Areas where you are allowed to make changes are listed in the SPECjAppServer2004 Run and Reporting Rules. Any changes made must be disclosed in the submission file when submitting results.

Q33: Is there a web layer in the SPECjAppServer2004 benchmark?
A33: Yes. The dealer domain is accessed through the web layer by the load driver when running the benchmark.

Q34: Why didn't you address SSL (secure socket layer) in this benchmark?
A34: SPECjAppServer2004 focuses on the major services provided by the J2EE 1.3 platform that are employed in today's applications. SSL is addressed separately in the SPECweb99_SSL benchmark.

Q35: Can I use J2EE 1.4 products to run this benchmark?
A35: Yes. Any product conforming to the J2EE 1.3 or later specifications can be used to run this benchmark.

Q36: Why do you insist on J2EE products with CTS certification? Do you and/or any certifying body validate this?
A36: CTS certification ensures that the application server being tested is a J2EE technology-based application server and not a benchmark-special application server that is crafted specifically for SPECjAppServer2004. The CTS certification is validated by Sun Microsystems, Inc.

Q37: Can I report results on a large partitioned system?
A37: Yes.

Q38: Is the benchmark cluster-scalable?
A38: Yes.

Q39: How scalable is the benchmark?
A39: In our initial tests we have seen good scalability with three 4-CPU systems (two systems for the J2EE application server and one system for the database server). SPEC did not explicitly restrict scalability in the benchmark.

Q40: How well does the benchmark scale in both scale-up and scale-out configurations?
A40: SPECjAppServer2004 has been designed and tested with both scale-up and scale-out configurations. The design of the benchmark does not limit the scaling in either way. How well it scales in a particular configuration depends largely on the capabilities of the underlying hardware and software components.

Q41: Can I report with vendor A hardware, vendor B J2EE Application Server, and vendor C database software?
A41: The SPECjAppServer2004 Run and Reporting Rules do not preclude third-party submission of benchmark results, but result submitters must abide by the licensing restrictions of all the products used in the benchmark; SPEC is not responsible for vendor (hardware or software) licensing issues. Many products include a restriction on publishing benchmark results without the express written permission of the vendor.

Q42: Can I use Microsoft SQL Server for the database?
A42: Yes. You can use any relational database that is accessible by JDBC and satisfies the SPECjAppServer2004 Run and Reporting Rules.
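
As an illustration of what "accessible by JDBC" means in practice, the following minimal Java sketch connects to a database through a generic JDBC driver and prints the product name and version. The driver class, connection URL, user and password shown are hypothetical placeholders and are not part of the benchmark kit; substitute the values documented by your database vendor.

  import java.sql.Connection;
  import java.sql.DatabaseMetaData;
  import java.sql.DriverManager;

  public class JdbcCheck {
      public static void main(String[] args) throws Exception {
          // Placeholder driver class and connection details -- replace with
          // the values supplied by your database vendor.
          Class.forName("com.example.jdbc.Driver");
          Connection con = DriverManager.getConnection(
                  "jdbc:example://dbhost:1234/specdb", "user", "password");
          try {
              // Report which RDBMS we actually connected to.
              DatabaseMetaData md = con.getMetaData();
              System.out.println("Connected to " + md.getDatabaseProductName()
                      + " " + md.getDatabaseProductVersion());
          } finally {
              con.close();
          }
      }
  }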

Q43: Can I report results for public domain software?
A43: Yes, as long as the product satisfies the SPECjAppServer2004 Run and Reporting Rules.

Q44: Are the results independently audited?
A44: No, but they are subject to committee review prior to publication.

Q45: Can I announce my results before they are reviewed by the SPEC Java subcommittee?
A45: No.

Q46: How realistic is the DB size for medium and large configurations?
A46: The following table shows the approximate raw data size used to load the database for different benchmark injection rates:

IR      Approximate raw data size
100     430 MB
500     2.1 GB
1000    4.2 GB


Actual storage space consumed by the RDBMS and all the supporting structures (e.g., indices) is far higher, however. It is not unreasonable, for example, for the database to consume 5GB of disk space to support runs at IR=100. There are a large number of factors -- both RDBMS- and configuration-dependent -- that influence the actual disk space required.


Q47: Can you describe the DB contents? Do you have jpegs or gifs of cars, or any dynamic content such as pop-ups or promotional items?
A47: The DB comprises text and numeric data. We do not include jpegs or gifs, as such images are better served as static web content than stored in the DB. We do not include dynamic content, as this represents web content and is usually not part of general DB usage. The client-side processing of such content is not measured in SPECjAppServer2004.

Q48: If the size of the DB is very small, almost all of it can be cached. Is this realistic?
A48: We have significantly increased the database size in SPECjAppServer2004. While the database is still relatively small, the chances of caching it entirely in memory have been significantly reduced. Since SPECjAppServer2004 focuses on evaluating application server performance, a small but reasonably sized database seems far more appropriate than database sizes equivalent to the ones used in pure database benchmarks.

Q49: What is typically the ratio of read vs. write/update operations on the DB?
A49: An exact answer to this question is not possible, because it depends on several factors, including the injection rate and the application server and database products being used. Lab measurements with a specific application and database server at an injection rate of 80 have shown a database read vs. write/update ratio of approximately 4. Your mileage may vary.
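
Put differently, under those specific measurement assumptions a ratio of approximately 4 means roughly four read operations for every write/update operation, or on the order of 80% reads to 20% writes/updates.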

Q50: Why didn't you select several DB sizes, like those in TPC-H and TPC-W?
A50: The size of the database data scales stepwise, corresponding to the injection rate for the benchmark. Supporting multiple scaling factors for database loading would effectively create additional benchmark categories. Since we are trying to measure application server performance, it is best to keep a single, uniform database scaling rule for all submissions.

Q51: In this benchmark, the size of the DB is a step function of the IR. This makes it difficult to compare beyond each step -- between configurations reporting with IR=50 and IR=65, for example, as each of them has a different-sized database. Wouldn't it be fairer to compare results against the same-sized DB?
A51: No. As we increase the load on the application server infrastructure, it is realistic to increase the size of the database as well. Typically, larger organizations have a higher number of transactions and larger databases. Both the higher injection rate and the larger database put more pressure on the application server infrastructure. This ensures that at a higher IR the application server infrastructure performs more work than at a lower IR, making the results truly comparable.

Q52: I have heard that DB performance had a significant influence on previous SPECjAppServer benchmarks. What have you done to reduce this influence?
A52: In SPECjAppServer2004, a significant amount of functionality has been incorporated into the application server layer (e.g., servlets, JSPs, JMS). As a result, the influence of the database relative to the application server has been reduced somewhat. In addition, the scaling of the database has been increased, which results in a more realistic configuration with reduced table/row contention. The database continues to be a key component of the benchmark, however, since it is representative of a typical J2EE application. Because of this fact, database configuration and tuning will continue to be very important for performance.

Q53: Assuming a similar hardware configuration, what would be a typical ratio of application server CPUs to DB server CPUs?
A53: This question cannot be answered accurately. We have seen vastly different ratios depending on the type and configuration of the application server, database server, and even the JDBC driver.

Q54: Are results sensitive to components outside of the SUT -- e.g., client driver machines? If they are, how can I report optimal performance for a) a smaller number of more powerful driver machines or b) a larger number of less powerful driver machines?
A54: SPECjAppServer2004 results are not that sensitive to the type of client driver machines, as long as they are powerful enough to drive the workload for the given injection rate. Experience shows that if the client machines are overly stressed, one cannot reach the throughput required for the given injection rate.

Q55: This is an end-to-end solution benchmark. How can I determine where the bottlenecks are? Can you provide a profile or some guidance on tuning issues?
A55: Unfortunately, every combination of hardware, software, and specific configuration poses a different set of bottlenecks. It would be difficult or impossible to provide tuning guidance covering such a broad range of components and configurations. As the scope narrows to a specific set of products and configurations, such guidelines become more feasible. Please contact the respective software and/or hardware vendors for tuning guidance for their products.

Q56: Is it realistic to use a very large configuration that would eliminate typical garbage collection? How much memory is required to eliminate GC for IR=100, IR=500, and IR=1000?
A56: Section 2.9.1 of the Run Rules states that the steady state period must be representative of a 24-hour run. This means that if no garbage collection is done during the steady state, it shouldn't be done during an equivalent 24-hour run either. Due to the complexity of the benchmark and the amount of garbage it generates, it is unrealistic to configure a setup to run for 24 hours without any GC. Even if it were possible, such memory requirements have not been established and would vary according to many factors.



Java, J2EE and ECperf are trademarks of Sun Microsystems, Inc.

TPC-C and TPC-W are trademarks of the Transaction Processing Performance Council.

SQL Server is a trademark of Microsoft Corp.


Copyright (c) 2004 Standard Performance Evaluation Corporation