SPECjEnterprise2010 Frequently Asked Questions

(Updated: May 3, 2022)


Q1: What is SPECjEnterprise2010?

A1: SPECjEnterprise2010 is an industry-standard benchmark designed to measure the performance of application servers conforming to the Java EE 5.0 or later specifications.

Q2: SPECjEnterprise2010's predecessor was released five years ago. Why is SPEC releasing this new version?

A2: The previous benchmark, SPECjAppServer2004, enjoyed a long life but has been outpaced by new releases of the Java EE standard. SPECjEnterprise2010 is an enhanced version of the benchmark that includes a modified workload and more Java EE 5.0 standard capabilities.

Q3: Historically, SPEC has created a new version of a benchmark every 3 to 4 years, providing a large number of published results to compare. By releasing benchmark versions so frequently, you make it difficult to do trend studies. Can you tell us the shelf life of this benchmark?

A3: SPEC intends to keep SPECjEnterprise2010 for as long as it can before developing a new benchmark, but it also needs to move the benchmark along as new standards and technologies evolve and old standards and technologies become obsolete. The exact shelf life is not predictable and depends largely on the evolution of the Java EE platform.

Q4: How is SPECjEnterprise2010 different from SPECjAppServer2004?

A4: SPECjAppServer2004 was a J2EE 1.3 application and used a web layer and EJBs for the clients' interactions with the server. In the SPECjEnterprise2010 benchmark, the load drivers access the application through a web layer (for the dealer domain) and through EJBs and Web Services (for the manufacturing domain) to stress more of the capabilities of Java EE application servers. In addition, SPECjEnterprise2010 makes more extensive use of the JMS and MDB infrastructure.

Q5: Does this benchmark replace SPECjAppServer2004?

A5: Yes. SPEC is providing a 12-month transition period from the date of the SPECjEnterprise2010 release. During this period, SPEC will accept, review, and publish results from both benchmark versions. After this period, results from SPECjAppServer2004 will no longer be accepted by SPEC for publication.

Q6: Does this benchmark make SPECjvm2008 or SPECjbb2005 obsolete?

A6: No. SPECjvm2008 is a client JVM benchmark. SPECjbb2005 is a server JVM benchmark. SPECjEnterprise2010 is a Java EE application server benchmark.

Q7: What is the performance metric for SPECjEnterprise2010?

A7: The performance metric is Enterprise jAppServer Operations Per Second ("SPECjEnterprise2010 EjOPS"). This is calculated by adding the metrics of the dealership management application in the dealer domain and the manufacturing application in the manufacturing domain as:

SPECjEnterprise2010 EjOPS = Dealer Transactions/sec + Workorders/sec
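As an illustration, the composite metric is simply the sum of the two domains' steady-state throughputs. The sketch below computes it from hypothetical measured counts (the transaction counts, duration, and class/method names are illustrative, not part of the benchmark kit):

```java
// Illustrative sketch only: computes the composite EjOPS metric from
// hypothetical steady-state measurements.
public class EjopsExample {

    // dealerTxCount and workOrderCount are total counts observed over the
    // steady-state period of steadyStateSeconds duration.
    static double ejops(long dealerTxCount, long workOrderCount, double steadyStateSeconds) {
        double dealerTxPerSec = dealerTxCount / steadyStateSeconds;
        double workOrdersPerSec = workOrderCount / steadyStateSeconds;
        // EjOPS = Dealer Transactions/sec + Workorders/sec
        return dealerTxPerSec + workOrdersPerSec;
    }

    public static void main(String[] args) {
        // Example: 900,000 dealer transactions and 300,000 work orders
        // over a one-hour (3600 s) steady state.
        System.out.printf("SPECjEnterprise2010 EjOPS = %.1f%n",
                ejops(900_000, 300_000, 3600.0));
    }
}
```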

Q8: Where can I find published results for SPECjEnterprise2010?

A8: SPECjEnterprise2010 results are available on SPEC's web site http://www.spec.org/jEnterprise2010/results/ .

Q9: Who developed SPECjEnterprise2010?

A9: SPECjEnterprise2010 was developed by the Java subcommittee's core design team. IBM, Intel, Oracle, SAP, and Sun participated in the design, implementation, and testing phases of the product.

Q10: How do I obtain the SPECjEnterprise2010 benchmark?

A10: To place an order, use the on-line order form at http://www.spec.org/order.html or contact SPEC at http://www.spec.org/spec/contact.html .

Q11: How much does the SPECjEnterprise2010 benchmark cost?

A11: Current pricing for all SPEC benchmarks is available from the SPEC on-line order form http://www.spec.org/order.html. SPEC OSG members receive a complimentary benchmark license.

Q12: How can I publish SPECjEnterprise2010 results?

A12: You need to acquire a SPECjEnterprise2010 license in order to publish results. All results are subject to a review by SPEC prior to publication.

For more information see http://www.spec.org/osg/submitting_results.html

Q13: How much does it cost to publish results?

A13: Please see http://www.spec.org/osg/submitting_results.html to learn the current cost to publish SPECjEnterprise2010 results. SPEC OSG members can submit results free of charge.

Q14: Where do I find answers to questions about running the benchmark?

A14: The procedures for installing and running the benchmark are contained in the SPECjEnterprise2010 User's Guide, which is included in the product kit and is also available from the SPEC web site http://www.spec.org/jEnterprise2010/ .

Q15: Where can I go for more information?

A15: SPECjEnterprise2010 documentation consists mainly of four documents: User's Guide, Design Document, Run and Reporting Rules, and this FAQ. The documents can be found in the benchmark kit or on SPEC's Web site: http://www.spec.org/jEnterprise2010/.

Q16: SPECjAppServer2001 and SPECjAppServer2002 both had a price/performance metric. Why doesn't SPECjEnterprise2010 have one?

A16: SPECjAppServer2001 and SPECjAppServer2002 were the first benchmarks released by SPEC that contained a price/performance metric. They were released for a year with the price/performance metric as an experiment, so that SPEC could determine whether the benefit of this metric was worth the costs involved. The SPEC OSSC (Open Systems Steering Committee) reviewed the arguments for and against the price/performance metric and voted to remove it from new benchmarks.

Q17: Although there is no price/performance metric, you provide a BOM for reproducing results. Can I create my own price/performance metric and report it alongside SPEC's published results?

A17: SPEC does not endorse any price/performance metric for the SPECjEnterprise2010 benchmark. Whether vendors or other parties can use the performance data to establish and publish their own price/performance information is beyond the scope and jurisdiction of SPEC. Note that the benchmark run rules do not prohibit the use of $/"SPECjEnterprise2010 EjOPS" calculated from pricing obtained using the BOM.

Q18: Can I compare SPECjEnterprise2010 results with SPECjAppServer2001, SPECjAppServer2002 or SPECjAppServer2004 results?

A18: No. The benchmarks are not comparable because the workload was changed in SPECjEnterprise2010 to test more of the Java EE 5.0 capabilities of the Java EE application servers.

Q20: Can I compare SPECjEnterprise2010 results to results from other SPEC benchmarks or benchmarks from other consortia?

A20: No. SPECjEnterprise2010 uses totally different data-set sizes and workload mixes, has a different set of run and reporting rules, a different measure of throughput, and different metrics. There is no logical way to translate results from one benchmark to another.

Q22: Do you permit benchmark results to be estimated or extrapolated from existing results?

A22: No. This is an implementation benchmark and all the published results have been achieved by the submitter and reviewed by the committee. Extrapolations of results cannot be accurately achieved due to the complexity of the benchmark.

Q23: What does SPECjEnterprise2010 test?

A23: SPECjEnterprise2010 is designed to test the performance of a representative Java EE application and each of the components that make up the application environment, e.g., H/W, application server, JVM, database.

See section 1.1 of the SPECjEnterprise2010 Design Document for more information.

Q24: What are the significant influences on the performance of the SPECjEnterprise2010 benchmark?

A24: The most significant influences on the performance of the benchmark are the components that make up the application environment (see Q23): the hardware, the JVM, the application server, and the database server, along with how each is configured and tuned.

Q25: Does this benchmark aim to stress the Java EE application server or the database server?

A25: This benchmark was designed to stress the Java EE application server. But, since this is a solutions-based benchmark, other components (such as the database server) are stressed as well.

Q26: What is the benchmark workload?

A26: The benchmark emulates an automobile dealership, manufacturing, supply chain management (SCM) and order/inventory system. For additional details see the SPECjEnterprise2010 Design Document.

Q27: Can I use SPECjEnterprise2010 to determine the size of the server I need?

A27: SPECjEnterprise2010 should not be used to size a Java EE 5.0 application server configuration, because it is based on a specific workload. There are numerous assumptions made about the workload, which might or might not apply to other user applications. SPECjEnterprise2010 is a tool that provides a level playing field for comparing Java EE 5.0-compatible application server products.

Q28: What hardware is required to run the benchmark?

A28: In addition to the hardware for the system under test (SUT), one or more client machines are required, as well as the network equipment to connect the clients to the SUT. The number and size of client machines required by the benchmark will depend on the injection rate to be applied to the workload.

Q29: What is the minimum configuration necessary to test this benchmark?

A29: A SPEC member has run the benchmark on a Pentium Core Duo 2.0GHz laptop system with 1GB of RAM and a 60GB hard drive. The benchmark completed successfully with an injection rate of 5. This is not a valid configuration that you can use to report results, however, as it does not meet the durability requirements of the benchmark.

Q30: What software is required to run the benchmark?

A30: In addition to the operating system and the Java Virtual Machine (JVM), SPECjEnterprise2010 requires a Java EE 5.0-compatible application server, a relational database server, and a JDBC driver.

Q31: How many different OS, application server and DB configurations have you tested with medium to large configurations?

A31: All major operating systems, application servers, and databases have been tested.

Q32: Do you provide source code for the benchmark?

A32: Yes, but you are required to run the benchmark with the files provided if you are publishing results. As a general rule, modifying the source code is not allowed. Specific items (the load program, for example) may be modified to port the application to your environment. Areas where you are allowed to make changes are listed in the SPECjEnterprise2010 Run and Reporting Rules. Any changes made must be disclosed in the submission file when submitting results.

Q33: Is there a web layer in the SPECjEnterprise2010 benchmark?

A33: Yes. The dealer domain is accessed through the web layer by the load driver when running the benchmark.

Q34: Why didn't you address SSL (secure socket layer) in this benchmark?

A34: SPECjEnterprise2010 focuses on the major services provided by the Java EE 5.0 platform that are employed in today's applications. SSL is addressed separately in the SPECweb2009 benchmark.

Q35: Can I use future Java EE 6 products to run this benchmark?

A35: Yes. Any product conforming to the Java EE 5.0 or later specifications can be used to run this benchmark.

Q36: Why do you insist on Java EE products with CTS certification? Do you or any certifying body validate this?

A36: CTS certification ensures that the application server being tested is a Java EE technology-based application server and not a benchmark-special application server that is crafted specifically for SPECjEnterprise2010. The CTS certification is validated by Oracle.

Q37: Can I report results on a large partitioned system?

A37: Yes.

Q38: Is the benchmark cluster-scalable?

A38: Yes.

Q39: How scalable is the benchmark?

A39: In our initial tests we have seen good scalability with three 4-CPU systems (two systems for the Java EE application server and one system for the database server). SPEC did not explicitly restrict scalability in the benchmark.

Q40: How well does the benchmark scale in both scale-up and scale-out configurations?

A40: SPECjEnterprise2010 has been designed and tested with both scale-up and scale-out configurations. The design of the benchmark does not limit the scaling in either way. How well it scales in a particular configuration depends largely on the capabilities of the underlying hardware and software components.

Q41: Can I report with vendor A hardware, vendor B Java EE Application Server, and vendor C database software?

A41: The SPECjEnterprise2010 Run and Reporting Rules do not preclude third-party submission of benchmark results, but result submitters must abide by the licensing restrictions of all the products used in the benchmark; SPEC is not responsible for vendor (hardware or software) licensing issues. Many products include a restriction on publishing benchmark results without the express written permission of the vendor.

Q42: Can I use Microsoft SQL Server for the database?

A42: Yes. You can use any relational database that is accessible by JDBC and satisfies the SPECjEnterprise2010 Run and Reporting Rules.

Q43: Can I report results for public domain software?

A43: Yes, as long as the product satisfies the SPECjEnterprise2010 Run and Reporting Rules.

Q44: Are the results independently audited?

A44: No, but they are subject to committee review prior to publication.

Q45: Can I announce my results before they are reviewed by the SPEC Java subcommittee?

A45: No.

Q46: How realistic is the DB size for medium and large configurations?

A46: The following table shows the approximate raw data size used to load the database for different benchmark injection rates:

IR     Size
100    430MB
500    2.1GB
1000   4.2GB

Actual storage space consumed by the RDBMS and all the supporting structures (e.g., indices) is far higher, however. It is not unreasonable, for example, for the database to consume 5GB of disk space to support runs at IR=100. A large number of factors -- both RDBMS- and configuration-dependent -- influence the actual disk space required.
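As a rough, unofficial rule of thumb, the raw data sizes in the table above grow at roughly 4.2MB per unit of injection rate. The sketch below encodes that observation (the class, method names, and constant are illustrative assumptions, not part of the benchmark kit, and the result is the raw data size only, not the RDBMS disk footprint):

```java
// Hedged back-of-the-envelope estimate of raw database data size per IR,
// derived from the FAQ's sizing table (430MB at IR=100, 4.2GB at IR=1000).
public class DbSizeEstimate {

    // Assumed growth rate; actual loaded sizes follow the stepwise scaling
    // defined by the benchmark, not a straight line.
    static final double MB_PER_IR = 4.2;

    // Raw data size only; real RDBMS disk usage (indices, logs, free space)
    // is considerably higher.
    static double rawDataMb(int injectionRate) {
        return MB_PER_IR * injectionRate;
    }

    public static void main(String[] args) {
        System.out.printf("IR=100  -> ~%.0f MB raw data%n", rawDataMb(100));
        System.out.printf("IR=1000 -> ~%.0f MB raw data%n", rawDataMb(1000));
    }
}
```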

Q47: Can you describe the DB contents? Do you have jpegs or gifs of cars, or any dynamic content such as pop-ups or promotional items?

A47: The DB comprises text and numeric data. We do not include jpegs or gifs, as these are better served as static web content than stored in the DB. We do not include dynamic content, as this represents web content and is usually not part of general DB usage. The client-side processing of such content is not measured in SPECjEnterprise2010.

Q48: If the size of the DB is very small, almost all of it can be cached. Is this realistic?

A48: We have significantly increased the database size in SPECjEnterprise2010. While still relatively small, the chances of caching the whole database in memory have been significantly reduced. Since SPECjEnterprise2010 focuses on evaluating application server performance, a small but reasonably sized database seems to be far more appropriate than using database sizes equivalent to the ones used in pure database benchmarks.

Q49: What is typically the ratio of read vs. write/update operations on the DB?

A49: An exact answer to this question is not possible, because it depends on several factors, including the injection rate and the application server and database products being used. Lab measurements with a specific application and database server at an injection rate of 80 have shown a database read vs. write/update ratio of approximately 4. Your mileage may vary.

Q50: Why didn't you select several DB sizes?

A50: The size of the database data scales stepwise with the injection rate for the benchmark. Multiple scaling factors for database loading would add another comparison category. Since we are trying to measure application server performance, it is best to keep the database scaling scheme the same for all submissions.

Q51: In this benchmark, the size of the DB is a step function of the IR. This makes it difficult to compare beyond each step -- between configurations reporting IR=50 and IR=65, for example, as each has a different-sized database. Wouldn't it be fairer to compare against the same-sized DB?

A51: No. As we increase the load on the application server infrastructure, it is realistic to increase the size of the database as well. Typically, larger organizations have a higher number of transactions and larger databases. Both the load injection and the larger database will put more pressure on the application server infrastructure. This will ensure that at a higher IR the application server infrastructure will perform more work than at a lower IR, making the results truly comparable.

Q52: I have heard that DB performance had a significant influence on previous SPECjAppServer benchmarks. What have you done to reduce this influence?

A52: In SPECjEnterprise2010, a significant amount of functionality has been incorporated into the application server layer (e.g., servlets, JSPs, JMS). As a result, the influence of the database relative to the application server has been reduced somewhat. In addition, the scaling of the database has been increased, which results in a more realistic configuration with reduced table/row contention. The database continues to be a key component of the benchmark, however, since it is representative of a typical Java EE application. Because of this fact, database configuration and tuning will continue to be very important for performance.

Q53: Assuming a similar hardware configuration, what would be a typical ratio of application server CPUs to DB server CPUs?

A53: This question cannot be answered accurately. We have seen vastly different ratios depending on the type and configuration of the application server, database server, and even the JDBC driver.

Q54: Are results sensitive to components outside of the SUT -- e.g., client driver machines? If they are, how can I report optimal performance for a) fewer powerful driver machines or b) larger number of less powerful driver machines?

A54: SPECjEnterprise2010 results are not that sensitive to the type of client driver machines, as long as they are powerful enough to drive the workload for the given injection rate. Experience shows that if the client machines are overly stressed, one cannot reach the throughput required for the given injection rate.

Q55: This is an end-to-end solution benchmark. How can I determine where the bottlenecks are? Can you provide a profile or some guidance on tuning issues?

A55: Unfortunately, every combination of hardware, software, and any specific configuration poses a different set of bottlenecks. It would be difficult or impossible to provide tuning guidance based on such a broad range of components and configurations. As we narrow down to a set of products and configurations, such guidelines are more and more possible. Please contact the respective software and/or hardware vendors for tuning guidance using their products.

Q56: Is it realistic to use a very large configuration that would eliminate typical garbage collection? How much memory is required to eliminate GC for IR=100, IR=500, and IR=1000?

A56: Section 2.9.1 of the Run Rules states that the steady-state period must be representative of a 24-hour run. This means that if no garbage collection is done during steady state, none should be done during an equivalent 24-hour run. Due to the complexity of the benchmark and the amount of garbage it generates, it is unrealistic to configure a setup to run for 24 hours without any GC. Even if it were possible, such memory requirements have not yet been established and would vary according to many factors.

Q57: Do Log4J v1 or v2 vulnerabilities exist in the benchmark?

A57: Not by default. While there is a known Critical vulnerability in Log4J v1 (CVE-2019-17571) associated with the implementation class org.apache.log4j.net.SocketServer, the default benchmark installation does not use that class. Instead, java.util.logging.SocketHandler is used. Users are strongly advised to avoid using org.apache.log4j.net.SocketServer in all circumstances.
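For reference, a minimal java.util.logging configuration using java.util.logging.SocketHandler might look like the fragment below (the host and port values are examples only, not benchmark defaults):

```properties
# logging.properties fragment (illustrative values)
handlers = java.util.logging.SocketHandler
java.util.logging.SocketHandler.host = localhost
java.util.logging.SocketHandler.port = 9020
java.util.logging.SocketHandler.level = INFO
java.util.logging.SocketHandler.formatter = java.util.logging.SimpleFormatter
```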

Q58: Can the Log4j java archive be removed entirely from the benchmark deployment?

A58: Yes. The Log4j archive can be completely removed from the benchmark.
For 2010 Full Profile:

These changes are not expected to affect the performance results produced by the test harness.

Java, Java EE and ECperf are trademarks of Oracle.

TPC-C and TPC-W are trademarks of the Transaction Processing Performance Council.

SQL Server is a trademark of Microsoft Corp.

Product and service names mentioned herein may be the trademarks of their respective owners.

Copyright © 2001-2012 Standard Performance Evaluation Corporation