Defects Identified in SFS97 V2.0

June 14, 2001 - The Standard Performance Evaluation Corporation (SPEC) has identified significant defects in its SFS97 benchmark suite. SPEC has suspended sales of the SFS97 benchmark and is no longer accepting new submissions of SFS97 results for publication on SPEC's website (www.spec.org).

SPEC is advising SFS licensees and users of the SPECsfs97.v2 and SPECsfs97.v3 metrics that several recently uncovered defects compromise the comparability of results. These flaws can alter the amount of work performed during the measurement periods and can greatly reduce the intended working set size. SPEC recommends that users not rely on these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

The SPEC Open Systems Steering Committee and the SFS subcommittee are working to address issues related to the currently published results. A more detailed analysis of the effects of the defects on the published results is forthcoming, along with details on the actual working set size used in each test.

The SPEC SFS subcommittee is working to revise the benchmark to eliminate these defects and plans to issue a new release of SFS as soon as possible. Current licensees will receive a copy of the new release when it becomes available.

A summary of the three SFS97 defects is given below; please review it before using any SPECsfs97 data or running the SFS benchmark for system comparisons:

1. The oscillation defect:

The benchmark attempts to establish a steady load. Because the algorithm that ramps the load up and down to hold that steady rate has insufficient resolution in the msec_sleep() code, it can oscillate between doing no work for 10 seconds and working flat out for 10 seconds; in the worst case, the next computed sleep spans the entire period. The problem may not occur with older or slower clients, or when the type of networking used prevents the clients from delivering requests too quickly. It can be detected by monitoring the load on the server. There is a significant probability of encountering this defect when the requested ops/sec/process exceeds 500, and it is unlikely to be encountered below 250 requested ops/sec/process.
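
The following toy simulation (not SFS source code; all constants and names are hypothetical) sketches how a millisecond-resolution sleep call can produce this oscillation: at 1000 requested ops/sec the ideal inter-op gap is 1 ms, the sub-millisecond pacing sleep truncates to zero, the client runs flat out, and the end-of-window correction then sleeps away the entire next window.

    /* Hypothetical pacing-loop sketch, not the SFS implementation. */
    #include <stdio.h>

    #define WINDOW_MS  10000     /* 10-second load-adjustment window */
    #define TARGET     1000      /* requested ops/sec/process        */
    #define OP_COST_MS 0.5       /* time a fast client spends per op */

    int main(void)
    {
        double gap = 1000.0 / TARGET;   /* ideal inter-op gap: 1.0 ms */
        long debt_ms = 0;               /* catch-up sleep owed        */

        for (int w = 0; w < 4; w++) {
            double t = 0.0;             /* time used in this window   */
            long ops = 0;

            if (debt_ms > 0) {          /* pay off last window's surplus */
                t += debt_ms;
                debt_ms = 0;
            }
            while (t < WINDOW_MS) {
                ops++;
                t += OP_COST_MS;        /* issue one op               */
                /* Pacing sleep issued in whole milliseconds:
                 * gap - OP_COST_MS = 0.5 ms truncates to 0 ms. */
                t += (long)(gap - OP_COST_MS);
            }
            /* End-of-window rate check: sleep off any surplus ops. */
            long surplus = ops - (long)(TARGET * (WINDOW_MS / 1000.0));
            if (surplus > 0)
                debt_ms = (long)(surplus * gap);

            printf("window %d: %ld ops (target %ld)\n",
                   w, ops, (long)(TARGET * (WINDOW_MS / 1000.0)));
        }
        return 0;
    }

The simulated client alternates between a window of 20,000 operations and a window of none, averaging the requested rate without ever delivering it steadily.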

2. The distribution defect:

The file set selection algorithm in the current SFS 2.0 does not work correctly. The computation that distributes files across access groups suffers a rounding-to-zero error, so the total working set of files is not what was intended. Between 26 and 400 requested ops/sec/process, the number of access groups in the file set does not increase as intended; at 200 requested ops/sec/process, only about 50% of the intended number of access groups are included. As the requested ops/sec/process rises considerably above 500, the impact becomes much more visible: the number of access groups becomes very small, and the total number of files being accessed is also significantly reduced.
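
A toy illustration of a rounding-to-zero error of this kind (the formula and constants here are hypothetical, not the SFS source): forming a fractional scale factor in integer arithmetic yields zero, so the access-group count stops tracking the requested load.

    #include <stdio.h>

    int main(void)
    {
        for (int ops = 25; ops <= 400; ops *= 2) {
            /* Intended: one access group per 2.5 ops of load. */
            double intended = ops * (10.0 / 25.0);

            /* Buggy: the same ratio computed in integer arithmetic
             * truncates to zero (10 / 25 == 0), leaving only a
             * coarse fallback term. */
            int scale = 10 / 25;                 /* 0.4 rounds to 0 */
            int buggy = ops * scale + ops / 5;

            printf("ops=%3d  intended=%6.1f groups  got=%3d  (%.0f%%)\n",
                   ops, intended, buggy, 100.0 * buggy / intended);
        }
        return 0;
    }

In this sketch the zero-rounded factor silently halves the group count at every load level; the real defect's shortfall varies with the requested load, but the mechanism is the same.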

3. The floating-point overflow defect:

This defect begins to take effect when requested ops/sec/process reaches roughly 500 or more. The algorithm overflows a double-precision floating-point variable, and the file selection algorithm deteriorates: large negative values are written into alternate entries of the probability distribution array. At run time, access groups are selected with a binary search that assumes monotonically increasing entries in the array, and the negative entries disrupt the search. At around 1000 requested ops/sec/process, the binary search degenerates to selecting a single access group.
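
The deterministic sketch below (the corruption is injected by hand rather than reproduced via an actual overflow, and all names and sizes are hypothetical) shows why a binary search cannot tolerate such entries: with large negative values in alternate slots of the cumulative array, half of the access groups become unreachable and the selection skews onto the remainder.

    #include <stdio.h>

    #define NGROUPS 8

    /* Lower-bound binary search over a cumulative probability array;
     * it is only correct if the entries increase monotonically. */
    static int pick_group(const double *cum, int n, double r)
    {
        int lo = 0, hi = n - 1;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (cum[mid] < r)
                lo = mid + 1;
            else
                hi = mid;
        }
        return lo;
    }

    int main(void)
    {
        double good[NGROUPS], bad[NGROUPS];

        for (int i = 0; i < NGROUPS; i++) {
            good[i] = (i + 1) / (double)NGROUPS;   /* 0.125 ... 1.0 */
            /* Stand-in for the overflow: alternate entries go hugely
             * negative, destroying the monotonic order. */
            bad[i] = (i % 2) ? good[i] : -1e300;
        }

        int good_hits[NGROUPS] = {0}, bad_hits[NGROUPS] = {0};
        for (int k = 0; k < NGROUPS * 100; k++) {
            double r = (k + 0.5) / (NGROUPS * 100);  /* even in (0,1) */
            good_hits[pick_group(good, NGROUPS, r)]++;
            bad_hits [pick_group(bad,  NGROUPS, r)]++;
        }
        for (int i = 0; i < NGROUPS; i++)
            printf("group %d: intact=%3d  corrupted=%3d\n",
                   i, good_hits[i], bad_hits[i]);
        return 0;
    }

With more entries corrupted, the reachable set shrinks further, which is consistent with the selection collapsing toward a single access group at the highest request rates.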

For more information, consult the technical whitepaper on the defects or contact SPEC.