SPECsip_Infrastructure2011 User's Guide

Revision Date: Feb 7th, 2011


These instructions assume that you are familiar with the following:

The SPECsip_Infrastructure2011 kit is distributed as an InstallAnywhere installer and contains the following components:

Additional Software Requirements:

Hardware Requirements:

Note 1: The minimal hardware requirement to run the benchmark is a SUT and a harness master that doubles as a harness client. A typical configuration, however, consists of the SUT, a harness master, and several harness clients. Multiple clients are used to ensure that the clients themselves do not become the SIP processing bottleneck. The Support FAQ lists the "Watchdog Timeout" symptom that appears when the clients have become the bottleneck.

Setting Up The Test Environment

Running the SPECsip_Infrastructure2011 installer

To set up the benchmark, use the InstallAnywhere installer, which contains all the components needed to run the benchmark (except the SIP server needed on the SUT). Currently, the installer supports only the Solaris and Linux operating systems running on the x86/x64 architecture.
Installer Option
The installer may come from a CD or be downloaded from the SPEC SIP Infrastructure benchmark page.

SUT Setup

Client Setup

A client is a machine that hosts the load generator. The benchmark requires one or more clients. Each client also hosts a Faban agent to communicate with the Faban master. It is critical to plan your client capacity for running the benchmark so that the clients do not become a bottleneck. A system based on a 4-core Xeon processor can typically drive the workload to about 0.5 million Supported Subscribers. If the intended benchmark performance number is higher, multiple clients are required. Here are the requirements for setting up the clients.
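As a rough sizing sketch based on the ~0.5 million Supported Subscribers per 4-core client figure above (the target number below is a hypothetical placeholder), the number of clients needed can be estimated as:

```shell
#!/bin/sh
# Hypothetical target; each 4-core Xeon-class client drives roughly
# 500,000 Supported Subscribers.
target_subscribers=1800000
per_client=500000
# Ceiling division: enough clients that client capacity is not the bottleneck.
clients=$(( (target_subscribers + per_client - 1) / per_client ))
echo "clients needed: $clients"   # → clients needed: 4
```

The actual per-client capacity depends on the client hardware, so measure it on your own rig rather than relying on this rule of thumb.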

Network Setup

The benchmark requires network connectivity between the harness master, the harness clients, and the SUT. While not required, we recommend, for performance reasons, using two separate physical networks: one for control traffic between the master and the clients, and another for data traffic between the clients and the SUT.

Running SPECsip_Infrastructure2011

Once the Harness Master, Harness Clients, and SUT have been configured and networked, start the Harness Master:

# <SPECSIP_ROOT directory>/harness.sh start http://<hostname>:9980/

Control Frame

The benchmark control panel located on the left frame of the UI allows one to manage the benchmark runs.
Benchmark Control Panel

When the harness is used for the first time, or when one selects Switch Profile, a profile configuration page is brought up. One can select an existing profile from the pull-down menu on the right, or create a new profile by filling in the profile name in the space on the left. One can also enter one or more tags, which can be used to search run results later.

Benchmark profile selection

When using the same rig to test multiple SUT systems, it is handy to create a profile for each SUT so that its configuration is saved. Upon switching SUTs, the previous configuration can be brought up promptly without re-entering the configuration parameters.

System Information

This tab contains the system information required for benchmark disclosure, as well as the Java runtime environment used to run the benchmark harness.
System Information UI Page
  1. System information disclosure as required by SPEC policy and SPECsip-infrastructure Run Rules
  2. The Java Section

Benchmark Configuration

The Benchmark Configuration tab consists of four sections: Java, Run, Clients, and SUT.
  1. The Run Section
    Configuration of run parameters
  2. The Clients Section
  3. The SUT Section
    SIP Server (SUT) Configuration
Typically, each client runs three instances of SIPp, corresponding to the three SIPp scenarios UAC, UAS and UDE. Therefore the data ports for the three instances need to be unique, such as 5060, 5061, and 5062.

Workload Configuration

The Workload Configuration tab consists of three sections: Traffic Profile, Traffic Parameters, and SIPp Scenario Configuration.
Traffic Profile Configuration
  1. Traffic Profile
  2. Traffic Parameters
    Call Parameters
  3. Completed Calls
    These are the parameters used for controlling the Completed Calls scenario. Leave them at their defaults for a conforming benchmark run. Refer to the Design Document for details of these parameters.
  4. Voice Mail Calls
    These are the parameters used for controlling the Voice Mail Calls scenario. Leave them at their defaults for a conforming benchmark run. Refer to the Design Document for details of these parameters.
  5. Canceled Calls
    These are the parameters used for controlling the Canceled Calls scenario. Leave them at their defaults for a conforming benchmark run. Refer to the Design Document for details of these parameters.

Debug Setting

The Debug Setting tab consists of the Parameters for Debugging, which allow one to configure the Debug Trace Level. When debug logging is enabled, the logs are located in the $SPECSIP_ROOT/working directory of each client. At the end of the run, these files are retrieved from the clients and saved in the run output directory. Note that trace logs consume disk space quickly, so remember to clean up the disk after debugging is done. Also, logging to disk may cause performance degradation on the clients.
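A minimal cleanup sketch for reclaiming disk space on each client after debugging; the *.log filename pattern is an assumption, and the script below runs against a throw-away directory standing in for the real $SPECSIP_ROOT/working:

```shell
#!/bin/sh
# Demonstration directory standing in for $SPECSIP_ROOT on a client.
SPECSIP_ROOT=$(mktemp -d)
mkdir -p "$SPECSIP_ROOT/working"
touch "$SPECSIP_ROOT/working/uac_trace.log"   # stand-in for a debug trace file

# Remove trace logs under the working directory to free disk space.
find "$SPECSIP_ROOT/working" -name '*.log' -type f -exec rm -f {} +

remaining=$(find "$SPECSIP_ROOT/working" -name '*.log' | wc -l | tr -d ' ')
echo "remaining logs: $remaining"   # → remaining logs: 0
rm -rf "$SPECSIP_ROOT"             # discard the demonstration directory
```

On a real client, point the find command at the actual $SPECSIP_ROOT/working directory, and check which filename pattern your trace level actually produces before deleting.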

Starting the Run

After you have finished the above steps, click "OK" at the bottom of any of the main tabs. This will start the run that you have configured if none is currently running, or add it to the run queue if another run is in progress. The run lasts for the sum of the warmup time, the measurement period, and the cool-down time. During the run, statistics are gathered only during the measurement period.
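The total run length described above can be sketched as the sum of the three phases; the phase durations below are hypothetical placeholders, not conforming values:

```shell
#!/bin/sh
# Hypothetical phase lengths in seconds. Statistics are gathered only
# during the measurement period; warmup and cool-down are excluded.
warmup=300
measurement=3600
cooldown=300
total=$(( warmup + measurement + cooldown ))
echo "total run time: ${total}s"   # → total run time: 4200s
```

Use the actual phase times from your run configuration to estimate how long the rig will be busy.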

Viewing Results

The Summary Result tab contains the following information. The Detailed Results tab contains plots that visualize the performance statistics gathered.

After the Run

Sanity Checking the Benchmark Rig

If you have encountered problems, check the items below or consult the Support FAQ.

Collecting Results and Submitting to SPEC

Each benchmark run is assigned a run ID by Faban in the form specsip.<number><letter>. For example, the run ID of the first run is specsip.1A, and of the second run, specsip.1B. At the end of a run, the statistics and logs generated by SIPp on the clients are retrieved from the clients and stored in the $SPECSIP_ROOT/run/faban/output/ directory. For example, the output of the benchmark run specsip.1A is stored in $SPECSIP_ROOT/run/faban/output/specsip.1A. The statistics are parsed by a post-processing script, and the results are shown in the Summary Result and Detailed Results tabs corresponding to this run ID. See the Viewing Results section above for a description of the results. To submit a benchmark result to SPEC, the submitter should tar and gzip the output directory (e.g., $SPECSIP_ROOT/run/faban/output/specsip.1A mentioned above) and email the resulting .tar.gz (or .tgz) file to subsipinf2011@spec.org.
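Packaging a run's output directory for submission can be sketched as follows; the script uses a throw-away directory in place of the real $SPECSIP_ROOT/run/faban/output, and the run ID specsip.1A is taken from the example above:

```shell
#!/bin/sh
# Stand-in for $SPECSIP_ROOT/run/faban/output containing one run's results.
outdir=$(mktemp -d)
mkdir -p "$outdir/specsip.1A"
echo "dummy stats" > "$outdir/specsip.1A/summary.xml"   # placeholder content

# Tar and gzip the run's output directory; the resulting .tgz is the file
# that gets emailed to subsipinf2011@spec.org.
tar czf "$outdir/specsip.1A.tgz" -C "$outdir" specsip.1A

created=0
[ -f "$outdir/specsip.1A.tgz" ] && created=1
echo "archive created: $created"   # → archive created: 1
rm -rf "$outdir"                   # discard the demonstration directory
```

On a real rig, run the tar command against the actual output directory and verify the archive unpacks cleanly before emailing it.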

Optimizing Performance

Third Party Sources

The SPECsip_Infrastructure2011 benchmark utilizes the following open source libraries and tools, for which source code and a copy of the license is included:

Copyright © 2011 Standard Performance Evaluation Corporation. All rights reserved.
Java® is a registered trademark of Sun Microsystems.