SPECsfs97_R1.v3 Result
===============================================================================
Hitachi Data Systems : Hitachi High performance NAS Platform
SPECsfs97_R1.v3 = 192915 Ops/Sec (Overall Response Time = 1.72 msec)
===============================================================================
   Throughput   Response
      ops/sec       msec
        20205        0.5
        40184        0.8
        60639        1.0
        80887        1.2
       101397        1.5
       121533        1.8
       142056        2.2
       162632        2.9
       182599        4.0
       192915        5.6
===============================================================================

Server Configuration and Availability
    Vendor                      Hitachi Data Systems
    Hardware Available          February 2006
    Software Available          March 2007
    Date Tested                 May 2007
    SFS License number          0000xx
    Licensee Location           Santa Clara, CA

CPU, Memory and Power
    Model Name                  Hitachi High performance NAS Platform 2200,
                                2-node active/active cluster
    Processor                   1.2 GHz PPC 7457B + FPGAs
    # of Processors             2 cores, 2 chips, 1 core/chip + 14 FPGAs
    Primary Cache               64KB (I+D) on chip per cpu
    Secondary Cache             512KB (I+D) on chip per cpu
    Other Cache                 16 GB in Hitachi High performance NAS
                                Platform servers
    UPS                         N/A
    Other Hardware              Redundant Brocade 4100 FC switches between
                                Hitachi High performance NAS Platform servers
                                and USP 1100
    Memory Size                 47.6 GB in Hitachi High performance NAS
                                Platform servers (not including other cache
                                size and NVRAM), 80 GB in HDS USP 1100
    NVRAM Size                  4 GB in Hitachi High performance NAS
                                Platform servers
    NVRAM Type                  DIMM
    NVRAM Description           72 hour battery backed

Server Software
    OS Name and Version         SU 4.3
    Other Software              N/A
    File System                 BlueArc Silicon File System with Cluster
                                Name Space (CNS)
    NFS version                 3

Server Tuning
    Buffer Cache Size           N/A
    # NFS Processes             N/A
    Fileset Size                1933.8 GB

Network Subsystem
    Network Type                Integrated
    Network Controller Desc.    6-port, 1Gbps Ethernet, aggregated
    Number Networks             1 (N1)
    Number Network Controllers  2
    Protocol Type               TCP
    Switch Type                 Two stacked Dell PowerConnect 6248 (N1)
    Bridge Type                 N/A
    Hub Type                    N/A
    Other Network Hardware      N/A

Disk Subsystem and Filesystems
    Number Disk Controllers     4
    Number of Disks             384
    Number of Filesystems       1 namespace (NS1, see notes)
    File System Creation Ops    4KB block size
    File System Config          16 individual file system volumes (v0...v15)
                                aggregated using CNS to present a single,
                                unified namespace (/r), 8 volumes per node
    Disk Controller             Integrated quad port 4Gbps FC
    # of Controller Type        2
    Number of Disks             192
    Disk Type                   Seagate Cheetah 146GB 15K RPM
    File Systems on Disks       NS1 = /r
    Special Config Notes        see notes

Load Generator (LG) Configuration
    Number of Load Generators   28
    Number of Processes per LG  32
    Biod Max Read Setting       2
    Biod Max Write Setting      2

    LG Type                     LG1
    LG Model                    AMAX 1U
    Number and Type Processors  2 x 2.8 GHz Xeon
    Memory Size                 1 GB
    Operating System            Linux Fedora Core 5
    Compiler                    SFS97_R1 precompiled binaries
    Compiler Options            N/A
    Network Type                Integrated Intel Pro/1000MT

Testbed Configuration
    All 28 LGs are of type LG1 on network N1. Odd- and even-numbered LGs use
    identical target file system lists; each listed file system is mounted 4
    times, once per each of the LG's 32 processes.

    LG #                 LG Type  Network  Target File Systems          Notes
    ----                 -------  -------  -------------------          -----
    1, 3, 5, 7, 9, 11,   LG1      N1       /r/v0, /r/v1, /r/v2,         N/A
    13, 15, 17, 19, 21,                    /r/v3, /r/v4, /r/v5,
    23, 25, 27                             /r/v6, /r/v7 (each
                                           mounted 4 times)
    2, 4, 6, 8, 10, 12,  LG1      N1       /r/v8, /r/v9, /r/v10,        N/A
    14, 16, 18, 20, 22,                    /r/v11, /r/v12, /r/v13,
    24, 26, 28                             /r/v14, /r/v15 (each
                                           mounted 4 times)
===============================================================================
Notes and Tuning
    <> The tested system was a Hitachi High performance NAS Platform 2200
       2-node cluster with an HDS Universal Storage Platform (USP) 1100
       storage array.
    <> Hitachi High performance NAS Platform servers were directly connected
       with dual 10Gb Ethernet cluster links.
    <> Hitachi High performance NAS Platform servers had the high memory
       option installed and all standard protection services enabled,
       including RAID, NVRAM logging, and media error scrubbing.
    <> Hitachi High performance NAS Platform uses Field Programmable Gate
       Arrays (FPGAs) to accelerate processing of network traffic and file
       system I/O. 14 FPGAs (7 per Hitachi High performance NAS Platform)
       are used for NFS processing, out of a total of 24 (12 per Hitachi
       High performance NAS Platform) in the system.
    <> The USP 1100 was configured with 4 pairs of Front End Directors
       (FED), 384 15k RPM FC disks and 80 GB cache.
    <> - Disk and FS configuration was 96 "2+2" RAID 10 LUs, 16 Hitachi High
         performance NAS Platform storage pools (striped across 6 LUs each),
         and 1 FS per storage pool. One hot spare disk was present.
    <> - Four ports per FED were used. Each LU was exported via two ports
         for redundancy.
    <> - The USP 1100 was connected to the Hitachi High performance NAS
         Platform servers using redundant Brocade 4100 FC switches.
    <> For Uniform Access Rule compliance, all LGs accessed all cluster
       namespace objects uniformly across all interfaces as follows:
    <> - There are 2 network nodes: T0, T1 (Hitachi High performance NAS
         Platform 0 and Hitachi High performance NAS Platform 1)
    <> - There are 16 physical target file systems (/r/v0.../r/v15)
         presented as a single cluster name space (NS1) with virtual root
         "/r" accessible to all clients.
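As a quick cross-check (not part of the original disclosure), the RAID figures in the notes above are mutually consistent with the disk counts in the configuration table; a minimal Python sketch of the arithmetic:

```python
# Cross-check of the disclosed disk layout: 16 storage pools, each striped
# across 6 LUs, where a "2+2" RAID 10 LU uses 4 disks (2 data + 2 mirror).
pools = 16
lus_per_pool = 6
disks_per_lu = 4

lus = pools * lus_per_pool       # 96 LUs, as stated in the notes
disks = lus * disks_per_lu       # 384 disks, matching "Number of Disks"
print(lus, disks)
```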
    <> - Each target file system belongs to one of the 2 network nodes. Each
         target file system is physically located on a separate storage pool
         consisting of six "2+2" RAID 10 LUs per pool. Filesystems:
         N0: /r/v0, /r/v1, /r/v2, /r/v3, /r/v8, /r/v9, /r/v10, /r/v11;
         N1: /r/v4, /r/v5, /r/v6, /r/v7, /r/v12, /r/v13, /r/v14, /r/v15
    <> - Each Node has two Virtual Servers, each with its own IP address.
         2 Nodes x 2 Virtual Servers each = 4 possible paths from each
         client to each file system target. Virtual Servers: N0: VS1, VS3;
         N1: VS2, VS4
    <> - 4 Virtual Servers x 16 file systems = 64 total data paths. Each LG
         ran 32 processes and addressed 8 of the 16 file systems across 4
         Virtual Servers, or 32 total data paths, uniformly distributed
         across both Nodes. LGs picked alternating target file systems so
         that every 2 LGs addressed all 64 data paths.
    <> - Each Load Generator (LG1...LG28) selects a file system target and
         rotates through the 4 paths to that file system target, then
         rotates to the next file system target and repeats the path
         rotation. 50% of the data paths accessed by each client were local
         to the Hitachi High performance NAS Platform owning the IP address
         used to connect, and 50% were accessed across the inter-cluster
         link using Cluster Name Space, to ensure a uniform spread of
         workload across all clients and all data paths.
    <> - LG1: T0, VS1, /r/v0; T1, VS2, /r/v0; T0, VS3, /r/v0; T1, VS4,
         /r/v0; T0, VS1, /r/v1; T1, VS2, /r/v1; T0, VS3, /r/v1; T1, VS4,
         /r/v1; and so forth.
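The path rotation described above can be sketched as a small Python generator (an illustration of the stated pattern, not tooling used in the benchmark); the node/virtual-server pairs follow the note's LG1 example:

```python
# Illustrative sketch of the uniform-access path rotation described in the
# notes.  Each of the 4 paths to a file system goes through one virtual
# server: VS1 and VS3 are hosted on node T0, VS2 and VS4 on node T1.
paths = [("T0", "VS1"), ("T1", "VS2"), ("T0", "VS3"), ("T1", "VS4")]

def targets(filesystems):
    """Rotate through all 4 paths to one file system, then move on."""
    for fs in filesystems:
        for node, vs in paths:
            yield (node, vs, fs)

# LG1 addresses /r/v0 through /r/v7: 8 file systems x 4 paths = 32 targets,
# one per load-generating process.
lg1 = list(targets(f"/r/v{i}" for i in range(8)))
print(lg1[:5])
# Begins (T0, VS1, /r/v0); (T1, VS2, /r/v0); (T0, VS3, /r/v0);
# (T1, VS4, /r/v0); (T0, VS1, /r/v1) -- matching the LG1 example above,
# with exactly half the paths landing on each node.
```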
    <> Each Hitachi High performance NAS Platform contains four modules that
       perform all the storage operations, as follows: NIM = Network
       Interface Module (TCP/IP, UDP handling); FSA and FSB = File System
       Modules (NFS and CIFS protocol handling); and SIM = Storage Interface
       Module (FC interface and Cluster Interconnect).
    <> Each Hitachi High performance NAS Platform 2200 has 33.8 gigabytes
       (GB) of memory, cache and NVRAM distributed within the Hitachi High
       performance NAS Platform modules as follows:
    <> - NIM - 2.8 GB memory per Hitachi High performance NAS Platform
    <> - FSA - 4.0 GB memory per Hitachi High performance NAS Platform
    <> - FSB - 17.25 GB memory per Hitachi High performance NAS Platform,
         of which 2.0 GB is NVRAM and 14.5 GB is FS metadata cache. The
         remaining amount is used for buffering data moving to/from the
         disk drives and/or network.
    <> - SIM - 9.75 GB memory per Hitachi High performance NAS Platform, of
         which 8.0 GB is "sector" cache used for interfacing with the RAID
         controllers and disk subsystem. This is the "Other Cache" size in
         the Hitachi High performance NAS Platform as noted above.
    <> To meet the "stable storage" requirement, the Hitachi High
       performance NAS Platform writes first to NVRAM internal to the
       Hitachi High performance NAS Platform. Data from NVRAM is then
       written to the USP 1100 cache, which is mirrored and battery backed
       for up to 48 hours, and from the USP 1100 cache data is written to
       disk. In the event of a power failure the USP will flush all data in
       cache to disk before the battery is exhausted.
    <> Server tuning:
    <> - Disable file read-ahead: "fsm read-ahead 0 0"
    <> - Disable shortname generation for CIFS clients: "shortname -g off"
    <> - Server running in "Native Unix" security mode.
    <> - Set metadata cache bias to small files: "fsm cache-bias --small-files"
    <> - Accessed-time management was turned off: "fs-accessed-time set off"
    <> - Jumbo frames were not enabled.
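The Overall Response Time reported at the top of this result is the area under the throughput/response curve divided by the peak throughput. A short Python sketch recomputes it from the table, assuming the curve is extended linearly from the origin to the first data point (an assumption that reproduces the published 1.72 msec figure):

```python
# Recompute the Overall Response Time (ORT) from the throughput/response
# table at the top of this report, using the trapezoidal rule.  The leading
# (0, 0.0) point extends the curve from the origin, an assumption that
# reproduces the published value for this result.
points = [(0, 0.0), (20205, 0.5), (40184, 0.8), (60639, 1.0), (80887, 1.2),
          (101397, 1.5), (121533, 1.8), (142056, 2.2), (162632, 2.9),
          (182599, 4.0), (192915, 5.6)]

area = sum((x1 - x0) * (y0 + y1) / 2            # trapezoid per segment
           for (x0, y0), (x1, y1) in zip(points, points[1:]))
ort = area / points[-1][0]                      # divide by peak throughput
print(round(ort, 2))                            # -> 1.72
```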
===============================================================================
Generated on Tue Jul 31 18:44:23 EDT 2007 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2007 Standard Performance Evaluation Corporation