SPECsfs97_R1.v3 Result
===============================================================================
BlueArc Corporation : Titan 3200
SPECsfs97_R1.v3 = 194909 Ops/Sec (Overall Response Time = 2.43 msec)
===============================================================================

   Throughput   Response
      ops/sec       msec
        19067        0.6
        38574        0.9
        57677        1.3
        77230        1.6
        96907        1.9
       116126        2.3
       135465        2.8
       155382        4.0
       174307        4.9
       194909        7.6

===============================================================================

Server Configuration and Availability
   Vendor                        BlueArc Corporation
   Hardware Available            March 2008
   Software Available            May 2008
   Date Tested                   February 2008
   SFS License number            000063
   Licensee Location             San Jose, CA

CPU, Memory and Power
   Model Name                    Titan 3200
   Processor                     AMD Opteron 248 2.2GHz + FPGAs
   # of Processors               1 core, 1 chip, 1 core/chip + 13 FPGAs
   Primary Cache                 128KB (I+D) on chip
   Secondary Cache               1MB (I+D) on chip
   Other Cache                   16 GB
   UPS                           N/A
   Other Hardware                N/A
   Memory Size                   82.6 GB (incl. other_cache_size, nvram_size
                                 and RAID controller cache)
   NVRAM Size                    4 GB
   NVRAM Type                    DIMM
   NVRAM Description             72 hour battery backed

Server Software
   OS Name and Version           SU 5.2
   Other Software                N/A
   File System                   BlueArc Silicon File System with Cluster
                                 Name Space (CNS)
   NFS version                   3

Server Tuning
   Buffer Cache Size             N/A
   # NFS Processes               N/A
   Fileset Size                  1847.0 GB

Network Subsystem
   Network Type                  Integrated
   Network Controller Desc.      2-port, 10Gbps Ethernet
   Number Networks               1 (N0)
   Number Network Controllers    1
   Protocol Type                 TCP
   Switch Type                   1 Force10 S2410 10GigE
   Bridge Type                   N/A
   Hub Type                      N/A
   Other Network Hardware        N/A

Disk Subsystem and Filesystems
   Number Disk Controllers       1
   Number of Disks               384
   Number of Filesystems         1 (F1)
   File System Creation Ops      4KB block size
   File System Config            6 individual file system volumes aggregated
                                 using CNS to present a single, unified
                                 namespace
   Disk Controller               Integrated 8 port 4Gbps FC
   # of Controller Type          1
   Number of Disks               384
   Disk Type                     ST373455FC
   File Systems on Disks         f1-f6
   Special Config Notes          NS1 = /r (see notes)

Load Generator (LG) Configuration
   Number of Load Generators     6
   Number of Processes per LG    246
   Biod Max Read Setting         2
   Biod Max Write Setting        2

   LG Type                       LG0
   LG Model                      White Box, Tyan S2915 motherboard
   Number and Type Processors    2 x dual-core Opteron, 2.6GHz
   Memory Size                   8 GB
   Operating System              Solaris 10 u4
   Compiler                      SFS97_R1 precompiled binaries
   Compiler Options              N/A
   Network Type                  Myricom 10GigE

Testbed Configuration
   LG #   LG Type   Network   Target File Systems                  Notes
   ----   -------   -------   -------------------                  -----
   1      LG0       0         /r/f1, /r/f2, /r/f3, /r/f4, /r/f5,
                              /r/f6, repeated so that each of the
                              246 processes targets the next file
                              system in the cycle (LGs 2-6 were
                              configured identically; see the
                              Uniform Access Rule notes below)
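The Target File Systems column above is condensed: each load generator runs
246 processes, and each process is assigned the next path in the /r/f1 ...
/r/f6 cycle. A minimal sketch in Python (process and file system counts taken
from the tables above) that expands one LG's full target list:

   # Expand one load generator's target list in round-robin order.
   filesystems = ["/r/f%d" % i for i in range(1, 7)]   # /r/f1 .. /r/f6
   processes_per_lg = 246                              # from the LG configuration
   targets = [filesystems[p % 6] for p in range(processes_per_lg)]
   # 246 processes / 6 file systems = 41 full passes through the cycle
   print(len(targets), targets[0], targets[-1])        # 246 /r/f1 /r/f6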
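The Overall Response Time of 2.43 msec in the header can be approximated from
the ten data points in the throughput/response table: SPECsfs97 derives it
from the area under the response-time-versus-load curve divided by the peak
throughput. A minimal sketch in Python, assuming trapezoidal integration with
the curve anchored at the origin (the official tool's boundary handling may
differ slightly):

   # Approximate the Overall Response Time from the table above.
   points = [(19067, 0.6), (38574, 0.9), (57677, 1.3), (77230, 1.6),
             (96907, 1.9), (116126, 2.3), (135465, 2.8), (155382, 4.0),
             (174307, 4.9), (194909, 7.6)]
   curve = [(0, 0.0)] + points           # assumed origin anchor
   area = sum((x2 - x1) * (y1 + y2) / 2  # trapezoidal rule
              for (x1, y1), (x2, y2) in zip(curve, curve[1:]))
   print(round(area / points[-1][0], 2))  # 2.43, matching the header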
===============================================================================

Notes and Tuning
   <> The tested system was a BlueArc Titan 3200 server connected via a
      Fibre Channel fabric to six storage arrays. Each array consisted of
      one BlueArc RC16TB (LSI 3992) Dual RAID controller with 64 FC
      drives. Each Dual RAID controller set has 2GB of cache memory, for
      a total of 12GB of cache memory across all storage controllers.
      RAID controller cache memory is included in the 82.6GB memory_size
      listed above.
   <> The Titan server had the high memory option installed and all
      standard protection services enabled, including RAID, NVRAM
      logging, and media error scrubbing.
   <> The Titan 3200 uses 13 Field Programmable Gate Arrays (FPGAs) to
      accelerate processing of network traffic and file system I/O.
   <> Each storage array consisted of one BlueArc RC16TB (LSI 3992) Dual
      RAID controller with 64 Seagate ST373455FC drives (73GB, 15,000
      RPM, 4Gb FC).
   <> The disk and file system configuration was 32 "1+1" RAID-1 LUs per
      RAID controller pair. Each RAID controller pair represented one
      Storage Pool, created by striping in parallel across the 32 LUs. A
      single file system was created within each Storage Pool. The six
      file systems were aggregated into a single namespace, "/r", using
      BlueArc's Cluster Name Space (CNS) feature.
   <> The storage arrays were connected to the Titan server through
      redundant Brocade 200E FC switches, with dual redundant connections
      to each array.
   <> The Titan 3200 server was connected to the 6 load generators via
      10GigE (end to end) through a single Force10 S2410 switch.
   <> For Uniform Access Rule compliance, all LGs accessed all cluster
      namespace objects uniformly across all interfaces, as follows:
      - There is 1 network node (i.e., the Titan 3200 server): T0
      - There are 6 physical target file systems (/r/f1 ... /r/f6)
        aggregated and presented as a single cluster name space with
        virtual root "/r" (= NS1), accessible to all clients.
      - All file systems are collectively owned by a single Virtual
        Server with a single IP address.
      - Each load generator (1-6) cycled through the target file systems
        /r/f1, /r/f2, /r/f3, etc. in sequence.
   <> The Titan 3200 contains four modules that together perform all
      storage operations: NIM3 = Network Interface Module (TCP/IP and UDP
      handling); FSX1 and FSB3 = File System Modules (NFS and CIFS
      protocol handling, plus the cluster interconnect on FSB3; the
      interconnect was present but not used for this test run); and
      SIM3 = Storage Interface Module (disk controller / FC interface).
   <> The Titan 3200 with the himem option has 70.6 gigabytes (GB) of
      memory, cache and NVRAM distributed among the Titan modules as
      follows:
      - NIM3 - 3.6 GB memory per Titan
      - FSX1 - 4.0 GB memory per Titan
      - FSB3 - 43.5 GB memory per Titan, of which 4.0 GB is NVRAM and
        24.0 GB is FS metadata cache. The remainder is used for buffering
        data moving to/from the disk drives and/or network.
      - SIM3 - 19.5 GB memory per Titan, of which 16.0 GB is "sector"
        cache used for the interface with the RAID controllers and disk
        subsystem. This is the "Other Cache" size noted above.
   <> To meet the "stable storage" requirement, the Titan server writes
      first to battery-backed (72 hours) NVRAM internal to the Titan.
      Data from NVRAM is then written to the drive arrays as convenient,
      but always within a few seconds of arrival in NVRAM.
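   <> As a cross-check, the memory and disk figures quoted in these notes
      are internally consistent. A minimal sketch in Python, using only
      values that appear above:

         # Per-module memory (GB) from the himem breakdown above.
         module_mem = {"NIM3": 3.6, "FSX1": 4.0, "FSB3": 43.5, "SIM3": 19.5}
         titan_total = round(sum(module_mem.values()), 1)
         print(titan_total)                    # 70.6 GB within the Titan itself
         raid_cache = 6 * 2                    # six dual-controller sets, 2 GB each
         print(round(titan_total + raid_cache, 1))  # 82.6 GB reported Memory Size
         print(6 * 64)                         # 384 disks across six arrays
         print(32 * 2)                         # 64 drives per pair: 32 "1+1" RAID-1 LUs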
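   <> The NVRAM write path itself is proprietary to BlueArc, but the
      stable-storage contract it satisfies can be illustrated generically:
      acknowledge a write only after the data is durable in a log, and
      update the backing store afterwards. A minimal sketch in Python
      (the file name and payload are illustrative, not BlueArc's
      implementation):

         import os

         LOG = "nvram.log"   # hypothetical stand-in for battery-backed NVRAM

         def stable_write(payload: bytes) -> None:
             # Append to the log and force it to stable media before
             # acknowledging, mirroring the "write to NVRAM first" rule.
             fd = os.open(LOG, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
             try:
                 os.write(fd, payload)
                 os.fsync(fd)     # durable before the client sees success
             finally:
                 os.close(fd)
             # The drive arrays would then be updated lazily, "within a
             # few seconds of arrival" as the note above describes.

         stable_write(b"example NFSv3 WRITE data\n")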
   <> Server tuning:
      - Disable file read-ahead: "read ahead -- disable"
      - Disable shortname generation for CIFS clients: "shortname -g off"
      - Server running in "Native Unix" security mode: "security-mode set unix"
      - Set metadata cache bias to small files: "cache-bias --small-files"
      - Accessed-time management was turned off: "fs-accessed-time set off"
      - Jumbo frames were enabled.

===============================================================================
Generated on Wed May 14 17:14:59 EDT 2008 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2008 Standard Performance Evaluation Corporation