SPECsfs97_R1.v3 Result
===============================================================================
EMC Corp.       : Celerra NS700G Cluster / CLARiiON CX700x2
SPECsfs97_R1.v3 = 71482 Ops/Sec (Overall Response Time = 2.54 msec)
===============================================================================

   Throughput    Response
     ops/sec       msec
        5881        0.9
       11936        1.0
       18030        1.3
       23960        1.6
       29803        1.9
       35921        2.1
       41988        2.5
       48025        2.7
       54120        3.3
       59937        3.6
       66243        4.6
       71482       11.2
===============================================================================

Server Configuration and Availability

Vendor                      EMC Corp.
Hardware Available          February 2004
Software Available          February 2004
Date Tested                 February 2004
SFS License number          47
Licensee Location           Hopkinton, MA

CPU, Memory and Power

Model Name                  Celerra NS700G Cluster / CLARiiON CX700x2
Processor                   3.06GHz Intel Xeon P4
# of Processors             4 (2 per Data Mover)
Primary Cache               Execution Trace Cache 12K uOPs + Data Cache 8KB
Secondary Cache             512KB (unified)
Other Cache                 N/A
UPS                         N/A
Other Hardware              N/A
Memory Size                 8 GB (4 GB per Data Mover)
NVRAM Size                  N/A
NVRAM Type                  N/A
NVRAM Description           N/A

Server Software

OS Name and Version         Dart 5.2
Other Software              Redhat Linux 7.2 on Control Station
File System                 UxFS
NFS version                 3

Server Tuning

Buffer Cache Size           Dynamic
# NFS Processes             4 per Data Mover
Fileset Size                689.3 GB

Network Subsystem

Network Type                Jumbo Gigabit Ethernet
Network Controller Desc.
                            Gigabit Ethernet Controller
Number Networks             1
Number Network Controllers  4 (2 per Data Mover)
Protocol Type               TCP
Switch Type                 Cisco 6509 Gbit Switch (Jumbo)
Bridge Type                 N/A
Hub Type                    N/A
Other Network Hardware      N/A

Disk Subsystem and Filesystems

Number Disk Controllers     8 [2 per Storage Processor (SP), 4 SPs]
Number of Disks             248
Number of Filesystems       2
File System Creation Ops    N/A
File System Config          Each filesystem is striped (8KB element size)
                            across 120 disks (40 LUNs)
Disk Controller             Integrated 2Gb Fibre Channel
# of Controller Type        8 [2 per SP, 4 SPs]
Number of Disks             248 (dual ported)
Disk Type                   Seagate Fibre Channel 146GB 10k-rpm
File Systems on Disks       OS and UxFS Log1 (5), UxFS Log2 (3),
                            Filesystem fs1 (120), Filesystem fs2 (120)
Special Config Notes        4GB cache per SP on storage array.
                            3072 MB mirrored write cache per SP.
                            See notes below.

Load Generator (LG) Configuration

Number of Load Generators   22
Number of Processes per LG  32
Biod Max Read Setting       5
Biod Max Write Setting      5

LG Type                     LG1
LG Model                    Sun Sparc 220R
Number and Type Processors  2 x 450 MHz Sparc
Memory Size                 512 MB
Operating System            Solaris 2.8
Compiler                    Sun Forte 6.2
Compiler Options            None
Network Type                Intraserver Gbe

LG Type                     LG2
LG Model                    Sun Sparc 420R
Number and Type Processors  2 x 450 MHz Sparc
Memory Size                 1 GB
Operating System            Solaris 2.8
Compiler                    Sun Forte 6.2
Compiler Options            None
Network Type                Intraserver Gbe

LG Type                     LG3
LG Model                    Sun Sparc 220R
Number and Type Processors  2 x 450 MHz Sparc
Memory Size                 1 GB
Operating System            Solaris 2.8
Compiler                    Sun Forte 6.2
Compiler Options            None
Network Type                Intraserver Gbe

Testbed Configuration

LG #    LG Type  Network  Target File Systems    Notes
----    -------  -------  -------------------    -----
1..12   LG1      1        /fs1,/fs2../fs1,/fs2   N/A
13..21  LG2      1        /fs1,/fs2../fs1,/fs2   N/A
22      LG3      1        /fs1,/fs2../fs1,/fs2   N/A
===============================================================================

Notes and Tuning

<> Server tuning:
<>   param ufs inoBlkHashSize=170669 (inode block hash size)
<>
<>   param ufs inoHashTableSize=1218761 (inode hash table size)
<>   param ufs updateAccTime=0 (disable access-time updates)
<>   param nfs withoutCollector=1 (enable NFS-to-CPU thread affinity)
<>   param file prefetch=0 (disable Dart read prefetch)
<>   param mkfsArgs dirType=DIR_COMPAT (compatibility-mode directory style)
<>   param kernel maxStrToBeProc=16 (no. of network streams to process at once)
<>   param kernel outerLoop=8 (8 consecutive iterations of network packet processing)
<>   param kernel heapReserve=90000 (reserve memory frames for the heap)
<>   file initialize nodes=1000000 dnlc=1656000 (no. of inodes, no. of directory
<>     name lookup cache entries)
<>   nfs start openfiles=980462 nfsd=4 (no. of open files and 4 NFS daemons)
<>   param ufs nFlushCyl=32 (no. of UxFS cylinder group blocks flush threads)
<>   param ufs nFlushDir=32 (no. of UxFS directory and indirect blocks flush threads)
<>   param ufs nFlushIno=32 (no. of UxFS inode blocks flush threads)
<>   camconfig nexus depth=16 order=0x91 limit=10 (increase the I/O queue depth to 16)
<>   param fcTach per_target_q_length=1024 (number of exchange control blocks per target)
<>   param fcTach device_q_length=1024 (number of exchange control blocks per device)
<>
<> Storage array notes:
<> Each of the two CX700 storage arrays was configured into 40
<> RAID-0 groups of 3 spindles (120*2=240 total disk spindles).
<> The stripe size for all RAID-0 LUNs was 8KB and each LUN was 30GB.
<> Each filesystem is built on a volume made by striping across all 40
<> LUNs on a CX700 storage array with an 8KB stripe size.
<> Two Fibre Channel ports from each of the two Storage Processors on
<> each of the two CX700 storage arrays (SPs labeled A, B, C & D, ports
<> labeled 1 & 2) were connected to a Brocade 3900 Fibre Channel
<> switch. Two ports from each of two Data Movers (labeled X & Y, ports
<> labeled 3 & 4) were also connected to the same switch.
<> Four zones, one for each DM port, each connecting that port to all
<> 8 SP ports, were created as follows:
<>   Zone1: X3, A1,A2,B1,B2,C1,C2,D1,D2
<>   Zone2: Y3, A1,A2,B1,B2,C1,C2,D1,D2
<>   Zone3: X4, A1,A2,B1,B2,C1,C2,D1,D2
<>   Zone4: Y4, A1,A2,B1,B2,C1,C2,D1,D2
<> The default size of the UxFS log (64MB) was used for both logs. One of
<> the UxFS logs and the OS shared the same 4+1 RAID-5 group on one of the
<> CX700 storage arrays. The second UxFS log was configured on one RAID-0
<> group of 3 spindles on the other CX700 storage array.
<> The storage array has dual storage processor units that work as an
<> active-active failover pair. The mirrored write cache is backed up by a
<> battery unit capable of saving the write cache to disk in the event of
<> a power failure. In the event of a storage processor failure, the
<> surviving storage processor unit is capable of saving all state that
<> was managed by the failed one (and vice versa), even with a
<> simultaneous power failure. When one of the storage processors or
<> battery units is off-line, the system turns off the write cache and
<> writes directly to disk before acknowledging any write operations.
<> The battery can support the retention of data for 2 minutes, which is
<> sufficient to write all necessary data twice: storage processor A could
<> write 99% of its memory to disk and then fail, and storage processor B
<> would still have enough battery to store its copy of A's data as well
<> as its own.
===============================================================================

Generated on Tue Mar 2 16:41:32 EST 2004 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2004 Standard Performance Evaluation Corporation
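The headline Overall Response Time can be reconstructed from the throughput/response table above. This is a sketch, under the assumption that the reported figure is the area under the response-time-versus-throughput curve (trapezoidal rule, with the curve taken to start at the origin) divided by the peak throughput; the point list is copied verbatim from the table.

```python
# Reconstruct the reported Overall Response Time from the result table.
# Assumption: overall response time = area under the response-vs-throughput
# curve (trapezoidal rule, curve taken to start at the origin) divided by
# the peak throughput.
points = [
    (5881, 0.9), (11936, 1.0), (18030, 1.3), (23960, 1.6),
    (29803, 1.9), (35921, 2.1), (41988, 2.5), (48025, 2.7),
    (54120, 3.3), (59937, 3.6), (66243, 4.6), (71482, 11.2),
]

def overall_response_time(pts):
    pts = [(0, 0.0)] + pts                        # curve assumed to start at origin
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0       # trapezoidal rule
    return area / pts[-1][0]                      # normalize by peak ops/sec

print(round(overall_response_time(points), 2))    # -> 2.54
```

With these assumptions the computation reproduces the reported 2.54 msec, which suggests the headline figure is dominated by the high-throughput tail where response time climbs to 11.2 msec.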
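To make the two-level striping in the storage notes concrete, here is a sketch mapping a byte offset within a filesystem volume to a LUN and a spindle within that LUN. It assumes the 8KB volume stripe (across 40 LUNs) and the 8KB RAID-0 element (across 3 spindles per LUN) each distribute consecutive units round-robin; the function and constant names are illustrative, not EMC's.

```python
# Sketch: where a byte offset lands under the two-level 8KB striping
# described in the storage notes. Assumption: both layers are plain
# round-robin striping; names are illustrative.
STRIPE = 8 * 1024   # 8KB volume stripe size, also the RAID-0 element size
LUNS = 40           # LUNs striped into one filesystem volume
SPINDLES = 3        # spindles per RAID-0 LUN

def locate(offset):
    unit = offset // STRIPE           # which 8KB unit of the volume
    lun = unit % LUNS                 # round-robin across the 40 LUNs
    lun_unit = unit // LUNS           # which 8KB unit within that LUN
    spindle = lun_unit % SPINDLES     # round-robin across the 3 spindles
    return lun, spindle

print(locate(0))            # -> (0, 0)
print(locate(8192))         # -> (1, 0)  next 8KB unit hits the next LUN
print(locate(40 * 8192))    # -> (0, 1)  after all 40 LUNs, next spindle
```

Under this model a sequential read touches all 120 spindles of one array every 960KB, which is consistent with the small 8KB element size being chosen to spread the SFS workload's small operations widely.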