SPECsfs97_R1.v3 Result
===============================================================================
Panasas, Inc.   : Panasas ActiveScale storage cluster (10 DirectorBlades)
SPECsfs97_R1.v3 = 50907 Ops/Sec (Overall Response Time = 1.67 msec)
===============================================================================

   Throughput    Response
      ops/sec        msec
         5018         0.5
        10161         0.7
        15166         0.8
        20182         0.9
        25374         1.1
        30451         1.2
        35499         1.4
        40597         2.3
        45741         4.3
        50907         6.8
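The 1.67 msec Overall Response Time can be reproduced from the table above.
The short sketch below is illustrative only; it assumes the SPECsfs97
convention that ORT is the area under the response-time-versus-throughput
curve (trapezoid rule, with the curve anchored at the origin), divided by
the peak throughput. The data points are exactly those of the table.

      # ort.py - recompute the Overall Response Time from the table above.
      points = [  # (throughput in ops/sec, response in msec)
          (5018, 0.5), (10161, 0.7), (15166, 0.8), (20182, 0.9),
          (25374, 1.1), (30451, 1.2), (35499, 1.4), (40597, 2.3),
          (45741, 4.3), (50907, 6.8),
      ]

      def overall_response_time(pts):
          pts = [(0, 0.0)] + sorted(pts)        # anchor curve at the origin
          area = sum((x1 - x0) * (y0 + y1) / 2  # trapezoid rule
                     for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
          return area / pts[-1][0]              # normalize by peak throughput

      print(f"ORT = {overall_response_time(points):.2f} msec")
      # -> ORT = 1.67 msec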
===============================================================================

Server Configuration and Availability
        Vendor                      Panasas, Inc.
        Hardware Available          October 2003
        Software Available          January 2004
        Date Tested                 December 2003
        SFS License number          250
        Licensee Location           Fremont, CA

CPU, Memory and Power
        Model Name                  Panasas ActiveScale storage cluster
                                    (10 DirectorBlades)
        Processor                   2.4 GHz Intel LV Xeon
        # of Processors             10 (1 per DirectorBlade)
        Primary Cache               12K uops I + 8KB D on-chip
        Secondary Cache             512KB on-chip
        Other Cache                 N/A
        UPS                         Integrated into Panasas Shelf
        Other Hardware              N/A
        Memory Size                 40 GB (4 GB per DirectorBlade)
        NVRAM Size                  All memory is UPS protected
        NVRAM Type                  UPS-protected DRAM
        NVRAM Description           Cache commit with UPS protection, flushed
                                    to local disk on failure with recovery
                                    software (see notes)

Server Software
        OS Name and Version         Panasas ActiveScale V 1.2
        Other Software              N/A
        File System                 Panasas ActiveScale File System
        NFS version                 3

Server Tuning
        Buffer Cache Size           Dynamic
        # NFS Processes             500 (50 per DirectorBlade)
        Fileset Size                483.5 GB

Network Subsystem
        Network Type                Gigabit Ethernet (standard 1500 byte
                                    frames)
        Network Controller Desc.    Integrated Broadcom NetXtreme BCM5703
        Number Networks             1 (N1)
        Number Network Controllers  10 (1 per DirectorBlade)
        Protocol Type               TCP
        Switch Type                 Extreme BlackDiamond 6816
        Bridge Type                 N/A
        Hub Type                    N/A
        Other Network Hardware      N/A

Disk Subsystem and Filesystems
        Number Disk Controllers     10 (each GE NIC is also an iSCSI HBA)
        Number of Disks             90 (45 OSD StorageBlades - 2 disks per
                                    blade)
        Number of Filesystems       1 namespace (see notes)
        File System Creation Ops    default
        File System Config          1 Bladeset, RAID 5 per file, RAID width
                                    of 10 blades (see notes)
        Disk Controller             Integrated Broadcom NetXtreme BCM5703
                                    (as iSCSI HBA and host NIC)
        # of Controller Type        10
        Number of Disks             90 (45 OSD StorageBlades - 2 disks per
                                    blade)
        Disk Type                   100300-001 240 GB StorageBlade (two
                                    120GB, 7200 RPM, S-ATA disks per blade)
        File Systems on Disks       F1
        Special Config Notes        see notes

Load Generator (LG) Configuration
        Number of Load Generators   20
        Number of Processes per LG  20
        Biod Max Read Setting       2
        Biod Max Write Setting      2
        LG Type                     LG1
        LG Model                    ASA SuperMicro SuperServer
        Number and Type Processors  2.4 GHz Intel Pentium 4 Xeon
        Memory Size                 1024 MB
        Operating System            Red Hat Linux 7.3, kernel 2.4.21
        Compiler                    gcc 2.96
        Compiler Options            -O -DNO_T_TYPES -DUSE_INTTYPES
        Network Type                Intel Pro/1000 Gigabit Ethernet,
                                    MTU = 1500

Testbed Configuration
   LG #  LG Type  Network  Target File Systems                        Notes
   ----  -------  -------  -------------------                        -----
    1    LG1      N1       F1 (db1:/V1, db2:/V1, .. db10:/V1,
                               db1:/V1, db2:/V1, .. db10:/V1)         N/A
    2    LG1      N1       F1 (db1:/V2, db2:/V2, .. db10:/V2,
                               db1:/V2, db2:/V2, .. db10:/V2)         N/A
    3    LG1      N1       F1 (db1:/V3, db2:/V3, .. db10:/V3,
                               db1:/V3, db2:/V3, .. db10:/V3)         N/A
    4    LG1      N1       F1 (db1:/V4, db2:/V4, .. db10:/V4,
                               db1:/V4, db2:/V4, .. db10:/V4)         N/A
    5    LG1      N1       F1 (db1:/V5, db2:/V5, .. db10:/V5,
                               db1:/V5, db2:/V5, .. db10:/V5)         N/A
    6    LG1      N1       F1 (db1:/V6, db2:/V6, .. db10:/V6,
                               db1:/V6, db2:/V6, .. db10:/V6)         N/A
    7    LG1      N1       F1 (db1:/V7, db2:/V7, .. db10:/V7,
                               db1:/V7, db2:/V7, .. db10:/V7)         N/A
    8    LG1      N1       F1 (db1:/V8, db2:/V8, .. db10:/V8,
                               db1:/V8, db2:/V8, .. db10:/V8)         N/A
    9    LG1      N1       F1 (db1:/V9, db2:/V9, .. db10:/V9,
                               db1:/V9, db2:/V9, .. db10:/V9)         N/A
   10    LG1      N1       F1 (db1:/V10, db2:/V10, .. db10:/V10,
                               db1:/V10, db2:/V10, .. db10:/V10)      N/A
   11    LG1      N1       F1 (db1:/V1, db2:/V1, .. db10:/V1,
                               db1:/V1, db2:/V1, .. db10:/V1)         N/A
   12    LG1      N1       F1 (db1:/V2, db2:/V2, .. db10:/V2,
                               db1:/V2, db2:/V2, .. db10:/V2)         N/A
   13    LG1      N1       F1 (db1:/V3, db2:/V3, .. db10:/V3,
                               db1:/V3, db2:/V3, .. db10:/V3)         N/A
   14    LG1      N1       F1 (db1:/V4, db2:/V4, .. db10:/V4,
                               db1:/V4, db2:/V4, .. db10:/V4)         N/A
   15    LG1      N1       F1 (db1:/V5, db2:/V5, .. db10:/V5,
                               db1:/V5, db2:/V5, .. db10:/V5)         N/A
   16    LG1      N1       F1 (db1:/V6, db2:/V6, .. db10:/V6,
                               db1:/V6, db2:/V6, .. db10:/V6)         N/A
   17    LG1      N1       F1 (db1:/V7, db2:/V7, .. db10:/V7,
                               db1:/V7, db2:/V7, .. db10:/V7)         N/A
   18    LG1      N1       F1 (db1:/V8, db2:/V8, .. db10:/V8,
                               db1:/V8, db2:/V8, .. db10:/V8)         N/A
   19    LG1      N1       F1 (db1:/V9, db2:/V9, .. db10:/V9,
                               db1:/V9, db2:/V9, .. db10:/V9)         N/A
   20    LG1      N1       F1 (db1:/V10, db2:/V10, .. db10:/V10,
                               db1:/V10, db2:/V10, .. db10:/V10)      N/A
===============================================================================
Notes and Tuning
<> Configuration:
<> The Panasas ActiveScale storage cluster under test comprised 45
<>   StorageBlades and 10 DirectorBlades joined over a Gigabit Ethernet
<>   network.
<> Each DirectorBlade contained one processor, 4 GB of memory, a Gigabit
<>   Ethernet port, and a local disk.
<> Each DirectorBlade provided metadata management for the storage cluster
<>   and NFS access to the Panasas ActiveScale filesystem (PanFS).
<>   DirectorBlades store filesystem data and metadata on StorageBlades
<>   using the Object-based Storage Device (OSD) protocol, transported over
<>   iSCSI over Gigabit Ethernet.
<> Each StorageBlade contained one processor, 512 MB of memory, a Gigabit
<>   Ethernet port, and two local disks (RAID 0) that stored filesystem
<>   objects, and ran the OSD protocol module over Gigabit Ethernet. In
<>   total, across the 45 StorageBlades, the storage subsystem contained
<>   45 processors, 22.5 GB of memory, and 45 Gigabit Ethernet ports.
<> All load generators, DirectorBlades, and StorageBlades were
<>   interconnected via an Extreme BlackDiamond 6816 with Gigabit Ethernet
<>   and standard 1500 byte frames.
<> Each rack-mounted Panasas Shelf contained 9 StorageBlades, 2
<>   DirectorBlades, a UPS (redundant power supplies and a battery), and a
<>   Gigabit Ethernet switch card that linked the 11 blade slots to one
<>   Gigabit Ethernet uplink.
<> The cluster was configured as a single Bladeset grouping all 45
<>   StorageBlades, providing the ability to recover from any single
<>   StorageBlade failure in the Bladeset. Each file is split into objects
<>   (at most 10, one object per StorageBlade), and RAID 5 is computed over
<>   the objects that comprise the file (see the sketch following the UAR
<>   notes below).
<> Each virtual volume stores objects on any StorageBlade in the Bladeset
<>   as needed.
<> The cluster was configured with 10 virtual volumes. Each virtual volume
<>   was a subdirectory of the single namespace, mapped under the system
<>   root (/V1, /V2 .. /V10).
<> Each DirectorBlade could serve any of the virtual volumes over NFS and
<>   was the owner of one virtual volume.
<>
<> Stable Storage:
<> All memory in the DirectorBlades was used by the system for
<>   general-purpose memory and a dynamically-sized data cache.
<> Each Panasas Shelf had an integral UPS that provided sufficient power,
<>   in the event of AC power loss, to flush the DirectorBlade and
<>   StorageBlade caches to their local disks. Recovery software in the
<>   DirectorBlade is able to recover all committed data and metadata
<>   operations when power is restored.
<>
<> Uniform Access Requirements (UAR):
<> Each client C[i] uniformly mounted virtual volume V[j]
<>   (j = i mod number_of_volumes) from all DirectorBlades, with 2
<>   processes per mount point (see the sketch below).
<> This mounting pattern assured that all DirectorBlades and all
<>   StorageBlades observed uniform load from all clients, consistent with
<>   the UAR rules applied to prior multiple-node SPECsfs97_R1 reports.
<>   However, a higher standard has been set for single-namespace clusters:
<>   because any node in a single-namespace cluster can serve data and
<>   metadata owned by, or attached only to, any other server node, UAR for
<>   these clusters requires that the fraction of processes accessing data
<>   owned by or attached only to the local node (that is, where the
<>   serving node and the owning node are the same) be the reciprocal of
<>   the number of server nodes.
<> This mounting pattern is consistent with the single-namespace UAR
<>   requirement. Mounting each virtual volume through all DirectorBlades
<>   additionally assured uniform internal cluster load: since each
<>   DirectorBlade owned and managed one virtual volume, 1/10th of the
<>   metadata communication was local and 9/10ths was spread uniformly
<>   across the other DirectorBlades. Moreover, because all DirectorBlades
<>   in an ActiveScale storage cluster have direct connectivity to all
<>   StorageBlades, all data traffic is non-local and spread uniformly over
<>   all StorageBlades.
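<> The mounting pattern and its locality fraction can be illustrated with
<>   a short Python sketch. It is illustrative only and assumes, per the
<>   notes above, that DirectorBlade db[d] owns virtual volume V[d], so a
<>   mount db[d]:/V[j] is "local" exactly when d equals j:

      # uar_check.py - reproduce the mount pattern of the Testbed
      # Configuration table and check the locality fraction.
      NUM_DIRECTORS = 10   # DirectorBlades db1..db10
      NUM_VOLUMES = 10     # virtual volumes /V1 .. /V10
      NUM_CLIENTS = 20     # load generators, one mount point per process

      def mounts_for_client(i):
          """Client C[i] mounts V[j] (j = i mod NUM_VOLUMES, 1-based)
          from every DirectorBlade, twice: 2 processes per mount point."""
          j = ((i - 1) % NUM_VOLUMES) + 1
          once = [(d, j) for d in range(1, NUM_DIRECTORS + 1)]
          return once + once   # (server, volume) pairs, 20 per client

      mounts = [m for i in range(1, NUM_CLIENTS + 1)
                  for m in mounts_for_client(i)]
      local = sum(1 for server, volume in mounts if server == volume)
      print(f"{len(mounts)} processes, "
            f"local fraction = {local / len(mounts):.2f}")
      # -> 400 processes, local fraction = 0.10 (reciprocal of 10 nodes)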
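<> Returning to the per-file RAID 5 layout described in the Configuration
<>   notes: the sketch below illustrates striping one file into RAID-width
<>   component objects with a rotating parity unit. The 64 KB stripe unit
<>   and the parity placement are assumptions made for illustration, not
<>   the published PanFS on-disk layout:

      # raid5_per_file.py - RAID 5 computed per file over component objects.
      STRIPE_UNIT = 64 * 1024   # assumed stripe-unit size
      RAID_WIDTH = 10           # objects per file, one per StorageBlade

      def xor_parity(units):
          """Parity unit: byte-wise XOR of one stripe's data units."""
          parity = bytearray(STRIPE_UNIT)
          for unit in units:
              for k, byte in enumerate(unit):
                  parity[k] ^= byte
          return bytes(parity)

      def stripe_file(data: bytes):
          """Split a file into RAID_WIDTH component objects: each stripe
          holds RAID_WIDTH - 1 data units plus one parity unit, with the
          parity rotating so no single object absorbs every parity write."""
          objects = [bytearray() for _ in range(RAID_WIDTH)]
          units = [data[off:off + STRIPE_UNIT].ljust(STRIPE_UNIT, b"\0")
                   for off in range(0, len(data), STRIPE_UNIT)]
          for s, base in enumerate(range(0, len(units), RAID_WIDTH - 1)):
              stripe = units[base:base + RAID_WIDTH - 1]
              parity_obj = s % RAID_WIDTH            # rotating parity
              data_objs = [o for o in range(RAID_WIDTH) if o != parity_obj]
              for obj, unit in zip(data_objs, stripe):
                  objects[obj].extend(unit)
              objects[parity_obj].extend(xor_parity(stripe))
          return objects   # any single object loss is recoverable by XOR

      # A 1 MB file spreads over all 10 objects (16 data + 2 parity units):
      print([len(o) for o in stripe_file(bytes(1024 * 1024))])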
<>
<> Panasas, ActiveScale storage cluster, PanFS, DirectorBlade, and
<>   StorageBlade are trademarks of Panasas, Inc.
===============================================================================
Generated on Wed Jan 7 13:28:27 EST 2004 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2002 Standard Performance Evaluation Corporation