SPECsfs2008_nfs.v3 Result
================================================================================
Hitachi Data Systems : Hitachi Virtual Storage Platform G1000, File Model 4100,
                       8 Node Cluster
SPECsfs2008_nfs.v3 = 1222089 Ops/Sec (Overall Response Time = 0.75 msec)
================================================================================

Performance
===========
   Throughput    Response
    (ops/sec)      (msec)
   ----------    --------
       122542         0.3
       245273         0.3
       368143         0.3
       491136         0.4
       613918         0.4
       737914         0.5
       860669         0.7
       984360         0.8
      1107794         1.0
      1222089         6.0

================================================================================

Product and Test Information
============================
Tested By          Hitachi Data Systems
Product Name       Hitachi Virtual Storage Platform G1000, File Model 4100,
                   8 Node Cluster
Hardware Available July 2013
Software Available February 2014
Date Tested        March 2014
SFS License Number 276
Licensee Locations Santa Clara, CA, USA

The Hitachi Virtual Storage Platform (VSP), Hitachi Unified Storage (HUS) and
Hitachi NAS (HNAS) Platform family of products provides multiprotocol support
to store and share block, file and object data types. The HNAS 4000 series
delivers best-in-class performance, scalability, clustering with automated
failover, 99.999% availability, non-disruptive upgrades, smart primary
deduplication, intelligent file tiering, automated migration, 256TB file
system pools, and a single namespace up to the maximum usable capacity, and is
integrated with the Hitachi Command suite of management and data protection
software.

The Hitachi NAS file module uses a hardware-accelerated "Hybrid Core"
architecture that accelerates network and file protocol processing to achieve
the industry's best performance in terms of both throughput and operations per
second. The file module also uses an object-based file system (Silicon File
System) and virtualization to deliver the highest scalability in the market,
enabling organizations to consolidate file servers and other NAS devices into
fewer nodes and storage arrays for simplified management, improved space
efficiency and lower energy consumption. The 4100 cluster can scale up to 8
nodes and 16PB of usable data storage, and supports 10GbE LAN access and 8Gbps
FC storage connectivity.

The Hitachi Virtual Storage Platform (VSP) G1000 is the industry's first
enterprise-class unified storage platform, which delivers flash-optimized
Storage Virtualization Operating System (SVOS) software and patented Hitachi
Accelerated Flash arrays to provide a high return on investment, security and
quality of service to applications. Utilizing external storage virtualization
and automated tiering, the VSP G1000 centralizes storage management of
multiple storage tiers for the highest economic value. The VSP G1000 all-flash
storage system, when combined with the Hitachi NAS file module, delivers
industry-leading performance and scalability, while dramatically lowering
response times.
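A note on the headline figure: SPECsfs2008 derives the Overall Response Time
from the area under the response-time vs. throughput curve in the Performance
table above, divided by the peak throughput. As a minimal sketch (assuming
simple trapezoidal integration with a flat curve from zero load, which
approximates but is not necessarily identical to SPEC's exact method), the
published 0.75 msec can be roughly reproduced in Python:

    # (throughput ops/sec, response msec) pairs from the Performance table
    points = [
        (122542, 0.3), (245273, 0.3), (368143, 0.3), (491136, 0.4),
        (613918, 0.4), (737914, 0.5), (860669, 0.7), (984360, 0.8),
        (1107794, 1.0), (1222089, 6.0),
    ]

    # Trapezoidal area under the curve; assume flat response from zero load.
    area = 0.0
    prev_x, prev_y = 0, points[0][1]
    for x, y in points:
        area += (x - prev_x) * (prev_y + y) / 2.0
        prev_x, prev_y = x, y

    print("Overall response time ~= %.2f msec" % (area / points[-1][0]))

This prints approximately 0.77 msec, in line with the published 0.75 msec; the
small difference comes from the integration assumptions noted above.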
Configuration Bill of Materials
===============================
Item
 No  Qty Type          Vendor  Model/Name         Description
---- --- ------------- ------- ------------------ -----------------------------
  1    8 Server        HDS     SX345384.P         Hitachi NAS 4100 Base System
  2    1 Server        HDS     SX345278.P         System Management Unit (SMU)
  3    1 Software      HDS     SX365131.P         Hitachi NAS SW - Software
                                                  Bundle
  4   32 FC            HDS     FTLF8528P3BNV.P    SFP+ 8Gbps FC Interface
  5   48 Network       HDS     FTLX8571D3BCV.P    SFP+ 10GE Interface
  6    1 Storage       HDS     VSP-Solution       Virtual Storage Platform
                                                  G1000
  7    1 Storage       HDS     DKC810I-CBXA.P     Primary Controller Chassis
  8    1 Chassis       HDS     DKC-F810I-CBXB.P   Secondary Controller Chassis
  9   64 Cache         HDS     DKC-F810I-CM16G.P  16GB Cache Memory Module
 10    8 Cache         HDS     DKC-F810I-BMM128.P 128GB Cache Flash Memory
                                                  Module (used only during
                                                  power outage)
 11  128 Drives        HDS     DKC-F810I-1R6FM.P  1.6TB Flash Module Drive
 12    4 Drive Chassis HDS     DKC-F810I-FBX.P    Flash Module Drive Chassis
 13    8 Processor     HDS     DKC-F810I-MP.P     Virtual Storage Director
         Blade                                    (4 pairs)
 14    8 FC Interface  HDS     DKC-F810I-16FC8.P  16-Port 8Gbps FC Host
                                                  Adapter (4 pairs)
 15    8 Disk Adapter  HDS     DKC-F810I-SCA.P    Disk Adapter (4 pairs)
 16    2 Rack          HDS     DKC-F810I-RK42.P   Rack Frame 42U
 17    2 Switch        Brocade Brocade 6520       Brocade 6520 FC switch
 18    2 Switch        Brocade VDX6730            Brocade VDX6730 10GbE switch
                                                  for Cluster Interconnect

Server Software
===============
OS Name and Version 11.3.3434.03
Other Software      None
Filesystem Software SiliconFS 11.3.3434.03

Server Tuning
=============
Name             Value       Description
---------------- ----------- ----------------------------------------------
security-mode    UNIX        Security mode is native UNIX
cifs_auth        off         Disable CIFS security authorization
cache-bias       small-files Set metadata cache bias to small files
fs-accessed-time off         Accessed time management was turned off
shortname        off         Disable short name generation for CIFS clients
read-ahead       0           Disable file read-ahead

Server Tuning Notes
-------------------
None

Disks and Filesystems
=====================
                                                          Number of Usable
Description                                               Disks     Size
--------------------------------------------------------- --------- --------
1.6TB Hitachi Accelerated Flash module drive              128       179.1 TB
250GB SATA Disks. These sixteen drives (two mirrored      16        2.0 TB
drives per node) are used for storing the core operating
system of the file module and management logs. No cache
or data storage.
Total                                                     144       181.1 TB

Number of Filesystems       16
Total Exported Capacity     179.2TB
Filesystem Type             WFS-2
Filesystem Creation Options 4KB filesystem block size
Filesystem Config           Each filesystem was striped across 8 LUNs from a
                            single 7D+1P RAID-5 group consisting of 8 Flash
                            module drives.
Fileset Size                143572.8 GB

The storage configuration consisted of a Hitachi Virtual Storage Platform
(VSP), Model G1000 All Flash storage system (VSP G1000). The storage system
was configured with two controller chassis, four Virtual Storage Director
pairs and 1TB of cache memory. There was a total of 128 1.6TB Hitachi
Accelerated Flash module drives in use to meet the capacity and performance
requirements of the benchmark. There were 128 LDEVs created using RAID-5,
7D+1P. Thirty-two 8Gbps FC ports were in use across four FED pairs on the VSP
G1000. The FC ports were connected to the 4100 nodes via a redundant pair of
Brocade 6520 switches. The 4100 nodes were connected to each Brocade 6520
switch via two 8Gbps FC connections, such that a completely redundant path
exists from each node to the storage. Each Hitachi NAS file module node has
two internal mirrored hard disk drives that are used to store the core
operating software and system logs. These drives are not used for cache space
or for storing data.
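As a quick plausibility check on the figures above (a sketch only; it treats
usable space as data drives times raw drive size, ignoring the formatting
overhead that accounts for the reported 179.1 TB):

    drives, drive_tb = 128, 1.6
    data_drives, parity_drives = 7, 1        # RAID-5, 7D+1P
    group_size = data_drives + parity_drives

    groups = drives // group_size            # 16 RAID groups
    usable_tb = groups * data_drives * drive_tb
    ldevs = 128                              # from the storage description
    luns_per_fs = ldevs // 16                # 16 filesystems -> 8 LUNs each

    print("RAID groups: %d, usable: %.1f TB, LUNs per filesystem: %d"
          % (groups, usable_tb, luns_per_fs))

This prints 16 RAID groups, 179.2 TB usable and 8 LUNs per filesystem,
matching the exported capacity and the "striped across 8 LUNs" filesystem
configuration stated above.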
Network Configuration
=====================
                            Number of
Item No Network Type        Ports Used Notes
------- ------------------- ---------- ------------------------------------
1       10 Gigabit Ethernet 8          Integrated 10GbE Ethernet controller

Network Configuration Notes
---------------------------
One 10GbE network interface from each 4100 node was connected to a Hitachi
Apresia 15000-64XL-PSR switch, which provided network connectivity to the
clients. The interfaces were configured to use Jumbo frames.

Benchmark Network
=================
Each LG has a dual-port Intel X520-SR2 PCIe 10GbE Server Adapter, but each LG
connects via only a single 10GbE connection to the ports on the Hitachi
Apresia 15000-64XL-PSR switch.

Processing Elements
===================
Item No Qty Type Description                  Processing Function
------- --- ---- ---------------------------- --------------------------------
1       8   FPGA Altera Stratix IV EP4SE530   Filesystem
2       24  FPGA Altera Stratix IV EP4SGX360  Network Interface, Storage
                                              Interface, NFS
3       8   CPU  Intel Xeon Quad-Core 3.4GHz  Management (Hitachi NAS file
                 Processor                    module)
4       8   CPU  Intel Xeon Quad-Core 2.33GHz VSP G1000 I/O Management
                 Processor
5       16  ASIC Hitachi Custom ASIC          VSP G1000 data accelerators

Processing Element Notes
------------------------
Each 4100 node has 4 FPGAs that are used for processing functions. The VSP
G1000 storage system is equipped with four pairs of processor blades, and each
processor blade has an Intel Xeon Quad-Core CPU. Each Host Adapter and Disk
Adapter in the VSP G1000 system is equipped with a custom Hitachi ASIC that is
used for data acceleration.

Memory
======
                                    Size in Number of
Description                         GB      Instances Total GB Nonvolatile
----------------------------------- ------- --------- -------- -----------
Server Main Memory                  32      8         256      V
Server Filesystem and Storage Cache 68      8         544      V
Server Battery-backed NVRAM         8       8         64       NV
Cache Memory Module (VSP G1000)     16      64        1024     NV
Grand Total Memory Gigabytes                          1888

Memory Notes
------------
Each 4100 node has 32GB of main memory that is used for the operating system
and in support of the FPGA functions. 68GB of memory is dedicated to
filesystem metadata, sector cache and other purposes. A separate, integrated
battery-backed NVRAM module (8GB) on the filesystem board is used to provide
stable storage for writes that have not yet been written to disk. The VSP
G1000 storage system was configured with 1TB of cache memory.

Stable Storage
==============
The Hitachi NAS file module writes first to the battery-backed (72-hour) NVRAM
internal to the server. The data from NVRAM is then written to the backend
storage system at the earliest opportunity, but always within a few seconds of
arrival in the NVRAM. In an eight-node active-active cluster configuration,
the contents of the NVRAM are synchronously mirrored (in a round-robin
fashion) to ensure that in the event of a one- or two-node failover, any
pending transactions can be completed by the remaining nodes. The data from
the node is then written to the battery-backed backend storage system cache (a
second layer of NVRAM in the entire solution) and is backed up onto the Cache
Flash Memory modules (each 128GB) in the event of a power outage. The Cache
Flash Memory modules in the backend storage system are part of the total
solution, but are used only during a power outage and never as cache space.
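To illustrate the round-robin NVRAM mirroring described above (a sketch only:
the report does not disclose the actual HNAS partner topology or the number of
mirror copies, so the two-successor layout below is an assumption chosen to
survive a two-node failure):

    NODES = 8

    def mirror_targets(node, copies=2):
        # Mirror to the next `copies` nodes, wrapping around the cluster.
        # The actual HNAS assignment is not disclosed; this is illustrative.
        return [(node + i) % NODES for i in range(1, copies + 1)]

    for node in range(NODES):
        print("node %d mirrors its NVRAM to nodes %s"
              % (node, mirror_targets(node)))

With this layout, any pending transactions held in a failed node's NVRAM
remain available on at least one surviving node even when two nodes fail at
once.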
System Under Test Configuration Notes
=====================================
The system under test consisted of eight Hitachi NAS File 4100 nodes connected
to a VSP G1000 All Flash storage system via two Brocade 6520 FC switches. The
nodes were configured in active-active cluster mode and were connected by a
redundant pair of 10GbE connections to the cluster interconnect ports via two
Brocade VDX6730-24 10GbE switches (cluster interconnect switches). The VSP
G1000 All Flash storage system consisted of four pairs of Virtual Storage
Directors and 128 1.6TB Hitachi Accelerated Flash module drives.

All connectivity from the servers to the storage was via two 8Gbps switched FC
fabrics. For these tests, there were 2 zones created on each FC switch. Each
4100 server was connected to each zone via 2 integrated 8Gbps FC ports
(corresponding to 2 FC ports). The VSP G1000 storage system was connected to
both zones (corresponding to 32 FC ports), providing the I/O paths from the
servers to the storage. The System Management Unit (SMU) is part of the total
system solution, but is used for management purposes only and was not active
during the test.

Other System Notes
==================
None

Test Environment Bill of Materials
==================================
Item No Qty Vendor  Model/Name               Description
------- --- ------- ------------------------ -------------------------------
1       16  Hitachi Compute Blade 2000 E55R3 RHEL 6.4 clients, Sixteen
                                             Physical Cores, 64GB Memory
2       1   Hitachi Apresia                  Hitachi Apresia 15000-64XL-PSR
                                             10GbE Switch

Load Generators
===============
LG Type Name                 LG1
BOM Item #                   1
Processor Name               Intel Xeon E5-2690
Processor Speed              2.9 GHz
Number of Processors (chips) 2
Number of Cores/Chip         8
Memory Size                  64 GB
Operating System             RedHat Enterprise Linux 6.4, 2.6.32-358.el6 kernel
Network Type                 1 x Intel X520-SR2 PCIe 10GbE Server Adapter

Load Generator (LG) Configuration
=================================

Benchmark Parameters
--------------------
Network Attached Storage Type NFS V3
Number of Load Generators     16
Number of Processes per LG    448
Biod Max Read Setting         2
Biod Max Write Setting        2
Block Size                    64

Testbed Configuration
---------------------
LG No  LG Type Network Target Filesystems                              Notes
------ ------- ------- ---------------------------------------------- -----
10..25 LG1     1       /w/d0, /w/d1, /w/d2, /w/d3, /w/d4, /w/d5,       None
                       /w/d6, /w/d7, /w/d8, /w/d9, /w/d10, /w/d11,
                       /w/d12, /w/d13, /w/d14, /w/d15

Load Generator Configuration Notes
----------------------------------
All the target filesystems from each node were accessed by all the clients.

Uniform Access Rule Compliance
==============================
All the filesystems from each node were mounted on all the clients. Each load
generating client hosted 448 processes, accessing all 16 target file systems
(/w/d0, /w/d1, /w/d2, /w/d3, /w/d4, /w/d5, /w/d6, /w/d7, /w/d8, /w/d9, /w/d10,
/w/d11, /w/d12, /w/d13, /w/d14, /w/d15).

Other Notes
===========
Other test notes: None

Hitachi Unified Storage, Hitachi Unified Storage VM, Hitachi NAS Platform and
Virtual Storage Platform are registered trademarks of Hitachi Data Systems,
Inc. in the United States, other countries, or both. All other trademarks
belong to their respective owners and should be treated as such.

================================================================================
Generated on Wed Apr 23 01:03:48 2014 by SPECsfs2008 ASCII Formatter
Copyright (C) 1997-2008 Standard Performance Evaluation Corporation