SPECsfs2008_nfs.v3 Result

Hitachi Data Systems : Hitachi Unified Storage File, Model 4040, Dual Node Cluster
SPECsfs2008_nfs.v3 = 131071 Ops/Sec (Overall Response Time = 2.77 msec)


Performance

Throughput (ops/sec)   Response (msec)
13048                  1.4
26117                  1.7
39204                  2.0
52270                  2.3
65417                  2.6
78530                  3.0
91795                  3.4
104795                 3.9
118071                 4.5
131071                 5.7
Performance Graph
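Per the SPECsfs2008 reporting rules, the Overall Response Time is the area under the throughput/response curve divided by the peak throughput. A minimal sketch reproducing the published 2.77 msec figure from the table above, assuming the curve is anchored at the origin (an assumption that matches the reported value):

```python
# Throughput (ops/sec) and response time (msec) pairs from the table above.
points = [(13048, 1.4), (26117, 1.7), (39204, 2.0), (52270, 2.3),
          (65417, 2.6), (78530, 3.0), (91795, 3.4), (104795, 3.9),
          (118071, 4.5), (131071, 5.7)]

def overall_response_time(points):
    """Trapezoidal area under the curve, anchored at (0, 0),
    divided by the peak throughput."""
    curve = [(0, 0.0)] + points
    area = sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(curve, curve[1:]))
    return area / curve[-1][0]

print(round(overall_response_time(points), 2))  # -> 2.77
```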


Product and Test Information

Tested By Hitachi Data Systems
Product Name Hitachi Unified Storage File, Model 4040, Dual Node Cluster
Hardware Available February 2014
Software Available April 2014
Date Tested April 2014
SFS License Number 276
Licensee Locations Santa Clara, CA, USA

The Hitachi Unified Storage (HUS) and Hitachi NAS (HNAS) Platform families of products provide multiprotocol support to store and share block, file, and object data types. The HNAS 4000 series delivers best-in-class performance, scalability, clustering with automated failover, 99.999% availability, non-disruptive upgrades, smart primary deduplication, intelligent file tiering, automated migration, 256TB file system pools, and a single namespace up to the maximum usable capacity, and is integrated with the Hitachi Command suite of management and data protection software. The Hitachi NAS file module uses a hardware-accelerated "Hybrid Core" architecture that accelerates network and file protocol processing to achieve the industry's best performance in terms of both throughput and operations per second. The file module also uses an object-based file system (Silicon File System) and virtualization to deliver the highest scalability in the market, enabling organizations to consolidate file servers and other NAS devices into fewer nodes and storage arrays for simplified management, improved space efficiency, and lower energy consumption. A 4040 cluster can scale up to 4PB of usable data storage and supports 10GbE LAN access.

Configuration Bill of Materials

Item No Qty Type Vendor Model/Name Description
1 2 Server HDS 060-100302-01.P Hitachi NAS 4040 Base System
2 1 Server HDS SX345390.P System Management Unit 400 (SMU)
3 2 Software HDS 044-231440-03.P HNAS 4040 - Entry SW Bundle
4 8 FC Interface HDS FTLF8524P2BNV.P SFP 4Gbps FC
5 8 Network Interface HDS FTLX8512D3BCL.P 10GbE XFP
6 1 Storage HDS HUS-SOLUTION.S Hitachi Unified Storage System
7 2 Storage HDS DF-F850-CTLL.P HUS 150 Controller
8 1 Chassis HDS HDF850-CBL.P HUS 150 Base Controller Box
9 4 Cache HDS DF-F850-8GB.P HUS 150 8GB Cache Module
10 1 License HDS 044-230199-03.P HUS 150 Base Operating System M License
11 260 Drives HDS DF-F850-9HGSS.P HUS 900GB SAS 10K RPM HDD SFF for CBSS/DBS-Base
12 11 Drive Chassis HDS DF-F850-DBS.P HUS Drive Box - SFF 2U x 24
13 4 FC Interface HDS DF-F850-HF8G.P HUS 150 4x8Gbps FC Interface Adapter
14 1 Rack HDS A3BF-AMS-US.P AMS 19 in rack Americas MIN
15 2 Switch Brocade Brocade 6520 Brocade 6520 FC switch

Server Software

OS Name and Version 11.3.3450.10
Other Software None
Filesystem Software SiliconFS 11.3.3450.10

Server Tuning

Name Value Description
security-mode UNIX Security mode is native UNIX
cifs_auth off Disable CIFS security authorization
cache-bias small-files Set metadata cache bias to small files
fs-accessed-time off Disable file access time management
shortname off Disable short name generation for CIFS clients
read-ahead 0 Disable file read-ahead

Server Tuning Notes

None

Disks and Filesystems

Description Number of Disks Usable Size
900GB 10K RPM SAS disks 260 166.5 TB
160GB SATA Disks. These four drives (two mirrored drives per node) are used for storing the core operating system of the file module and management logs. No cache or data storage. 4 320.0 GB
Total 264 166.8 TB
Number of Filesystems 4
Total Exported Capacity 166.4 TB
Filesystem Type WFS-2
Filesystem Creation Options 4KB filesystem block size
Filesystem Config Each filesystem was striped across 13 x 4D+1P RAID-5 LUNs (65 HDDs).
Fileset Size 15234.2 GB

The storage configuration consisted of a Hitachi Unified Storage 150 (HUS 150) system, configured with dual symmetric active-active controllers and 32GB of cache memory. 260 900GB 10K RPM SAS drives were in use for these tests, from which 52 RAID-5 (4D+1P) LUNs were created. Eight 4Gbps FC ports were in use across the two controllers; these were connected to the 4040 nodes via a redundant pair of Brocade 6520 switches. The 4040 nodes were connected to each Brocade 6520 switch via four 4Gbps FC connections, such that a completely redundant path existed from each node to the storage. Each Hitachi NAS file module node has two internal mirrored hard disk drives that store the core operating software and system logs; these drives are not used for cache space or for storing data.
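The drive, LUN, and filesystem counts quoted above are internally consistent; a quick cross-check, as a sketch using only figures from this disclosure:

```python
# Figures from the disclosure above.
luns = 52                 # RAID-5 (4D+1P) LUNs on the HUS 150
drives_per_lun = 4 + 1    # 4 data + 1 parity drives per LUN
filesystems = 4
luns_per_fs = 13          # each filesystem striped across 13 LUNs

assert luns * drives_per_lun == 260        # matches the 260 SAS drives in use
assert filesystems * luns_per_fs == luns   # all 52 LUNs are consumed
assert luns_per_fs * drives_per_lun == 65  # 65 HDDs behind each filesystem
print("drive/LUN accounting checks out")
```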

Network Configuration

Item No Network Type Number of Ports Used Notes
1 10 Gigabit Ethernet 2 Integrated 10GbE Ethernet controller

Network Configuration Notes

One 10GbE network interface from each 4040 node was connected to a Hitachi Apresia 15000-64XL-PSR switch, which provided network connectivity to the clients. The interfaces were configured to use Jumbo frames.

Benchmark Network

Each LG has a dual-port Intel X520-SR2 PCIe 10GbE Server Adapter, but each LG connects via only a single 10GbE connection to the ports on the Hitachi Apresia 15000-64XL-PSR switch.

Processing Elements

Item No Qty Type Description Processing Function
1 4 FPGA Altera Stratix III EP3SE260 Storage Interface, Filesystem
2 4 FPGA Altera Stratix III EP3SL340 Network Interface, NFS, Filesystem
3 2 CPU Intel E8400 3.0GHz, Dual Core Management
4 2 CPU Intel Xeon LC3528 Dual-Core processor HUS 150 host I/O Management
5 2 ASIC Hitachi Custom ASIC HUS 150 I/O Data Engine

Processing Element Notes

Each 4040 node has 2 FPGAs of each type (4 in total) that are used for processing functions. Each HUS 150 storage system controller has an Intel Xeon LC3528 Dual-Core processor and Hitachi custom ASIC for I/O processing.

Memory

Description Size in GB Number of Instances Total GB Nonvolatile
Server Main Memory 8 2 16 V
Server Filesystem and Storage Cache 22 2 44 V
Server Battery backed NVRAM 2 2 4 NV
Cache Memory Module (HUS150) 8 4 32 NV
Grand Total Memory Gigabytes 96

Memory Notes

Each 4040 node has 8GB of main memory that is used by the operating system and in support of the FPGA functions. 22GB of memory is dedicated to filesystem metadata, the sector cache, and other purposes. A separate, integrated battery-backed NVRAM module (2GB) on the filesystem board provides stable storage for writes that have not yet been written to disk. The HUS 150 storage system was configured with 32GB of cache memory.
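The grand total in the memory table can be cross-checked from the per-node figures; a sketch using only the numbers reported above:

```python
# Per-node memory (GB) from the memory table above.
main_memory = 8            # server main memory per node
fs_and_storage_cache = 22  # filesystem and storage cache per node
nvram = 2                  # battery-backed NVRAM per node
nodes = 2
hus150_cache = 4 * 8       # four 8GB cache modules in the HUS 150

total = (main_memory + fs_and_storage_cache + nvram) * nodes + hus150_cache
print(total)  # -> 96, the grand total reported in the table
```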

Stable Storage

The Hitachi NAS file module writes first to the battery-backed (72-hour) NVRAM internal to the server. The data in NVRAM is then written to the back-end storage system at the earliest opportunity, and always within a few seconds of arrival in the NVRAM. In an active-active cluster configuration, the contents of the NVRAM are synchronously mirrored so that, in the event of a single-node failover, any pending transactions can be completed by the remaining node. The data from the node is then written to the battery-backed back-end storage system cache (a second layer of NVRAM in the overall solution), which is backed up onto the Cache Flash Memory modules (32GB on each controller, 64GB in total) in the event of a power outage. The Cache Flash Memory modules in the back-end storage system are part of the total solution but are used only during a power outage and never as cache space.

System Under Test Configuration Notes

The system under test consisted of two Hitachi NAS File Module 4040 nodes, connected to an HUS 150 storage system via two Brocade 6520 FC switches. The nodes were configured in an active-active cluster mode and were connected by a redundant pair of 10GbE connections to the cluster interconnect ports.

The HUS 150 storage system contained 260 900GB 10K RPM SAS drives. All connectivity from the servers to the storage was via two 4Gbps switched FC fabrics. For these tests, four zones were created on each FC switch. Each 4040 server was connected to each zone via one integrated 4Gbps FC port (corresponding to 2 H-ports). The HUS 150 storage system was connected to the four zones (corresponding to 8 FC ports), providing the I/O path from the servers to the storage. The System Management Unit (SMU) is part of the total system solution, but is used for management purposes only and was not active during the test.

Other System Notes

None

Test Environment Bill of Materials

Item No Qty Vendor Model/Name Description
1 4 Hitachi Compute Blade 2000 E55R3 RHEL 6.4 clients, Sixteen Physical Cores, 64GB Memory
2 1 Hitachi Apresia Hitachi Apresia 15000-64XL-PSR 10GbE Switch

Load Generators

LG Type Name LG1
BOM Item # 1
Processor Name Intel Xeon E5-2690
Processor Speed 2.9 GHz
Number of Processors (chips) 2
Number of Cores/Chip 8
Memory Size 64 GB
Operating System Red Hat Enterprise Linux 6.4, 2.6.32-358.el6 kernel
Network Type 1 x Intel X520-SR2 PCIe 10GbE Server Adapter

Load Generator (LG) Configuration

Benchmark Parameters

Network Attached Storage Type NFS V3
Number of Load Generators 4
Number of Processes per LG 176
Biod Max Read Setting 2
Biod Max Write Setting 2
Block Size 64

Testbed Configuration

LG No LG Type Network Target Filesystems Notes
15..18 LG1 1 /w/d0, /w/d1, /w/d2, /w/d3 None

Load Generator Configuration Notes

All the target filesystems from each node were accessed by all the clients.

Uniform Access Rule Compliance

All the filesystems from each node were mounted on all the clients. Each load generating client hosted 176 processes, accessing all the 4 target file systems (/w/d0, /w/d1, /w/d2, /w/d3).

Other Notes

None

Hitachi Unified Storage, Hitachi Unified Storage VM, Hitachi NAS Platform and Virtual Storage Platform are registered trademarks of Hitachi Data Systems, Inc. in the United States, other countries, or both. All other trademarks belong to their respective owners and should be treated as such.

Config Diagrams


Generated on Wed May 28 12:12:30 2014 by SPECsfs2008 HTML Formatter
Copyright © 1997-2008 Standard Performance Evaluation Corporation

First published at SPEC.org on 27-May-2014