SPECsfs2008_nfs.v3 Result
================================================================================
EMC Corporation : EMC VNX8000 Unified Storage System, 8 X-Blades (including 1 stdby)
SPECsfs2008_nfs.v3 = 580796 Ops/Sec (Overall Response Time = 0.78 msec)
================================================================================

Performance
===========
   Throughput    Response
   (ops/sec)     (msec)
   ----------    --------
       60042      0.3
      120192      0.4
      180570      0.4
      240788      0.5
      301347      0.6
      362369      0.7
      422882      0.9
      483790      1.3
      549007      1.9
      580796      3.2

================================================================================

Product and Test Information
============================
Tested By             EMC Corporation
Product Name          EMC VNX8000 Unified Storage System, 8 X-Blades
                      (including 1 stdby)
Hardware Available    August 2013
Software Available    August 2013
Date Tested           August 2013
SFS License Number    47
Licensee Locations    Hopkinton, MA

The EMC VNX8000 unified storage system with MCx multi-core optimization
delivers the highest levels of performance, capacity efficiency, data
protection, and ease of use for file- and block-based applications. The
configuration tested here consists of an EMC VNX8000 Unified Storage Array
with seven active X-Blades and one standby X-Blade.

Configuration Bill of Materials
===============================
Item
No    Qty   Type             Vendor   Model/Name      Description
----  ----  ---------------  -------  --------------  -----------------------------------
1     2     Rack             EMC      VNXBRACK-40U    VNXB 40U RACK WITH FRONT PANEL
2     1     Enclosure        EMC      VNXB80SPE       VNX8000 SPE-EMC RACK
3     1     Disk             EMC      V-V4-230015     VNX 300GB 15K VAULT 25x2.5 DPE/DAE
4     1     Disk             EMC      V4-2S15-300     VNX 300GB 15K SAS 25x2.5 DPE/DAE
5     544   Disk             EMC      V4-2S6F-200     VNX 200GB SSD 25x2.5 DPE/DAE
6     4     Enclosure        EMC      VNXB6GSDAE25P   VNXB 25x2.5 6G SAS PRI DAE-EMC RACK
7     18    Enclosure        EMC      VNXB6GSDAE25    VNXB 25x2.5 6G SAS EXP DAE-EMC RACK
8     2     SLIC             EMC      VSPBEXPSASBEA   VNXB SAS BE EXP 4 BUSES
9     6     SLIC             EMC      VSPBM8GFFEA     VNXB 4 PORT 8G FC IO MODULE PAIR
10    1     Enclosure        EMC      VNXB80DME       VNX8000 DME: 2 DM+FC SLIC-EMC RACK
11    3     Enclosure        EMC      VNXB80DMEX      EMPTY VNX80 DM ENCLOSURE-EMC RACK
12    6     X-Blade          EMC      VNXB80DM        VNX8000 ADD ON DM+FC SLIC-EMC RACK
13    1     Control Station  EMC      VNXBCS          VNXB CONTROL STATION-EMC RACK
14    8     SLIC             EMC      VDMBMXG2OPA     VNXB 10GBE 2 OP MODULE (2 SFP+)
15    1     Software         EMC      VNXOE-8000      VNX8000 Operating Environment
16    1     Software         EMC      UNISU-VNX8000   VNX8000 Unisphere Unified Suite
17    1     Software         EMC      EXPSAS-V80      Unisphere VNX8000 SAS EXP Software
18    1     Switch           Cisco    Nexus 5596      Cisco Nexus 10GbE IP Switch

Server Software
===============
OS Name and Version   VNX File OE 8.1.0.15
Other Software        EMC VNX Control Station Linux 2.6.18-348.1.1.8000.EMC
Filesystem Software   VNX UxFS File System

Server Tuning
=============
Name                Value     Description
----                -----     -----------
ufs updateAccTime   0         Disable access time updates
file dnlcNents      4608000   Directory name lookup cache size

Server Tuning Notes
-------------------
None

Disks and Filesystems
=====================
Description                     Number of Disks   Usable Size
-----------                     ---------------   -----------
2.5" 200GB 6Gbps SAS flash      544               75.2 TB
2.5" 300GB SAS 15K RPM drive    5                 105.0 GB
Total                           549               75.3 TB

Number of Filesystems         21
Total Exported Capacity       75842 GiB
Filesystem Type               UxFS
Filesystem Creation Options   "server_mount server_x -o noprefetch" - disable
                              prefetching for the mounted file system
Filesystem Config             File systems fs0 through fs20 are each striped
                              across 20 LUNs with a 256 KB element size. Three
                              file systems were mounted and exported from each
                              X-Blade.
Fileset Size                  70318.9 GB

525 of the flash drives are used for data and 19 as hot spares. The flash data
drives were configured as 5-drive RAID groups, for a total of 105 RAID groups.
Each RAID group hosted 4 4+1 RAID-5 LUNs, for a total of 420 LUNs: 210 LUNs
owned by SP A and 210 owned by SP B. The file systems were created in a manner
that used almost the entire usable capacity (after RAID implementation) of the
flash data drives. After completion of the benchmark, each of the file systems
was 94% full. This number of flash drives was required in order to meet the
benchmark's capacity requirements.

The 5 SAS drives were used for the VNX vault and the control volumes for all
X-Blades: 4 SAS drives for the vault and 1 SAS drive as a hot spare. The
105.0 GB usable size reported for the SAS drives is the amount of space on the
drives used by the X-Blades' control LUNs. The remainder of the space on these
drives is used by the vault and is hidden from the X-Blades and the end users,
so no user space was available on the SAS drives.

Each client mounted 3 file systems per X-Blade, through each network interface
of the VNX, for a total of 21 file systems per client.
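The drive and LUN layout described above can be cross-checked with a short
calculation. The following Python sketch is illustrative only; it is not part
of the benchmark tooling and simply recomputes the counts stated in this
section from the disclosed inputs.

    # Illustrative consistency check of the drive/LUN layout described above.
    # All inputs are taken from the disclosure; nothing here is measured.

    flash_drives        = 544    # total 200 GB flash drives installed
    hot_spares          = 19     # flash drives reserved as hot spares
    drives_per_rg       = 5      # 4+1 RAID-5 groups
    luns_per_rg         = 4      # each RAID group hosts 4 LUNs
    filesystems         = 21     # fs0 .. fs20
    luns_per_filesystem = 20     # each file system is striped across 20 LUNs

    data_drives = flash_drives - hot_spares       # 525 data drives
    raid_groups = data_drives // drives_per_rg    # 105 RAID groups
    luns        = raid_groups * luns_per_rg       # 420 LUNs
    luns_per_sp = luns // 2                       # 210 LUNs per storage processor

    assert data_drives == 525
    assert raid_groups == 105
    assert luns == 420 == filesystems * luns_per_filesystem
    assert luns_per_sp == 210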
Network Configuration
=====================
Item   Network       Number of
No     Type          Ports Used   Notes
----   -----------   ----------   -----
1      Jumbo 10GbE   16           There are 2 10GbE network interfaces in use
                                  per active X-Blade, both ports on the single
                                  10GbE SLIC installed in the X-Blade.

Network Configuration Notes
---------------------------
All 10GbE network interfaces were connected to a Cisco Nexus 5596 switch.

Benchmark Network
=================
An MTU size of 9000 was set for all connections to the switch. Each X-Blade
was connected to the network via 2 ports. The X-Blade network ports were used
independently - they were not bonded. The LG1 class workload machines were
connected with one port each.

Processing Elements
===================
Item                                                           Processing
No    Qty   Type   Description                                 Function
----  ----  -----  ------------------------------------------  ------------------
1     7     CPU    Single-socket Intel six-core Westmere       VNX8000 X-Blades
                   (Xeon X5660) 2.8 GHz with QPI speed of      (NFS protocol,
                   6.4 GT/s for each X-Blade server. 1 chip    UxFS file system)
                   active for the workload. (The processor in
                   the standby X-Blade is not included in the
                   quantity.)
2     4     CPU    Dual-socket Intel eight-core Sandy Bridge   VNX8000 Storage
                   (Xeon E5-2680) 2.7 GHz with QPI speed of    Processors (SCSI,
                   8.0 GT/s for each VNX8000 SP.               FC, SAS, RAID)

Processing Element Notes
------------------------
Each X-Blade has one physical processor. There are 7 active X-Blades, for a
total of 7 physical processors. Each SP has two physical processors with 8
cores each. The control station listed in the BOM contains a processor that is
not counted in the list of processors here. The control station is for
management only; no function of the control station is in the workload's data
path.

Memory
======
                                                    Size    Number of   Total   Non-
Description                                         in GB   Instances   GB      volatile
-----------                                         -----   ---------   -----   --------
Each X-Blade main memory. (24 GB in the standby     24      7           168     V
X-Blade is not included in the quantity.)
EMC VNX8000 storage array battery-backed memory.    128     2           256     NV
128 GB per EMC VNX8000 SP. The EMC VNX8000 SP
memory is used for the VNX OE as well as for
caching operations.

Grand Total Memory Gigabytes                                            424

Memory Notes
------------
There is sufficient battery power to safely destage all cached data to the EMC
VNX8000 vault drives in the event of a power failure. The SPs are shut down in
an orderly fashion when destaging is complete.
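As a simple arithmetic cross-check of the processing element and memory tables
above, the sketch below tallies the disclosed figures. It is illustrative only
and not part of the disclosure.

    # Illustrative tally of the processor and memory figures disclosed above.

    xblade_chips, xblade_cores_per_chip, xblade_mem_gb = 7, 6, 24   # active X-Blades
    sp_chips, sp_cores_per_chip = 4, 8                              # 2 SPs, 2 sockets each
    sp_count, sp_mem_gb = 2, 128

    total_xblade_cores = xblade_chips * xblade_cores_per_chip       # 42 active X-Blade cores
    total_sp_cores     = sp_chips * sp_cores_per_chip               # 32 SP cores

    total_memory_gb = xblade_chips * xblade_mem_gb + sp_count * sp_mem_gb
    assert total_memory_gb == 168 + 256 == 424                      # matches the grand total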
Stable Storage
==============
21 NFS file systems were used. Each file system was striped over 20 LUNs. Each
VNX8000 X-Blade had 4 Fibre Channel connections directly to the VNX8000
backend array. In this configuration, NFS stable write and commit operations
are not acknowledged until the EMC VNX8000 storage array has acknowledged that
the related data has been stored in stable storage (i.e. battery-backed memory
or disk).

System Under Test Configuration Notes
=====================================
The system under test consisted of 7 active VNX8000 X-Blades, each directly
attached by 4 FC links to the VNX8000 storage array. Of these 4 FC links per
X-Blade, 2 were connected to SP A and 2 were connected to SP B. The two FC
links from each X-Blade to each SP were connected to different FC SLICs on the
SP. Each SP had 4 FC SLICs for X-Blade connectivity; the other FC SLICs are
reserved for block-only/non-NAS access. The X-Blades were running VNX File OE
8.1.0.15. 2 10GbE Ethernet ports per X-Blade were connected to the network.

Other System Notes
==================
Failover is supported by an additional VNX X-Blade that operates in standby
mode. In the event of any of the 7 active X-Blades failing, the standby unit
takes over the function of the failed unit. The standby X-Blade does not
contribute to the performance of the system and is not included in the active
components listed above.

Test Environment Bill of Materials
==================================
Item No   Qty   Vendor   Model/Name      Description
-------   ---   ------   -------------   -----------------------------------
1         18    Cisco    Cisco C240 M3   Cisco server with 128 GB RAM and the
                                         Linux operating system

Load Generators
===============
LG Type Name                   LG1
BOM Item #                     1
Processor Name                 Intel(R) Xeon(TM) E5-2620
Processor Speed                2.0 GHz
Number of Processors (chips)   2
Number of Cores/Chip           6
Memory Size                    128 GB
Operating System               CentOS 6.3 Linux 2.6.32-279.el6.x86_64
Network Type                   1 x Intel X520 10GbE

Load Generator (LG) Configuration
=================================

Benchmark Parameters
--------------------
Network Attached Storage Type   NFS V3
Number of Load Generators       18
Number of Processes per LG      105
Biod Max Read Setting           8
Biod Max Write Setting          8
Block Size                      AUTO

Testbed Configuration
---------------------
LG No   LG Type   Network   Target Filesystems   Notes
-----   -------   -------   ------------------   -----
1..18   LG1       1         /fs0, ..., /fs20     N/A

Load Generator Configuration Notes
----------------------------------
All file systems were mounted on all clients, which were connected to the same
physical and logical network.

Uniform Access Rule Compliance
==============================
Each client has all file systems mounted from each active X-Blade.
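To illustrate the uniform access described above, the sketch below builds the
per-client list of the 21 mount points named in the Testbed Configuration
table and divides the 105 processes per load generator evenly across them. The
even 5-processes-per-filesystem split is an assumption made for illustration;
it is not stated in the disclosure.

    # Illustrative sketch: per-client mount points and process distribution.
    # Mount point names follow the Testbed Configuration table (/fs0 .. /fs20).
    # The even per-filesystem split is an assumption, not a disclosed figure.

    num_filesystems  = 21
    procs_per_client = 105
    num_clients      = 18

    mount_points = [f"/fs{i}" for i in range(num_filesystems)]       # /fs0 .. /fs20

    procs_per_fs_per_client = procs_per_client // num_filesystems    # 5
    total_procs             = procs_per_client * num_clients         # 1890
    procs_per_fs            = total_procs // num_filesystems         # 90

    assert procs_per_client % num_filesystems == 0
    print(len(mount_points), procs_per_fs_per_client, procs_per_fs)  # 21 5 90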
Other Notes
===========
Failover is supported by an additional VNX X-Blade that operates in standby
mode. In the event of an X-Blade failure, the standby unit takes over the
function of the failed unit. The standby X-Blade does not contribute to the
performance of the system and is not included in the components listed above.

The EMC VNX8000 was configured with 128 GB of memory per SP. This memory is
backed by sufficient battery power to safely destage all cached data onto the
vault drives, shutting down the SPs in an orderly manner, in the event of a
power failure.

================================================================================
Generated on Tue Sep 03 18:31:10 2013 by SPECsfs2008 ASCII Formatter
Copyright (C) 1997-2008 Standard Performance Evaluation Corporation