SPEC SFS®2014_vdi Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

Tested by: SPEC SFS(R) Subcommittee
System: SPEC SFS(R) 2014 ITM-33 Reference Solution
SPEC SFS2014_vdi = 20 Desktops
Overall Response Time = 2.28 msec


Performance

Business Metric (Desktops) | Average Latency (msec) | Desktops Ops/Sec | Desktops MB/Sec
 2 | 1.050 |  400 |  5
 4 | 1.110 |  800 | 11
 6 | 1.350 | 1200 | 17
 8 | 1.850 | 1600 | 23
10 | 2.140 | 2000 | 29
12 | 2.440 | 2400 | 35
14 | 2.700 | 2800 | 41
16 | 2.820 | 3200 | 46
18 | 3.280 | 3600 | 52
20 | 4.570 | 3999 | 58

[Performance Graph: average latency (msec) vs. business metric (Desktops), plotting the data above]
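As a quick consistency check on the table above, the achieved ops/sec scales linearly with the desktop count; the implied per-desktop rate of roughly 200 ops/sec is inferred from this data, not quoted from the benchmark specification. A minimal sketch in Python:

```python
# (desktops, ops_per_sec) pairs copied from the performance table above.
rows = [(2, 400), (4, 800), (6, 1200), (8, 1600), (10, 2000),
        (12, 2400), (14, 2800), (16, 3200), (18, 3600), (20, 3999)]

# Per-desktop op rate for each load point; should be roughly constant
# (~200 ops/sec per desktop) if scaling is linear.
rates = [ops / desktops for desktops, ops in rows]
assert all(abs(r - 200) < 1 for r in rates)  # 199.95 .. 200.0
```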


Product and Test Information

SPEC SFS(R) 2014 ITM-33 Reference Solution
Tested by: SPEC SFS(R) Subcommittee
Hardware Available: September 2014
Software Available: September 2014
Date Tested: September 2014
License Number: 0
Licensee Locations: Gainesville, VA USA

The SPEC SFS(R) 2014 Reference Solution consists of a TDV 2.0 micro-cluster appliance, based on Intel Avoton SoC nodes, connected to an ITM-33 storage cluster using the NFSv3 protocol over an Ethernet network.

The ITM-33 cluster, built on a proven scale-out storage platform, provides IO/s from a single file system and single volume. The ITM-33 accelerates business and increases speed-to-market by providing scalable, high-performance storage for mission-critical and highly transactional applications. In addition, the single file system, single volume, and linear scalability of the BSD-based operating system enable enterprises to scale storage seamlessly with their environment and applications while maintaining flat operational expenses. The ITM-33 is based on enterprise-class 2.5" 10,000 RPM Serial Attached SCSI drive technology, 1GbE Ethernet networking, and a high-performance Infiniband back-end. The ITM-33 is configured with 4 nodes in a single file system, single volume.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 4 | Storage Cluster Node | Generic | ITM-33 | 4 ITM-33-2U-Dual-24GB-4x1GE-7373GB nodes configured into a cluster.
2 | 1 | Micro-cluster Appliance | Generic | TDV 2.0 | An appliance with 32 nodes and an internal network switch.
3 | 1 | Infiniband switch | Generic | 12200-18 | An 18 port QDR Infiniband switch.
4 | 1 | Ethernet switch | Generic | DCS-7150S-64-CL | A 10 GbE capable network switch.

Configuration Diagrams

  1. Diagram 1

Component Software

Item No | Component | Type | Name and Version | Description
1 | ITM-33 Storage Nodes | Operating System | cluster FS Version 7.0.2.1 | The ITM-33 nodes were running cluster FS 7.0.2.1.
2 | Filesystem Software License | cluster FS | cluster FS Version 7.0.2.1 | cluster FS 7.0.2.1 License
3 | Load Generators | Operating System | CentOS 6.4 64-bit | The 32 load generator nodes in the TDV 2.0 appliance ran CentOS 6.4 64-bit (Linux kernel 2.6.32-358.el6.x86_64).

Hardware Configuration and Tuning - Physical

Load generator internal cluster switch
Parameter Name | Value | Description
Port speed | 2.5 Gb | internal switch port speed

Hardware Configuration and Tuning Notes

The port speed on the internal switch for the load generators was set to 2.5Gb.

Software Configuration and Tuning - Physical

None
Parameter Name | Value | Description
n/a | n/a | n/a

Software Configuration and Tuning Notes

No software tuning was used; default NFS mount options were used.

Service SLA Notes

No opaque services were in use.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | ITM-33 Storage Nodes: 600GB SAS 10k RPM Disk Drives | 16+2/2 Parity protected (default) | Yes | 96
2 | Load Generator: 80GB SATA SSD | None | No | 32

Number of Filesystems: 1 (IFS->NFSv3)
Total Capacity: 26 TiB
Filesystem Type: IFS->NFSv3

Filesystem Creation Notes

The file system was created on the ITM-33 cluster by using all default parameters.

The file system on the load generating clients was created at OS install time using default parameters with an ext4 file system.

Storage and Filesystem Notes

The storage in the load generating clients did not contribute to the performance of the solution under test; all benchmark data was stored on the IFS file system and accessed via NFSv3.

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 1 GbE, ITM-33 Storage Nodes | 4 | ITM-33 front end Ethernet ports, configured with MTU=1500
2 | QDR Infiniband, ITM-33 Storage Nodes | 4 | ITM-33 back end Infiniband ports, running at QDR
3 | 10 GbE, Load generator benchmark network | 1 | Load generator benchmark network Ethernet ports, configured with MTU=1500

Transport Configuration Notes

All NFSv3 benchmark traffic flowed through the DCS-7150S-64-CL Ethernet switch.

All load generator clients were connected to an internal switch in the micro-cluster appliance via 2.5GbE. This internal switch was connected to the 10 GbE switch.

The Infiniband network is part of the ITM-33 storage node architecture and is not configured directly by the user. It carries inter-node cluster traffic. Since it is not configurable by the user, it is not included in the switch list.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | DCS-7150S-64-CL Ethernet Switch | 10 GbE Ethernet load generator to storage interconnect | 52 | 47 | No uplinks used

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 32 | CPU | load generators | Intel Atom C2550 2.4GHz Quad-core CPU | load generator network, NFS client
2 | 8 | CPU | ITM-33 storage nodes | Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.4GHz Quad-core CPU | ITM-33 Storage node networking, NFS, file system, device drivers

Processing Element Notes

Each ITM-33 storage node has 2 physical processors with 4 cores and SMT enabled.

Each load generator had a single physical processor.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
ITM-33 Storage Node System Memory | 24 | 4 | V | 96
ITM-33 Storage Node Integrated NVRAM module with Vault-to-Flash | 0.5 | 4 | NV | 2
Load generator memory | 8 | 32 | V | 256
Grand Total Memory Gibibytes: 354

Memory Notes

Each storage controller has main memory that is used for the operating system and for caching filesystem data. A separate, integrated battery-backed RAM module is used to provide stable storage for writes that have not yet been written to disk.
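The memory table's totals can be recomputed from its per-instance figures; a minimal sketch in Python (all sizes and instance counts copied from the table above):

```python
# (size_gib, instances) rows copied from the Memory - Physical table.
rows = {
    "ITM-33 node system memory": (24, 4),    # volatile
    "ITM-33 node NVRAM module":  (0.5, 4),   # nonvolatile
    "Load generator memory":     (8, 32),    # volatile
}

# Per-row totals and the grand total, matching the table's last column.
totals = {name: size * n for name, (size, n) in rows.items()}
grand_total = sum(totals.values())
assert totals["ITM-33 node system memory"] == 96
assert totals["ITM-33 node NVRAM module"] == 2.0
assert totals["Load generator memory"] == 256
assert grand_total == 354
```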

Stable Storage

Each ITM-33 storage cluster node is equipped with an nvram journal that stores writes to the local disks. The nvram has backup power to save data to dedicated on-card flash in the event of power-loss.

The load generating clients, and all components between them and the storage cluster, do not provide stable storage and are not configured as such; the protocol's stable storage requirements are therefore met by the ITM-33 storage cluster, in compliance with the protocol.

Solution Under Test Configuration Notes

The system under test consisted of 4 ITM-33 storage nodes, 2U each, connected by QDR Infiniband. Each storage node was configured with a single 10GbE network interface connected to a 10GbE switch. There were 32 load generating clients, each connected to the same Ethernet switch as the ITM-33 storage nodes.

Other Solution Notes

None

Dataflow

Each load generating client mounted the single file system using NFSv3. Because there were four ITM-33 nodes, the first eight clients mounted the file system from the first ITM-33 node, the next eight clients mounted the file system from the second ITM-33 node, and so on. The order of the clients as used by the benchmark was round-robin distributed such that as the load scaled up, each additional process used the next node in the ITM-33 cluster. This ensured an even distribution of load over the network and among the ITM-33 nodes.
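The mount distribution and benchmark client ordering described above can be sketched as follows; this is an illustrative model only, with the counts (32 clients, 4 nodes, 8 clients per node) taken from the text:

```python
from collections import Counter

NODES = 4
CLIENTS = 32
PER_NODE = CLIENTS // NODES  # 8 clients mount from each ITM-33 node

# Block assignment: clients 0-7 mount node 0, clients 8-15 node 1, etc.
mount_node = [c // PER_NODE for c in range(CLIENTS)]

# Benchmark client order: interleave the blocks so that each additional
# process lands on the next node in the cluster (round-robin).
order = [b * PER_NODE + i for i in range(PER_NODE) for b in range(NODES)]

# The first NODES processes in benchmark order cover all four nodes ...
assert [mount_node[c] for c in order[:NODES]] == [0, 1, 2, 3]
# ... and overall each node serves exactly PER_NODE clients.
assert Counter(mount_node) == {0: 8, 1: 8, 2: 8, 3: 8}
```

This reproduces the property the text describes: as load scales up, successive processes are spread evenly over the network and among the ITM-33 nodes.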

Other Notes

None

Other Report Notes

None


Generated on Wed Mar 13 16:54:23 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation