|101183 Ops/Sec (Overall Response Time = 1.66 msec)
|SFS License Number
The NetApp(R) FAS3270 is the highest-performing mid-range platform, with enhanced scalability and flexibility options, in the new FAS3200 family of storage systems built on NetApp's unified storage architecture. FAS3200 series performance is driven by a 64-bit architecture that uses high-throughput, low-latency links and PCI Express for all internal and external data transfers. With the FAS3200 series and Data ONTAP 8.0.1 you can efficiently consolidate SAN, NAS, primary, and secondary storage on a single platform. Data ONTAP 8.0.1 is designed to provide customers with the next generation of features and functionality so they can meet the demands of growing workloads. NetApp systems are designed to be easy to install, configure, manage, and upgrade, so you can quickly adapt your storage infrastructure to changing business needs. You can minimize the use of data center resources, including power, cooling, and floor space, by taking advantage of a comprehensive set of storage-saving software features in Data ONTAP such as deduplication and thin provisioning (FlexVol).
|SAS 4 port Host Adapter
|SAS Host Adapter X2065A-R6
|4 port SAS Host adapter X2065A-R6
|Disk Drives w/Shelf
|10 Gigabit Ethernet Adapter
|QLogic Dual 10G Ethernet Controller X1139A-R6
|NFS Software License SW-3270-NFS
|NFS Software License SW-3270-NFS
|OS Name and Version
|Data ONTAP 8.0.1
|Data ONTAP 8.0.1
|vol options 'volume' no_atime_update
|Disable atime updates (applied to all volumes)
|Number of Disks
|450GB SAS 15K RPM Disk Drives
|Number of Filesystems
|Total Exported Capacity
|Filesystem Creation Options
|64-bit aggregate option was selected during creation of the aggregates that housed the SFS filesystems on each controller.
|Each filesystem was striped across 176 disks
The storage configuration consisted of 15 shelves, each with 24 disks, arranged as 5 stacks of 3 shelves. Each of the 5 stacks was connected to 2 SAS ports on each storage controller (8 ports from SAS cards and 2 built-in SAS ports on each head). Each storage controller was the primary owner of 176 disks, with the 176 disks in those shelves placed into a single 64-bit aggregate. Each aggregate was composed of 11 RAID-DP groups; each RAID-DP group was composed of 14 data disks and 2 parity disks. Within each aggregate, a flexible volume (using Data ONTAP FlexVol (TM) technology) was created to hold the SFS filesystem for that controller. Each volume was striped across all disks in the aggregate where it resided. Each controller was the owner of a single volume, but the disks in each aggregate were dual-attached so that, in the event of a fault, they could be managed by the other controller via an alternate loop. A separate flexible volume, residing in a three-disk root aggregate on each controller, was created to hold the Data ONTAP operating system and system files. The remaining disks owned by each controller were reserved as spares.
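The disk accounting above can be checked with a short sketch. The shelf, RAID group, and root-aggregate figures are taken from this disclosure; the spare count is derived arithmetic, not stated explicitly in the report.

```python
# Disk accounting for the system under test (figures from the disclosure;
# the spare count is derived, not stated in the report).
SHELVES = 15
DISKS_PER_SHELF = 24
CONTROLLERS = 2

DATA_PER_GROUP = 14        # RAID-DP data disks per group
PARITY_PER_GROUP = 2       # RAID-DP parity disks per group
GROUPS_PER_AGGREGATE = 11
ROOT_AGGR_DISKS = 3        # three-disk root aggregate per controller

total_disks = SHELVES * DISKS_PER_SHELF
aggr_disks = GROUPS_PER_AGGREGATE * (DATA_PER_GROUP + PARITY_PER_GROUP)
spares = total_disks - CONTROLLERS * (aggr_disks + ROOT_AGGR_DISKS)

print(total_disks)  # 360 disks in the system under test
print(aggr_disks)   # 176 disks per 64-bit aggregate, matching the report
print(spares)       # 2 disks left over as spares (1 per controller)
```

The 176-disk aggregate size reported per controller falls out directly from 11 RAID-DP groups of 16 disks each.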
|Number of Ports Used
|Jumbo Frame 10 Gigabit Ethernet
|Dual-port 10 Gigabit Ethernet
A single port of the dual-port 10 Gigabit Ethernet network adapter was configured on each storage controller. The active interfaces were configured to use jumbo frames (MTU size of 9000 bytes). All network interfaces were connected to a Cisco 5020 switch, which provided connectivity to the clients.
An MTU size of 9000 was set for all connections to the switch. Each load generator was connected to the network via a single 10 GigE port.
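As a back-of-the-envelope illustration of why jumbo frames help at 10 GigE (not part of the disclosure; the 40-byte IPv4+TCP header overhead is an assumed simplification):

```python
# Ethernet frames needed to move 1 MiB of payload at two MTU sizes.
# A flat 40 bytes of IPv4 + TCP headers per packet is assumed here;
# this is an illustrative simplification, not a measured figure.
import math

PAYLOAD = 1 << 20          # 1 MiB to transfer
IP_TCP_OVERHEAD = 40       # header bytes inside each IP packet

def frames_needed(mtu: int) -> int:
    per_frame = mtu - IP_TCP_OVERHEAD   # payload bytes carried per frame
    return math.ceil(PAYLOAD / per_frame)

print(frames_needed(1500))  # 719 frames at the standard MTU
print(frames_needed(9000))  # 118 frames with jumbo frames
```

Roughly a sixfold reduction in per-frame processing for the same payload, which is why jumbo frames were enabled end to end in this configuration.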
|3.0GHz Intel Xeon(tm) Processor E5240
|Networking, NFS protocol, WAFL filesystem, RAID/Storage drivers
Each storage controller has two physical processors, each with two cores.
|Size in GB
|Number of Instances
|Storage controller mainboard memory (18560MB)
|Non-volatile memory NVMEM (1920MB)
|Grand Total Memory Gigabytes
Each storage controller has main memory that is used for the operating system and for caching filesystem data. A separate, battery-backed RAM (NVMEM) is used to provide stable storage for writes that have not yet been written to disk.
The WAFL filesystem logs writes and other filesystem-modifying transactions to the NVMEM. In an active-active configuration, as in the system under test, such transactions are also logged to the NVMEM on the partner storage controller so that, in the event of a storage controller failure, any transactions on the failed controller can be completed by the partner controller. Filesystem-modifying NFS operations are not acknowledged until the storage system has confirmed that the related data are stored in the NVMEM of both storage controllers (when both controllers are active). The battery-backed NVMEM ensures that any uncommitted transactions are preserved for at least 72 hours.
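The mirrored-logging behavior described above can be sketched as a conceptual model (hypothetical names throughout; this is an illustration of the acknowledgement rule, not NetApp code):

```python
# Conceptual model of mirrored NVMEM logging in an active-active pair.
# A filesystem-modifying NFS op is acknowledged only after the transaction
# is logged in the local NVMEM *and* in the partner controller's NVMEM.
class Controller:
    def __init__(self, name):
        self.name = name
        self.nvmem = []      # battery-backed transaction log
        self.partner = None

    def log_write(self, txn) -> bool:
        """Acknowledge the NFS op only once the transaction is stable
        on both controllers."""
        self.nvmem.append(txn)
        if self.partner is not None:
            self.partner.nvmem.append(txn)   # mirror to partner NVMEM
        return True

    def take_over(self):
        """On partner failure, replay the partner's mirrored transactions."""
        return list(self.nvmem)

a, b = Controller("A"), Controller("B")
a.partner, b.partner = b, a
acked = a.log_write("setattr inode 42")
print(acked)                           # True: stable on both controllers
print("setattr inode 42" in b.nvmem)   # True: B can complete it if A fails
```

The mirror step is what makes failover transparent: every transaction the failed controller had acknowledged is already present in the survivor's NVMEM.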
The system under test consisted of two FAS3270 storage controllers and 15 storage shelves, each with 24 450GB SAS disk drives. The two controllers ran Data ONTAP 8.0.1 software operating in 7G mode. They were configured as an active-active cluster failover pair, using the high-availability cluster software option in conjunction with iWARP over the built-in 10 GigE cluster interconnect. One dual-port 10 Gigabit Ethernet host bus adapter was present in a PCIe expansion slot on each storage controller. The storage shelves were connected to 2 of the 10 SAS ports on each of the storage controllers. The system under test was connected to a 10 Gigabit Ethernet switch via 2 network ports.
All standard data protection features, including background RAID and media error scrubbing, software validated RAID checksumming, and double disk failure protection via double parity RAID (RAID-DP) were enabled during the test.
|IBM 3650M2 with 16GB RAM and Linux operating system
|Cisco Catalyst 5020 Ethernet Switch
|LG Type Name
|BOM Item #
|Intel Xeon X5570
|Number of Processors (chips)
|Number of Cores/Chip
|RHEL5.3 Kernel 2.6.18-128.el5
|10 Gigabit Ethernet
|Network Attached Storage Type
|Number of Load Generators
|Number of Processes per LG
|Biod Max Read Setting
|Biod Max Write Setting
All filesystems were mounted on all clients, which were connected to the same physical and logical network.
Each load-generating client hosted 44 processes. Processes were assigned to filesystems and network interfaces so that they were evenly divided across all filesystems and all network paths to the storage controllers. The filesystem data was striped evenly across all disks and SAS adapters on the storage backend.
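The even assignment of client processes to filesystems can be expressed as a simple round-robin. The 44 processes per load generator comes from the disclosure; the client count and filesystem names below are illustrative assumptions (one FlexVol per controller is stated in the configuration notes).

```python
# Round-robin assignment of load-generator processes to filesystems.
# PROCS_PER_LG comes from the disclosure; LOAD_GENERATORS and the
# filesystem names are assumed for illustration.
from collections import Counter

PROCS_PER_LG = 44
FILESYSTEMS = ["/fs1", "/fs2"]   # one FlexVol per controller
LOAD_GENERATORS = 12             # assumed client count

assignments = Counter()
for lg in range(LOAD_GENERATORS):
    for p in range(PROCS_PER_LG):
        fs = FILESYSTEMS[(lg * PROCS_PER_LG + p) % len(FILESYSTEMS)]
        assignments[fs] += 1

print(assignments)  # each filesystem receives an equal share of processes
```

Because 44 is a multiple of the filesystem count, every client splits its processes evenly, so load is balanced across both controllers and all network paths.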
Other test notes: None.
NetApp is a registered trademark and "Data ONTAP", "FlexVol", and "WAFL" are trademarks of NetApp, Inc. in the United States and other countries. All other trademarks belong to their respective owners and should be treated as such.
Generated on Mon Feb 28 16:16:31 2011 by SPECsfs2008 HTML Formatter
Copyright © 1997-2008 Standard Performance Evaluation Corporation
First published at SPEC.org on 02-Nov-2010