SPEC SFS(R)2014_vda Result                            

 Quantum Corporation : Quantum StorNext 7.0.1 with F-Series Storage Nodes      
    SPEC SFS2014_vda = 7450 Streams (Overall Response Time = 0.90 msec)        

===============================================================================

                                  Performance                                  
                                  ===========                                  

  Business     Average                             
   Metric      Latency      Streams      Streams   
 (Streams)      (msec)      Ops/Sec       MB/Sec   
------------ ------------ ------------ ------------
    745          0.8          7455         3440    
    1490         0.8         14910         6870    
    2235         0.8         22365        10320    
    2980         0.8         29819        13758    
    3725         0.8         37275        17192    
    4470         0.8         44730        20643    
    5215         0.8         52185        24077    
    5960         0.9         59640        27539    
    6705         1.1         67095        30947    
    7450         1.7         74550        34391    



===============================================================================

                         Product and Test Information                          
                         ============================                          

+---------------------------------------------------------------+
|      Quantum StorNext 7.0.1 with F-Series Storage Nodes       |
+---------------------------------------------------------------+
           Tested by  Quantum Corporation                     
  Hardware Available  January 2021                            
  Software Available  January 2021                            
         Date Tested  January 2021                            
      License Number  4761                                    
  Licensee Locations  Mendota Heights, Minnesota              

StorNext File System (SNFS) is a software platform designed to manage massive 
amounts of data throughout its lifecycle, delivering the required balance of 
high performance, data protection and preservation, scalability, and cost. 
StorNext was designed specifically for large unstructured data sets, including 
video workloads, where low-latency, predictable performance is required. SNFS 
is a scale-out parallel file system that is POSIX compliant and supports 
hundreds of petabytes and billions of files in a single namespace. Clients 
connect to the front end using NAS protocols or directly to the back-end 
storage network with a dedicated client. Value-add data services enable 
integrated data protection and policy-based movement of files between multiple 
tiers of primary and secondary storage, including cloud. Storage- and 
server-agnostic, SNFS may be run on customer-supplied hardware or on Quantum 
server and storage nodes. 


                     Solution Under Test Bill of Materials                     
                     =====================================                     

Item                                                                          
No   Qty  Type       Vendor     Model/Name Description                        
---- ---- ---------- ---------- ---------- -----------------------------------
   1    1 Parallel   Quantum    StorNext   High-performance parallel file    
          File                  V7.0.1     system; scales capacity and       
          System                           performance across storage nodes; 
                                           multiple OS support               
   2   10 F1000      Quantum    F-Series   Single node F1000; each node has  
          Storage               NVMe       10 Micron 9300 MTFDHAL15T3TDP     
          Node                  Storage    15.36TB NVMe SSDs, a single AMD   
                                           EPYC 7261 8-core processor @      
                                           2.5GHz, 64GB memory, and 2 x dual-
                                           port Mellanox ConnectX-5 100GbE   
                                           HBAs (MCX518A-CCAT). 2 x 100GbE   
                                           connections to the switch fabric, 
                                           1 per ethernet adaptor            
   3   14 Clients    Quantum    Xcellis    Quantum XWE; each single 1U server
                                Workflow   has 192GB memory, dual Intel(R)   
                                Extender   Xeon(R) Silver 4110 8-core CPUs @ 
                                (XWE) Gen2 2.10GHz, and 2 x dual-port 100GbE 
                                           Mellanox MT28800 [ConnectX-5 Ex]  
                                           adapters. 2 x 100GbE connections  
                                           to the switch fabric, 1 per       
                                           ethernet adaptor                  
   4    1 Metadata   Quantum    Xcellis    Dual 1U servers with HA (high     
          Controller            Workflow   availability); each server has    
                                Director   64GB memory, dual Intel(R) Xeon(R)
                                (XWD) Gen2 Silver 4110 8-core CPUs @ 2.10GHz,
                                           and 1 x dual-port 100GbE Mellanox 
                                           MT28800 [ConnectX-5]. Each card   
                                           connects with a single DAC to the 
                                           switch infrastructure, for        
                                           administrative purposes only.     
                                           Note: the secondary node is also  
                                           used as the "SPEC Prime".         
   5    2 100GbE     Arista     Arista     32 Port, 100GbE Ethernet switch    
          switch                DCS-7060CX                                    
                                2-32S-F                                       
   6    1 1GbE       Netgear    Netgear    48 Port, 1GbE Ethernet switch      
          switch                ProSAFE                                       
                                GS752TP                                       
   7    1 1GbE       Dell       Dell Power 48 Port, 1GbE Ethernet switch      
          switch                Connect                                       
                                6248                                          


                            Configuration Diagrams                             
                            ======================                             

1) sfs2014-20210126-00075.config1.pdf (see SPEC SFS2014 results webpage)

                              Component Software                               
                              ==================                               

Item                           Name and                                        
No   Component    Type         Version      Description                        
---- ------------ ------------ ------------ -----------------------------------
   1 Storage Node Operating    Quantum CSP  Cloud Storage Platform (CSP) OS on 
                  System       1.2.0        each storage node                  
   2 Client       Operating    CentOS 7.7   Operating system on the load       
                  System                    generators (clients)               
   3 StorNext     Operating    CentOS 7.7   Operating system on metadata       
     Metadata     System                    controller                         
     controllers                                                               
   4 SPEC SFS     Operating    CentOS 7.7   Operating system on SPEC SFS Prime 
     Prime        System                                                       


                 Hardware Configuration and Tuning - Physical                  
                 ============================================                  

None

                    Hardware Configuration and Tuning Notes                    
                    ---------------------------------------                    

All F1000 nodes were stock installations; no other hardware alterations were 
needed. 



                 Software Configuration and Tuning - Physical                  
                 ============================================                  

+----------------------------------------------------------------------+
|                          F1000 Storage Node                          |
+----------------------------------------------------------------------+
Parameter Name  Value           Description                             
--------------- --------------- ----------------------------------------
Jumbo Frames    9000            set Jumbo frames to 9000                
+----------------------------------------------------------------------+
|                                Client                                |
+----------------------------------------------------------------------+
Parameter Name  Value           Description                             
--------------- --------------- ----------------------------------------
Jumbo Frames    9000            set Jumbo frames to 9000                
nr_requests     512             maximum number of read and write        
                                requests that can be queued at one time 
scheduler       noop            I/O scheduler in Linux                  

                    Software Configuration and Tuning Notes                    
                    ---------------------------------------                    

Mount options on each client: 
cachebufsize=128k,buffercachecap=16384,dircachesize=32m,buffercache_iods=32,bufferlowdirty=6144,bufferhighdirty=12288 
Both scheduler and nr_requests were set in /usr/cvfs/config/deviceparams on 
each client. 
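
As an illustrative sketch only: the deviceparams key=value syntax and the
interface name below are assumptions for illustration, not taken from this
report.

```
# /usr/cvfs/config/deviceparams on each client
# (key=value syntax assumed for illustration)
scheduler=noop
nr_requests=512

# Jumbo frames on each 100GbE interface (interface name is hypothetical)
ip link set dev ens1f0 mtu 9000
```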


                               Service SLA Notes                               
                               -----------------                               

None 



                            Storage and Filesystems                            
                            =======================                            

Item                                                          Stable        
No   Description                           Data Protection    Storage    Qty
---- ------------------------------------- ------------------ -------- -----
   1 Micron 9300 15.36TB NVMe SSD, 10 per  RAID 10            Yes        100
     storage node                                                           

        Number of Filesystems  1                                             
               Total Capacity  431 TiB                                       
              Filesystem Type  StorNext                                      

                           Filesystem Creation Notes                           
                           -------------------------                           

A StorNext file system was created across all 10 storage nodes: 20 LUNs in 
total, backed by 100 NVMe disks. A single stripe group was used, with 
stripebreadth=2.5MB and a round-robin allocation pattern. Metadata and user 
data were combined into this single stripe group and striped across all of 
its LUNs. 
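
The layout arithmetic implied by these notes can be sketched as follows (the
per-pass figure is derived here, not stated in the report):

```python
# Stripe-group geometry from the "Filesystem Creation Notes" above.
nodes = 10                 # F1000 storage nodes
luns_per_node = 2          # each node's RAID 10 set is sliced into 2 LUNs
stripebreadth_mb = 2.5     # data laid on each LUN before moving to the next

total_luns = nodes * luns_per_node
full_pass_mb = total_luns * stripebreadth_mb   # one round-robin pass, all LUNs

print(total_luns)      # 20
print(full_pass_mb)    # 50.0
```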


                         Storage and Filesystem Notes                          
                         ----------------------------                          

Each F1000 contains 10 NVMe devices configured in RAID 10, which is then 
sliced into two LUNs per node. 



                      Transport Configuration - Physical                       
                      ==================================                       

Item                 Number of                                                 
No   Transport Type  Ports Used Notes                                          
---- --------------- ---------- -----------------------------------------------
   1 100GbE              20     Storage nodes used a total of 20 100GbE ports,
                                2 per node                                    
   2 100GbE              28     Load generators (clients) used a total of 28  
                                100GbE ports, 2 per client                    
   3 100GbE              1      Xcellis Workflow Director Gen 2 used 1 port of 
                                100GbE per metadata controller, for            
                                administration purposes only.                  
   4 100GbE              1      Xcellis Workflow Director Secondary metadata   
                                controller Gen 2/Prime, used 1 port of 100GbE  
                                for administration purposes only.              

                         Transport Configuration Notes                         
                         -----------------------------                         

The core switch configuration consisted of two independent 100GbE subnets; 
each storage node and client had dual 100GbE connections, one per switch. 



                              Switches - Physical                              
                              ===================                              

                                          Total  Used                          
Item                                      Port   Port                          
No   Switch Name          Switch Type     Count  Count Notes                   
---- -------------------- --------------- ------ ----- ------------------------
   1 Arista DCS-          100GbE          64     50    48 ports for storage   
     7060CX2-32S-F, total                              and clients, 1 port for
     of 2 switches                                     the primary metadata   
                                                       controller, and 1 port 
                                                       for the Prime/metadata 
                                                       controller             
   2 Netgear ProSAFE      1GbE            48     47    Management switch for  
     GS752TP                                           metadata traffic and   
                                                       administrative access  
                                                       to the SUT; includes   
                                                       an additional 6 ports  
                                                       for the Xcellis        
                                                       Workflow Director      
   3 Dell PowerConnect    1GbE            48     41    Management switch for  
     6248                                              metadata traffic and   
                                                       administrative access  
                                                       to the SUT             


                        Processing Elements - Physical                         
                        ==============================                         

Item                                                                           
No    Qty Type     Location       Description               Processing Function
---- ---- -------- -------------- ------------------------- -------------------
   1   10 CPU      Storage Node   AMD EPYC 7261 8-Core      Storage            
                                  Processor, @ 2.5Ghz                          
   2   28 CPU      Load           Intel(R) Xeon(R) Silver   StorNext Client    
                   Generator,     4110 CPU 8 Core @ 2.10GHz                    
                   Client                                                      
   3    2 CPU      Prime, Client  Intel(R) Xeon(R) Silver   SPEC SFS2014 Prime 
                                  4110 CPU 8 Core @ 2.10GHz                    

                           Processing Element Notes                            
                           ------------------------                            

None 



                               Memory - Physical                               
                               =================                               

                           Size in   Number of                           
Description                  GiB     Instances  Nonvolatile   Total GiB  
------------------------- ---------- ---------- ------------ ------------
F1000 Storage memory          64         10          V           640     
Client memory                192         14          V           2688    
Prime memory                  64         1           V            64     
                                Grand Total Memory Gibibytes     3392    

                                 Memory Notes                                  
                                 ------------                                  

Storage nodes have 64GB of memory each, for a total of 640GB. Clients have 
192GB of memory each, for a total of 2,688GB. The Prime has 64GB of memory. 
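
The grand total follows from per-instance size times instance count; a quick
sketch of the arithmetic from the table above:

```python
# Memory totals from the "Memory - Physical" table above:
# (GiB per instance, number of instances) for each component.
components = {
    "F1000 Storage": (64, 10),
    "Client": (192, 14),
    "Prime": (64, 1),
}

totals_gib = {name: size * qty for name, (size, qty) in components.items()}
grand_total_gib = sum(totals_gib.values())

print(totals_gib)        # {'F1000 Storage': 640, 'Client': 2688, 'Prime': 64}
print(grand_total_gib)   # 3392
```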



                                Stable Storage                                 
                                ==============                                 

The F1000 storage node does not use a write cache to temporarily store data 
in flight; writes are therefore committed to disk immediately. All data, 
including metadata, is protected in the file system by a RAID 10 storage 
configuration. The entire SUT, both storage nodes and clients, is protected 
with redundant power supplies. F1000 nodes and clients have two NVMe system 
devices in a 1+1 configuration for OS redundancy. Metadata servers are 
configured in a high-availability pair, ensuring file system access in the 
event of a metadata server failure. 


                    Solution Under Test Configuration Notes                    
                    =======================================                    

The solution is a standardized configuration by Quantum, with Xcellis 
Workflow Directors managing metadata for the file system. F1000 storage nodes 
are off-the-shelf nodes designed for high-performance streaming media as well 
as high IOPS for highly randomized workflows. The file system was configured 
as per the "Filesystem Creation Notes" above, in order to accommodate a mixed 
workflow of random and sequential processes, including metadata striped 
across the file system. 


                             Other Solution Notes                              
                             ====================                              

None 


                                   Dataflow                                    
                                   ========                                    

The entire SUT is connected per the SUT diagram. Each of the 14 clients is 
directly connected using iSER/RDMA. No special tuning parameters were applied 
to the connections. One file system was created and shared as a StorNext file 
system. 


                                  Other Notes                                  
                                  ===========                                  

None 


                              Other Report Notes                               
                              ==================                               

None 



===============================================================================

Generated on Thu Feb 18 15:16:38 2021 by SpecReport
Copyright (C) 2016-2021 Standard Performance Evaluation Corporation