SPECstorage™ Solution 2020_eda_blended Result

Copyright © 2016-2025 Standard Performance Evaluation Corporation

Sangfor Technologies Inc. | SPECstorage Solution 2020_eda_blended = 2500 Job_Sets
Sangfor Unified Storage F8000 with 2 nodes | Overall Response Time = 0.47 msec


Performance

Business Metric (Job_Sets) | Average Latency (msec) | Job_Sets Ops/Sec | Job_Sets MB/Sec
250 | 0.135 | 112506 | 1814
500 | 0.169 | 225012 | 3631
750 | 0.197 | 337519 | 5446
1000 | 0.198 | 450025 | 7261
1250 | 0.263 | 562531 | 9076
1500 | 0.303 | 675037 | 10892
1750 | 0.404 | 787544 | 12706
2000 | 0.536 | 900045 | 14523
2250 | 0.866 | 1012554 | 16339
2500 | 2.478 | 1125050 | 18151
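
As a hedged sanity check on the table above (an observation, not part of the published result), throughput scales linearly with the business metric at roughly 450 Ops/Sec per Job_Set:

    # Ops/Sec divided by Job_Sets at the peak load point (bc assumed available)
    echo '1125050 / 2500' | bc -l    # ~450 Ops/Sec per Job_Set, consistent across all load points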
Performance Graph


Product and Test Information

Sangfor Unified Storage F8000 with 2 nodes
Tested by: Sangfor Technologies Inc.
Hardware Available: October 2025
Software Available: October 2025
Date Tested: September 2025
License Number: 7066
Licensee Locations: Shenzhen, Guangdong Province, China

This product provides unified hosting for all types of services, enabling global business hosting with worry-free architecture evolution. It features unified management of both hot and cold data, eliminating the need to differentiate between fast and slow media, thus ensuring a consistent business experience. Furthermore, it offers unified storage for data of any size, allowing for flexible on-demand expansion while optimizing storage costs. This solution is designed to simplify data processing and management, enhance operational efficiency, and help businesses effectively adapt to changing market demands.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Storage System | Sangfor | Sangfor Unified Storage | The storage configuration consisted of 1 Sangfor Unified Storage F8000 HA pair (2 controller nodes in total). A Sangfor Unified Storage F8000 HA pair can accommodate up to 24 NVMe SSDs.
2 | 6 | Network Interface Card | NVIDIA | NVIDIA ConnectX-7 200GbE/NDR200 | 2-port 200Gb/s InfiniBand network card; 1 card per controller, 1 card per client.
3 | 2 | Network Interface Card | Mellanox | CX4121A | 2-port 25GbE network card; 1 card per controller.
4 | 1 | Switch | Mellanox | Mellanox QM8790 | Used for data connections between the clients and the storage system.
5 | 1 | Switch | Mellanox | Mellanox MSN2010-CB2F | Used for the cluster interconnect between controllers.
6 | 2 | Client | Huaqin | - | Each client contains 2 Intel(R) Xeon(R) Gold 5418Y CPUs @ 2.00GHz with 24 cores each, 8 x 32GB DDR5 DIMMs, and a 480GB SATA 3.2 SSD (Device Model: SAMSUNG MZ7LH480HAHQ-00005). Both clients generate the workload; one also serves as the Prime Client.
7 | 1 | Client | SANGFOR | - | The client contains 2 Intel(R) Xeon(R) Gold 5318Y CPUs @ 2.10GHz with 24 cores each, 8 x 32GB DDR4 DIMMs, and a 480GB SATA 3.2 SSD (Device Model: SAMSUNG MZ7L3480HCHQ-00B7C). Used to generate the workload.
8 | 1 | Client | Supermicro | - | The client contains 1 Intel(R) Xeon(R) Gold 5512U CPU @ 2.10GHz with 28 cores, 8 x 32GB DDR5 DIMMs, and a 1TB NVMe SSD (Device Model: SAMSUNG MZ1L2960HCJR-00A07). Used to generate the workload.
9 | 24 | Solid-State Drive | Union Memory | UH812a 3.84TB NVMe | TLC NVMe SSD for data storage; 24 SSDs in total.

Configuration Diagrams

  1. Sangfor Unified Storage Test Diagram

Component Software

Item No | Component | Type | Name and Version | Description
1 | Linux | Operating System | CentOS 8.5 (kernel 4.18.0-348.el8.x86_64) | Operating system (OS) for the 4 clients
2 | Sangfor Unified Storage F8000 | Storage System | Sangfor EDS v5.3.0 | Storage system software

Hardware Configuration and Tuning - Physical

Storage
Parameter Name | Value | Description
NA | NA | NA

Hardware Configuration and Tuning Notes

None

Software Configuration and Tuning - Physical

Clients
Parameter Name | Value | Description
protocol | rdma | NFS mount option selecting the RDMA transport
nfsvers | 3 | NFS mount option selecting NFS version 3
port | 20049 | NFS mount option setting the connection port for NFS over RDMA

Software Configuration and Tuning Notes

The Sangfor Unified Storage F8000 provides a unified file system. Once the environment is deployed, the file system is created automatically; no additional commands are required for its creation. The command for mounting a share on a client is as follows: mount -t nfs -o vers=3,rdma,port=20049 {server_ip}:/{share_name} {mount_point}. We created 24 NFS shared directories. Because CPU processing capability differs among the clients, the number of mount points used per client varies: clients equipped with two Intel Xeon Gold 5418Y processors use 15 mount points each, while the other clients use 9 mount points each, as specified in the benchmark configuration file.
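
A minimal sketch of those mounts is shown below; the server IP, share names, and mount-point paths are hypothetical placeholders, not values from the tested configuration.

    #!/bin/bash
    # Hypothetical sketch: mount the 24 NFS shares over RDMA as described above.
    # SERVER, share names, and mount points are placeholders, not tested values.
    SERVER=192.168.1.10
    for i in $(seq -w 1 24); do
        mkdir -p /mnt/share${i}
        # vers=3, rdma, port=20049 are the mount options listed in the table above
        mount -t nfs -o vers=3,rdma,port=20049 ${SERVER}:/share${i} /mnt/share${i}
    done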

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 3.84TB NVMe drives used as data drives to form a storage pool protected with EC 16+2 | EC | Yes | 24
2 | 1TB NVMe SSD, 1 per controller, used as boot media | None | Yes | 2

Number of Filesystems: 1
Total Capacity: 83.84 TiB
Filesystem Type: Sangfor Phoenix

Filesystem Creation Notes

All data disks in all nodes were used when creating the file system.
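
As a back-of-the-envelope check (an inference on our part, not a statement from the report), the 83.84 TiB total capacity is approximately the raw capacity of the 24 data drives converted from decimal TB to binary TiB:

    # Hypothetical sanity check: 24 drives x 3.84 TB (decimal), expressed in TiB
    echo '24 * 3.84 * 10^12 / 2^40' | bc -l    # ~83.82 TiB, close to the reported 83.84 TiB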

Storage and Filesystem Notes

The storage configuration includes 1 Sangfor Unified Storage F8000 HA pair (2 controller nodes in total); in the following context, the terms controller and node refer to these controller nodes. The storage system utilizes a full-stack design combining software-hardware collaboration with SDS 3.0:

  1. The underlying layer is based on NVMe SSDs, connected to integrated disk-controller servers via RDMA/NVMe-oF high-speed networks.
  2. The core comprises a distributed indexing layer (ROW append writes, global wear leveling, and hot/cold data flow) and a persistence layer (EC/replication and active-active metadata), providing data layout and reliability assurance.
  3. The upper layer integrates block, file, and object full-protocol gateways and includes built-in data services such as snapshots, cloning, QoS, and remote replication.

With end-to-end load balancing, this single architecture can simultaneously meet the demands of extremely low-latency small files, high-throughput large files, and mixed multi-protocol workloads.
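
Because the persistence layer protects data with EC 16+2 in this test, a quick hedged calculation shows the capacity efficiency and fault tolerance implied by that scheme:

    # EC 16+2: every 18 strips carry 16 data strips and 2 parity strips,
    # so the pool tolerates 2 simultaneous device failures
    echo '16 / (16 + 2)' | bc -l    # ~0.889 of raw capacity remains usable for data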

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 200Gb InfiniBand | 12 | The storage system used 4 x 200Gb/s InfiniBand connections to the switch, and the clients used 8 x 200Gb/s InfiniBand connections to the switch (2 x 200Gb/s per client).
2 | 25GbE | 4 | 2 ports per controller, bonded in LACP mode for the cluster interconnect.

Transport Configuration Notes

The storage system used 4 x 200Gb/s InfiniBand ports for data transport (2 x 200Gb/s per controller), and each client connected to the switch with 2 x 200Gb/s InfiniBand ports. Each controller used 2 x 25GbE ports, bonded in LACP mode, for the cluster interconnect.
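
One hypothetical way to verify this layout from a Linux host is sketched below; the device and bond names are assumptions, not taken from the tested systems.

    # Show InfiniBand port state and link rate for a ConnectX-7 adapter (CA name assumed)
    ibstat mlx5_0
    # Map InfiniBand devices to network interfaces (Mellanox/NVIDIA OFED utility)
    ibdev2netdev
    # Inspect the LACP bond carrying the 2 x 25GbE cluster interconnect (bond name assumed)
    cat /proc/net/bonding/bond0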

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Mellanox QM8790 | 200Gb InfiniBand | 40 | 12 | The storage system used 4 connections (2 ports per controller node); the clients used 8 connections (2 ports per client).
2 | Mellanox MSN2010-CB2F | 25GbE | 18 | 4 | The storage system used 4 connections (2 ports per controller node).

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 2 | CPU | Storage Controller | Intel(R) Xeon(R) Platinum 8592+ CPU @ 1.9GHz with 64 cores | NFS, RDMA, and storage controller functions
2 | 4 | CPU | Clients | Intel(R) Xeon(R) Gold 5418Y CPU @ 2.00GHz with 24 cores | NFS client, Linux OS
3 | 2 | CPU | Clients | Intel(R) Xeon(R) Gold 5318Y CPU @ 2.10GHz with 24 cores | NFS client, Linux OS
4 | 1 | CPU | Clients | Intel(R) Xeon(R) Gold 5512U CPU @ 2.10GHz with 28 cores | NFS client, Linux OS

Processing Element Notes

Each controller node contains 1 Intel(R) Xeon(R) Platinum 8592+ processor with 64 cores at 1.9GHz. Two clients each contain 2 Intel(R) Xeon(R) Gold 5418Y processors with 24 cores each at 2.00GHz. One client contains 2 Intel(R) Xeon(R) Gold 5318Y processors with 24 cores each at 2.10GHz. One client contains 1 Intel(R) Xeon(R) Gold 5512U processor with 28 cores at 2.10GHz.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Memory of one controller node in the Sangfor Unified Storage F8000 HA pair | 256 | 2 | V | 512
Memory for each of the 4 clients | 256 | 4 | V | 1024
Grand Total Memory Gibibytes: 1536

Memory Notes

None

Stable Storage

Sangfor Unified Storage does not use internal memory to temporarily cache write data: all writes are submitted directly to disk and protected via Sangfor Unified Storage distributed data protection (EC 16+2 in this case), so no RAM battery protection is needed. Sangfor Unified Storage is an active-active, highly available cluster in which the dual-ported SSDs can be accessed from both controllers. In the event of a controller failure or power outage, the surviving controller takes over and continues to provide access to the data.
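
As an illustrative client-side spot check (not part of the benchmark procedure; the mount path is a placeholder), a synchronous write over the NFSv3 mount should not complete until the data reaches stable storage:

    # Hypothetical stable-write probe: oflag=sync requires each write to be
    # committed to stable storage before dd continues (path is a placeholder)
    dd if=/dev/zero of=/mnt/share01/stable_probe bs=4k count=256 oflag=sync
    rm -f /mnt/share01/stable_probe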

Solution Under Test Configuration Notes

  1. Front-end and back-end I/O collaboration: Phoenix InFlash intelligent I/O technology deeply integrates the characteristics of flash memory to reduce service latency.
  2. SDPC (Software Defined Persistent Cache) uses write-through technology to write cached data directly to the Persistent Memory Region (PMR) of the NVMe flash drives, with triple replication. This eliminates the risk of data loss during single-controller operation and removes the need for battery backup, simplifying operations while maintaining high reliability.

Other Solution Notes

None

Dataflow

Please reference the configuration diagram. Four clients were used to generate the workload; one of them also acted as the Prime Client, managing the other three workload clients. Each client was connected to the Mellanox switch via two 200Gb/s InfiniBand links. Two controller nodes were deployed, each connected to the data switch through two 200Gb/s InfiniBand links, and the clients mounted the shared directories using the NFSv3 protocol. The cluster provided access to the file system through all four 200Gb/s InfiniBand ports connected to the data switch.

Other Notes

All servers had Spectre/Meltdown patches installed to address potential data-leakage risks.

Other Report Notes

SANGFOR is a registered trademark of Sangfor Technologies Inc., Sangfor Technologies Building, No.16 Xiandong Road, Xili Street, Nanshan District, Shenzhen City, Guangdong Province 518055, China


Generated on Tue Oct 14 15:09:59 2025 by SpecReport
Copyright © 2016-2025 Standard Performance Evaluation Corporation