SPECstorage™ Solution 2020_eda_blended Result
| Sangfor Technologies Inc. | SPECstorage Solution 2020_eda_blended = 2500 Job_Sets |
|---|---|
| Sangfor Unified Storage F8000 with 2 nodes | Overall Response Time = 0.47 msec |
| Sangfor Unified Storage F8000 with 2 nodes | |
|---|---|
| Tested by | Sangfor Technologies Inc. |
| Hardware Available | October 2025 |
| Software Available | October 2025 |
| Date Tested | September 2025 |
| License Number | 7066 |
| Licensee Locations | Shenzhen, Guangdong Province, China |
This product provides unified hosting for all types of services, allowing an entire business to be hosted on a single platform and the architecture to evolve without disruption. It manages hot and cold data uniformly, eliminating the need to distinguish between fast and slow media and ensuring a consistent service experience. It also offers unified storage for data of any size, allowing flexible on-demand expansion while keeping storage costs optimized. The solution is designed to simplify data processing and management, improve operational efficiency, and help businesses adapt effectively to changing market demands.
| Item No | Qty | Type | Vendor | Model/Name | Description |
|---|---|---|---|---|---|
| 1 | 1 | Storage System | Sangfor | Sangfor Unified Storage | The storage configuration consisted of 1 Sangfor Unified Storage F8000 HA pair (2 controller nodes in total). A Sangfor Unified Storage F8000 HA pair can accommodate up to 24 NVMe SSDs. |
| 2 | 6 | Network Interface Card | NVIDIA | NVIDIA ConnectX-7 200GbE/NDR200 | 2-port 200Gb/s InfiniBand Network Card, 1 card per controller, 1 card per client. |
| 3 | 2 | Network Interface Card | Mellanox | CX4121A | 2-port 25GbE Network Card, 1 card per controller. |
| 4 | 1 | Switch | Mellanox | Mellanox QM8790 | Used for data connection between clients and storage systems. |
| 5 | 1 | Switch | Mellanox | Mellanox MSN2010-CB2F | Used for cluster connection between controllers. |
| 6 | 2 | Client | Huaqin | | Each client contains 2 Intel(R) Xeon(R) Gold 5418Y CPUs @ 2.00GHz with 24 cores each, 8 x 32GB DDR5 DIMMs, and a 480GB SATA 3.2 SSD (Device Model: SAMSUNG MZ7LH480HAHQ-00005). Both clients are used to generate the workload; one also serves as the Prime Client. |
| 7 | 1 | Client | SANGFOR | | The client contains 2 Intel(R) Xeon(R) Gold 5318Y CPUs @ 2.10GHz with 24 cores each, 8 x 32GB DDR4 DIMMs, and a 480GB SATA 3.2 SSD (Device Model: SAMSUNG MZ7L3480HCHQ-00B7C). The client is used to generate the workload. |
| 8 | 1 | Client | Supermicro | | The client contains 1 Intel(R) Xeon(R) Gold 5512U CPU @ 2.10GHz with 28 cores, 8 x 32GB DDR5 DIMMs, and a 1TB NVMe SSD (Device Model: SAMSUNG MZ1L2960HCJR-00A07). The client is used to generate the workload. |
| 9 | 24 | Solid-State Drive | Union Memory | UH812a 3.84TB NVMe | TLC NVMe SSD for data storage, 24 SSDs in total. |
| Item No | Component | Type | Name and Version | Description |
|---|---|---|---|---|
| 1 | Linux | Operating System | CentOS 8.5 (kernel 4.18.0-348.el8.x86_64) | Operating System (OS) for the 4 clients |
| 2 | Sangfor Unified Storage F8000 | Storage System | Sangfor EDS v5.3.0 | Storage System |
| Storage | Parameter Name | Value | Description |
|---|---|---|---|
| | NA | NA | NA |
None
| Clients | Parameter Name | Value | Description |
|---|---|---|---|
| | protocol | rdma | NFS mount option for the transport protocol |
| | nfsvers | 3 | NFS mount option for the NFS version |
| | port | 20049 | NFS mount option for the connection port of NFS over RDMA |
The Sangfor Unified Storage F8000 provides a unified file system. Once the environment is deployed, the file system is created automatically, and no additional commands are required for its creation. The command for mounting a client directory is as follows: mount -t nfs -o vers=3,rdma,port=20049 {server_ip}:/{share_name} {mount_point}. We created 24 NFS shared directories. Due to differences in CPU processing capability among the clients, the number of mount points per client varies: the clients equipped with two Intel Xeon Gold 5418Y processors use 15 mount points each, while the other clients use 9 mount points each, as reflected in the mount points specified in the configuration file.
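For illustration only, the mount procedure above can be scripted roughly as follows; the server IP, share names, and mount-point paths are hypothetical placeholders, and the actual shares and per-client mount counts are those listed in the benchmark configuration file.

```bash
#!/bin/bash
# Hedged sketch: mount NFS shares over RDMA (NFSv3, port 20049) as described above.
# SERVER_IP and the /shareNN export names are placeholders, not values from the run.
SERVER_IP=192.0.2.10
for i in $(seq -w 1 24); do
    mkdir -p /mnt/share${i}
    mount -t nfs -o vers=3,rdma,port=20049 ${SERVER_IP}:/share${i} /mnt/share${i}
done
```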
None
| Item No | Description | Data Protection | Stable Storage | Qty |
|---|---|---|---|---|
| 1 | The 3.84TB NVMe drives are used as data drives to form an EC 16+2 storage pool | EC | Yes | 24 |
| 2 | 1TB NVMe SSD, 1 per controller; used as boot media | none | Yes | 2 |
| Number of Filesystems | Total Capacity | Filesystem Type |
|---|---|---|
| 1 | 83.84 TiB | Sangfor Phoenix |
All data disks of all nodes were used when creating the file system.
The storage configuration includes 1 Sangfor Unified Storage F8000 HA pair (2 controller nodes in total); in the following text, the terms controller and node refer to these controller nodes. The storage system uses a full-stack design that combines hardware/software collaboration with SDS 3.0. The underlying layer is based on NVMe SSDs, connected to integrated disk-controller servers via RDMA/NVMe-oF high-speed networks. The core comprises a distributed indexing layer (with ROW append writes, global wear leveling, and hot and cold data flow) and a persistence layer (with EC/replication and active-active metadata), providing data layout and reliability assurance. The upper layer integrates block, file, and object protocol gateways and includes built-in data services such as snapshots, cloning, QoS, and remote replication. With end-to-end load balancing, this single architecture can simultaneously serve extremely low-latency small-file, high-throughput large-file, and mixed multi-protocol workloads.
| Item No | Transport Type | Number of Ports Used | Notes |
|---|---|---|---|
| 1 | 200Gb InfiniBand | 12 | The storage system used 4 x 200Gb/s InfiniBand connections to the switch, and the clients used 8 x 200Gb/s InfiniBand connections to the switch (2 x 200Gb/s per client). |
| 2 | 25GbE | 4 | 2 ports per controller, bonded in LACP mode for the cluster interconnect. |
The storage system used 4 x 200Gb/s InfiniBand ports for data transport (2 x 200Gb/s per controller), and each client connected to the switch with 2 x 200Gb/s InfiniBand ports. Each controller used 2 x 25GbE ports (bonded in LACP mode) for the cluster interconnect.
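As an illustration only (not taken from the disclosure), a 2-port LACP bond of the kind described above for the cluster interconnect could be configured on a Linux host with NetworkManager roughly as follows; the interface names and address are hypothetical placeholders.

```bash
# Hedged sketch: bond two 25GbE ports in LACP (802.3ad) mode for a cluster interconnect.
# Interface names (ens1f0/ens1f1) and the IP address are hypothetical placeholders.
nmcli connection add type bond ifname bond0 con-name bond0 \
      bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"
nmcli connection add type ethernet ifname ens1f0 master bond0
nmcli connection add type ethernet ifname ens1f1 master bond0
nmcli connection modify bond0 ipv4.method manual ipv4.addresses 192.0.2.11/24
nmcli connection up bond0
```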
| Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes |
|---|---|---|---|---|---|
| 1 | Mellanox QM8790 | 200Gb InfiniBand | 40 | 12 | The storage system used 4 connections (2 ports per controller node); the clients used 8 connections (2 ports per client). |
| 2 | Mellanox MSN2010-CB2F | 25GbE | 18 | 4 | The storage system used 4 connections (2 ports per controller node). |
| Item No | Qty | Type | Location | Description | Processing Function |
|---|---|---|---|---|---|
| 1 | 2 | CPU | Storage Controller | Intel(R) Xeon(R) Platinum 8592+ CPU @1.9GHz with 64 cores | NFS, RDMA, and Storage Controller functions |
| 2 | 4 | CPU | Clients | Intel(R) Xeon(R) Gold 5418Y CPU @2.00GHz with 24 cores | NFS Client, Linux OS |
| 3 | 2 | CPU | Clients | Intel(R) Xeon(R) Gold 5318Y CPU @2.10GHz with 24 cores | NFS Client, Linux OS |
| 4 | 1 | CPU | Clients | Intel(R) Xeon(R) Gold 5512U CPU @2.10GHz with 28 cores | NFS Client, Linux OS |
Each controller node contains 1 Intel(R) Xeon(R) Platinum 8592+ processor with 64 cores at 1.9GHz. Two clients each contain 2 Intel(R) Xeon(R) Gold 5418Y processors with 24 cores each at 2.00GHz. One client contains 2 Intel(R) Xeon(R) Gold 5318Y processors with 24 cores each at 2.10GHz. One client contains 1 Intel(R) Xeon(R) Gold 5512U processor with 28 cores at 2.10GHz.
| Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB |
|---|---|---|---|---|
| The memory of one controller node in the Sangfor Unified Storage F8000 HA pair | 256 | 2 | V | 512 |
| Memory for each of the 4 clients | 256 | 4 | V | 1024 |
| Grand Total Memory Gibibytes | | | | 1536 |
None
Sangfor Unified Storage does not use any internal memory to temporarily cache write data on its way to the underlying storage: all writes are submitted directly to disk and protected via Sangfor Unified Storage distributed data protection (EC 16+2 in this case), so no RAM battery protection is needed. Sangfor Unified Storage is an active-active, highly available cluster in which the SSDs can be accessed from both controllers through dual ports. In the event of a controller failure or power outage, either controller can take over for the other and continue serving data.
1. Front-end and back-end I/O collaboration: Phoenix InFlash intelligent I/O technology is deeply integrated with the characteristics of flash memory to reduce service latency.
2. SDPC (Software Defined Persistent Cache) uses write-through technology to write cached data directly to the Persistent Memory Region (PMR) of the NVMe flash drives, with triple replication. This eliminates the risk of data loss during single-controller operation and removes the need for a battery backup unit, simplifying operations while maintaining high reliability.
None
Please reference the configuration diagram. Four clients were used to generate the workload; one of the clients also acted as the Prime Client to manage the other three workload clients. Each client was connected to the Mellanox switch via two 200Gb/s InfiniBand links. Two controller nodes were deployed, each connected to the data switch through two 200Gb/s InfiniBand links, and the clients mounted the shared directories using the NFSv3 protocol. The cluster provided access to the file system through all four 200Gb/s InfiniBand ports connected to the data switch.
All servers have been installed with Spectre/Meltdown patches to address potential data leakage risks.
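As an illustrative check only (not part of the disclosure), the mitigation status can typically be inspected on the Linux clients like this:

```bash
# Hedged sketch: report Spectre/Meltdown mitigation status on a Linux host.
# Each file prints "Mitigation: ...", "Not affected", or "Vulnerable".
grep . /sys/devices/system/cpu/vulnerabilities/*
```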
SANGFOR is a registered trademark of Sangfor Technologies Inc., Sangfor Technologies Building, No.16 Xiandong Road, Xili Street, Nanshan District, Shenzhen City, Guangdong Province 518055, China
Generated on Tue Oct 14 15:09:59 2025 by SpecReport
Copyright © 2016-2025 Standard Performance Evaluation Corporation