SPECstorage™ Solution 2020_ai_image Result

Copyright © 2016-2026 Standard Performance Evaluation Corporation

Pure Storage SPECstorage Solution 2020_ai_image = 6300 AI_Jobs
Pure Storage FlashBlade//EXA Overall Response Time = 0.97 msec


Performance

Business Metric (AI_Jobs) | Average Latency (msec) | AI_Jobs Ops/Sec | AI_Jobs MB/Sec
 630 | 0.334 |  274086 |  61610
1260 | 0.359 |  548172 | 123227
1890 | 0.702 |  822258 | 184827
2520 | 0.710 | 1096344 | 246449
3150 | 0.827 | 1370429 | 308051
3780 | 0.970 | 1644515 | 369690
4410 | 1.157 | 1918602 | 431277
5040 | 1.355 | 2192687 | 492872
5670 | 1.601 | 2466770 | 554501
6300 | 1.846 | 2740854 | 616129
Performance Graph


Product and Test Information

Pure Storage FlashBlade//EXA
Tested by: Pure Storage
Hardware Available: June 2025
Software Available: June 2025
Date Tested: January 2026
License Number: 9072
Licensee Locations: 2555 Augustine Drive, Santa Clara, CA 95054

Pure Storage’s FlashBlade//EXA is an ultra-scale, disaggregated data storage platform built to deliver extreme throughput, low-latency metadata performance, and seamless scalability for large-scale AI and high-performance computing workloads.

It delivers this via RDMA-enabled pNFS and was validated here using the SPECstorage Solution 2020_ai_image workload.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Metadata Node system | Pure Storage | FlashBlade//EXA | The Metadata Node system was a 3-chassis multi-chassis configuration with 2 eXternal Fabric Modules (XFMs). Each XFM had 4 x 400Gbps uplink ports. Each chassis was connected to each XFM with 4 x 100Gb uplink ports. Each chassis was equipped with 10 x S500R1 FlashBlade//EXA blades. Each blade was equipped with 2 x 37.5TB N58R DirectFlash Modules (DFMs). { Metadata Node system details: [ 2 x eXternal Fabric Modules (XFMs) - Model: XFM-8400 - Part Number: 86-0001-04 ] [ 3 x FlashBlade Chassis - Model: CH-FB-II - Part Number: 83-0383-12 ] [ 10 x FlashBlade S500R1 blades per chassis - Model: FB-S500 - Part Number: 83-0433-08 ] [ 2 x DirectFlash Modules (DFMs) per blade - Raw Capacity: 37.50 TB (34.11 TiB) - Part Number: 83-0489-06 ] [ Pure Storage does not publish publicly accessible specifications for the components of the FlashBlade Metadata Node system. Detailed specifications are available only through customer or partner support documentation. A Technical Deep Dive on FlashBlade//S can be found at: https://www.purestorage.com/video/technical-deep-dive-on-flashblade/6307195175112.html ] }
2 | 30 | Data Node | Supermicro | Supermicro Data Nodes | Supermicro ASG-1115S-NE316R servers {CPU = [single-socket AMD EPYC 9355P processor with 32 physical cores (64 logical CPUs via SMT) on a 64-bit x86 architecture]} {MEMORY = (192 GB of DDR5 ECC memory)} {DATA NETWORK ADAPTERS = (2 x NVIDIA ConnectX-7 EN (MT2910) single-port 400 GbE QSFP112 Ethernet adapters installed, operating over PCIe Gen5 with secure firmware and InfiniBand/VPI functionality disabled)} {SSDs = [Each data node had 16 x KIOXIA KCM7DRJE3T84 3.84 TB enterprise NVMe SSDs installed, giving 61.45 TB of raw capacity and 28.88 TB of usable capacity per node]} {Operating System = [The FlashBlade//EXA Data Node Operating System (Purity//DN) was loaded onto each data node. Security scanning for Purity//DN is performed as part of the release process. Purity//DN does not provide mechanisms for non-administrative users to run third-party code, and thus is not affected by common OS vulnerabilities.]}
3 | 60 | Host Initiator | Supermicro | Ubuntu 24.04 Bare-Metal Host Initiators | Supermicro ASG-1115S-NE316R servers {CPU = [single-socket AMD EPYC 9355P processor with 32 physical cores (64 logical CPUs via SMT) on a 64-bit x86 architecture]} {MEMORY = (192 GB of DDR5 ECC memory)} {DATA NETWORK ADAPTERS = (2 x NVIDIA ConnectX-7 EN (MT2910) single-port 400 GbE QSFP112 Ethernet adapters installed, operating over PCIe Gen5 with secure firmware and InfiniBand/VPI functionality disabled)} {SSD = [1 x Micron 7450-series MTFDKBA480TFR 480 GB enterprise NVMe SSD (NVMe 1.4, PCIe-attached) with full SMART support and 0% media wear, used as a local system disk]} {Operating System = [Ubuntu 24.04.3 LTS, Kernel Linux 6.14.6clearflag-v1+]}
4 | 20 | Host Initiator | Supermicro | Ubuntu 24.04 Bare-Metal Host Initiators | Supermicro SYS-621C-TN12R servers {CPU = [dual-socket Intel Xeon Silver 4516Y+ platform with 48 physical cores (96 logical CPUs via SMT) on a 64-bit x86 architecture]} {MEMORY = (1024 GB of DDR5 ECC memory, reduced to 198752M via GRUB: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=198752M")} {DATA NETWORK ADAPTERS = (2 x NVIDIA ConnectX-7 EN (MT2910) single-port 400 GbE QSFP112 Ethernet adapters installed, operating over PCIe Gen5 with secure firmware and InfiniBand/VPI functionality disabled)} {SSD = [1 x Micron 5400-series MTFDDAK240TGA 240 GB enterprise SATA SSD (2.5-inch, SATA 6 Gb/s) with full SMART support and 100% remaining endurance, used as the system disk]} {Operating System = [Ubuntu 24.04.3 LTS, Kernel Linux 6.14.6clearflag-v1+]}
5 | 8 | Data Network Switch | NVIDIA | NVIDIA SN5600 data network switches | 2 x NVIDIA SN5600 spine data network switches; 6 x NVIDIA SN5600 leaf data network switches ( https://docs.nvidia.com/networking/display/sn5000/specifications#src-2705811927_Specifications-SN5600Specifications )

Configuration Diagrams

  1. Pure Storage FlashBlade//EXA lab logical diagram
  2. Pure Storage FlashBlade//EXA Physical Architecture diagram
  3. Pure Storage FlashBlade//EXA Metadata Node Connectivity Stack diagram
  4. Pure Storage FlashBlade//EXA Data Node Connectivity Stack diagram
  5. Pure Storage FlashBlade//EXA S500R1 3-chassis
  6. Pure Storage FlashBlade//EXA XFM Type
  7. Pure Storage FlashBlade//EXA XFM Uplink Port Speed
  8. Pure Storage FlashBlade//EXA XFM Downlink Port Speed
  9. Pure Storage FlashBlade//EXA Chassis Type
  10. Pure Storage FlashBlade//EXA Blade Type
  11. Pure Storage FlashBlade//EXA DFM Type
  12. Pure Storage FlashBlade//EXA FIOM Uplink Port Speed
  13. Pure Storage FlashBlade//EXA Datanode Partial List
  14. Pure Storage FlashBlade//EXA Datanode Partial List Raw and Usable Capacity

Component Software

Item No | Component | Type | Name and Version | Description
1 | Operating System (Initiators) | Host OS | Ubuntu 24.04.3 LTS, Kernel Linux 6.14.6clearflag-v1+ | Ubuntu 24.04.3 LTS installed on all 80 bare-metal initiator hosts. Kernel Linux 6.14.6clearflag-v1+ (backported "NFSv4/pNFS: Clear NFS_INO_LAYOUTCOMMIT in pnfs_mark_layout_stateid_invalid").
2 | FlashBlade Purity//FB | Metadata Node System Operating System | Purity//FB 4.6.4 (GA build) | Purity//FB is the proprietary FlashBlade operating environment responsible for managing DirectFlash Modules (DFMs), handling distributed metadata, RDMA/NFS protocol processing, and ensuring data integrity. The 4.6.4 release was used without patches to represent production-ready software. File striping was not enabled for this test.
3 | FlashBlade//EXA Data Node Operating System (Purity//DN) | Data Node Operating System | Purity//DN 1.0 (GA build) | Purity//DN is the dedicated operating system and services stack for the Data Nodes (DNs) of FlashBlade//EXA, providing the core OS and services required for data storage, management, and high-throughput operations in scale-out storage environments. It is distinct from Purity//FB (the FlashBlade controller OS) and follows a separate release cycle, though releases are generally aligned with major Purity//FB feature releases for compatibility.
4 | Networking Stack | Network Stack | pNFS with RDMA-enabled 400Gbps NICs | This includes all firmware and kernel-level drivers supporting RoCE (RDMA over Converged Ethernet) for ultra-low latency. Each host initiator and each data node has 2 x NVIDIA (Mellanox) ConnectX-7 EN (MT2910) single-port 400 GbE QSFP112 Ethernet adapters installed, operating over PCIe Gen5 with secure firmware and InfiniBand/VPI functionality disabled. BGP (Border Gateway Protocol) networking was used. Native OS-provided NFS client, drivers, and tools were used.
5 | pNFS Configuration | File System Client | NFSv4.1 mount with nconnect=16, RDMA | The namespace and filesystem metadata are served by a Metadata Node system that clients mount via NFSv4.1 over TCP. While the metadata service provides data layout information, host initiators communicate directly with the data nodes using pNFS semantics and RDMA for high-performance data access.

Hardware Configuration and Tuning - Physical

NFS Client
Parameter Name | Value | Description
nconnect | 16 | Enables multiple transport connections per mount
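
For illustration, the parameter is supplied at mount time. The following is a minimal sketch with a placeholder server address and export path; the actual command used in this test appears under Storage and Filesystem Notes below.

# Placeholder server/export; nconnect=16 opens 16 TCP connections for the
# single mount and spreads RPC traffic across them.
sudo mount -t nfs -o vers=4.1,proto=tcp,nconnect=16 192.0.2.10:/export /mnt/export

# Confirm the option took effect:
mount | grep nconnect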

Hardware Configuration and Tuning Notes

None

Software Configuration and Tuning - Physical

SPECstorage Solution 2020_ai_image Workload Engine
Parameter Name | Value | Description
NodeManager Count | 80 | Ensures balanced load generation

Software Configuration and Tuning Notes

The default warmup period ensures the storage reaches steady state before the measurement phase, as required by the SPECstorage Solution 2020 run rules.
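
For context, the load scaling and warmup used in this run would be expressed in the benchmark's sfs_rc configuration file roughly as follows. This fragment is a hypothetical reconstruction for illustration, not the submitted configuration file.

# Hypothetical sfs_rc fragment (reconstruction, not the submitted file).
BENCHMARK=AI_IMAGE        # SPECstorage Solution 2020 workload profile
LOAD=630                  # starting load point (AI_Jobs)
INCR_LOAD=630             # increment per load point
NUM_RUNS=10               # 630 through 6300 in ten steps
WARMUP_TIME=900           # seconds of warmup before each measurement phase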

Service SLA Notes

Purity//FB 4.6.4 and Purity//DN 1.0 were installed and operated under internal support governance with direct engineering oversight.

No patches or hotfixes were applied during the benchmark run.

The Ethernet switching fabric used in the testbed environment was configured by the internal networking team.

Run-time software (SPECstorage Solution 2020_ai_image, netmist, and nodeManagers) was deployed uniformly across all 80 host initiators.

All components in the testbed were physically hosted and internally maintained; no cloud infrastructure or external SLA dependencies were involved.

The testbed environment matched expected production-grade deployment topologies for FlashBlade customers.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | Pure Storage FlashBlade//EXA blades, designed for high-performance file and object workloads. Each blade is independently addressable with embedded compute and connected via XFM-to-FIOM architecture. Provides highly parallelized metadata access, with consistent latency and fault isolation between blades. | Distributed Erasure Coding | FlashBlade S-class DirectFlash Modules (DFMs), 2 x 37.5TB per blade | 30 Metadata Node Blades across 3 Chassis
2 | RAID 10-style layout: the inner layer consists of RAID 1 mirrors of NVMe drive pairs; the outer layer is RAID 0 striped across 8 mirrored devices; an XFS filesystem sits on top of the striped mirror set. This mdadm RAID0-over-RAID1 (striped mirrors) configuration has an inherent 50% capacity efficiency due to mirroring, making approximately 866 TB of usable capacity from 1843.5 TB of raw capacity mathematically consistent once RAID metadata, XFS filesystem structures, and alignment overhead are included. | RAID10-style layout implemented as RAID0-over-RAID1 | 16 x KIOXIA KCM7DRJE3T84 3.84 TB enterprise NVMe SSDs back the XFS filesystem. | 30 Data Nodes
3 | 80 Ubuntu 24.04 bare-metal initiator hosts with RDMA-enabled 400GbE NICs. Each host connects to the FlashBlade Metadata Node system via a single-port NFSv4.1 TCP mount using `nconnect=16`. Hosts generate benchmark load and issue sustained concurrent file operations for the duration of the test window. | Host-based checkpointing and client retry | Persistent boot NVMe, no local data retention | 80 Initiator Hosts
Number of Filesystems: 1 filesystem, distributed over 30 data nodes and shared to all 80 host initiators
Total Capacity: 1843.5 TB raw, 866.4 TB usable
Filesystem Type: pNFS (RDMA enabled)

Filesystem Creation Notes

The distributed filesystem was created via the FlashBlade//EXA GUI and configured with an NFSv4.1 export.

Storage and Filesystem Notes

When a filesystem is created on FlashBlade//EXA, the system orchestrates a series of coordinated actions across the metadata node system (MDN) and the data nodes (DNs), with a strong focus on scalability, performance, and data placement control.

1. Node Group Selection and Association

Node Groups: Before a filesystem can be created, at least one data node group must exist. A node group is a logical collection of data nodes that will serve as the backing storage for the filesystem. This design allows administrators to control which DNs are used for specific filesystems, limiting the blast radius of failures and optimizing performance for different workloads.

Association: When creating a filesystem, you must specify the node group it will use. All data for that filesystem will be placed only on the DNs in the selected group. This ensures that filesystems do not compete for IO resources across all DNs and that access can be maintained for unaffected files if a DN goes offline.

2. Filesystem Creation Workflow

Metadata Node (MDN) Actions: The MDN receives the filesystem creation request (typically via the management GUI or CLI). It records the association between the new filesystem and the chosen node group. The MDN persists all relevant metadata, including the filesystem's unique ID, node group membership, and configuration details, in its distributed metadata store on the DFMs.

Data Node (DN) Preparation: The MDN communicates with each DN in the node group to prepare them for the new filesystem. On each DN, an XFS filesystem is created atop a software RAID (MD-RAID) array, using the local SSDs. The XFS export is assigned a UUID, which is tracked by the MDN. (A conceptual sketch of these per-DN steps appears after this list.)

Export and Mounting: The DNs export the new XFS filesystem over NFS (typically NFSv3 over RDMA for data, NFSv4.1 over TCP for metadata). The MDN keeps a record of each export's UUID and IP address, ensuring that if a DN is replaced or its network changes, the system can recognize and re-associate the export.

3. Data Placement and File Mapping

Placement Algorithm: When files are created within the new filesystem, the MDN uses a data placement algorithm to select which DN in the node group will store each file. The selection is based on available capacity, ensuring balanced utilization. At GA, each file is mapped to a single DN; striping across DNs is not supported.

Metadata and Data Coordination: The MDN manages all metadata (directory structure, file attributes, etc.), while the DNs handle the actual file data. The MDN provides clients with the necessary information (DN IP, file handle, etc.) to access data directly on the appropriate DN.

4. Protocols and Control

Protocols: The system uses NFSv4.1 (pNFS) for client-to-MDN communication and NFSv3 (FlexFile layout, often over RDMA) for client-to-DN data transfer. The MDN also uses gRPC and NFSv3 to coordinate with DNs for export management and health monitoring.

5. Manageability and Limits

Node Group Constraints: You cannot create a filesystem with an empty node group, nor can you remove a DN from a node group if it is still in use by a live filesystem.

No Snapshots or Quotas: At GA, features like filesystem snapshots and quotas are not supported on FlashBlade//EXA.
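
The per-DN preparation in step 2 can be illustrated with standard Linux tools. The sketch below is a conceptual approximation only: Purity//DN performs these steps internally, and the device names, mount point, and export options shown here are invented for illustration.

# Conceptual sketch; Purity//DN automates this, device names are invented.
# 1. Build the striped-mirror (RAID0-over-RAID1) array from local NVMe drives
#    (shown with 4 drives; each data node in this test used 16).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme2n1 /dev/nvme3n1
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# 2. Create the XFS filesystem on top of the striped mirrors.
mkfs.xfs /dev/md10

# 3. Read back the filesystem UUID that the MDN tracks for this export.
blkid -s UUID -o value /dev/md10

# 4. Mount and export it over NFS for the data path.
mkdir -p /exa/fs0
mount /dev/md10 /exa/fs0
echo '/exa/fs0 *(rw,no_root_squash)' >> /etc/exports
exportfs -ra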

====

Example of mount command from host initiator:

# sudo mount -t nfs -o vers=4,nconnect=16 192.168.2.101:/specsfs2020 /mnt/specsfs2020

# mount | grep nfs

192.168.2.101:/specsfs2020 on /mnt/specsfs2020 type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,nconnect=16,timeo=600,retrans=2,sec=sys,clientaddr=192.168.10.117,local_lock=none,addr=192.168.2.101)

====
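
To confirm on a client that the intended NFS version, transport, and connection count are in effect, and that a pNFS layout driver is available, the stock NFS tooling suffices. The commands below are illustrative; no output from the test is reproduced.

# Per-mount NFS options as negotiated with the server:
nfsstat -m

# The FlexFile layout driver, when active, shows up as a loaded kernel module:
lsmod | grep nfs_layout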

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 400Gb Ethernet (RDMA-enabled) | 2 per each of the 80 host initiators (160 total); 2 per each of the 30 data nodes (60 total) | Each initiator used two 400Gbps RDMA-capable NICs connected to multiple central switches

Transport Configuration Notes

RDMA over Converged Ethernet (RoCEv2) was used across the entire fabric.

All initiator hosts were connected via two 400Gbps RDMA NICs to central Ethernet switches.

FlashBlade XFM modules provided 4 uplinks per chassis.

All routing managed with eBGP within the data plane.

Management plane uses standard layer 3 routing.
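
As a hedged illustration, RoCE capability on an initiator or data node can be confirmed with standard rdma-core tooling; device naming varies per host.

# List RDMA devices with port state and link layer; ConnectX-7 adapters
# running RoCE report "link_layer: Ethernet".
ibv_devinfo | grep -E 'hca_id|state|link_layer'

# Confirm the adapters on the PCIe bus:
lspci | grep -i mellanox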

Switches - Physical

Item NoSwitch NameSwitch TypeTotal Port CountUsed Port CountNotes
Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | 2 x NVIDIA SN5600 spine data network switches | 800Gb Ethernet Switch (RDMA-capable) | 128 | 113 | These switches support full-speed RoCEv2 transport and were configured with BGP (Border Gateway Protocol).
2 | 6 x NVIDIA SN5600 leaf data network switches | 800Gb Ethernet Switch (RDMA-capable) | 384 | 262 | These switches support full-speed RoCEv2 transport and were configured with BGP (Border Gateway Protocol).

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 20 | Dual-socket Intel Xeon Silver 4516Y+ platform with 48 physical cores (96 logical CPUs via SMT) on a 64-bit x86 architecture | Bare-Metal Supermicro SYS-621C-TN12R Host Initiators | Each host initiator is equipped with 1024 GB of DDR5 ECC memory (reduced to 198752M via GRUB: GRUB_CMDLINE_LINUX_DEFAULT='quiet splash mem=198752M') and 2 x 400 Gbps NVIDIA (Mellanox) ConnectX-7 EN 400GbE adapters providing RDMA over Ethernet (RoCE). | Load Generation and Benchmark Execution
2 | 60 | Single-socket AMD EPYC 9355P processor with 32 physical cores (64 logical CPUs via SMT) on a 64-bit x86 architecture | Bare-Metal Supermicro ASG-1115S-NE316R Host Initiators | Each host initiator is equipped with 192 GB of DDR5 ECC memory and 2 x 400 Gbps NVIDIA (Mellanox) ConnectX-7 EN 400GbE adapters providing RDMA over Ethernet (RoCE). | Load Generation and Benchmark Execution
3 | 30 | Single-socket AMD EPYC 9355P processor with 32 physical cores (64 logical CPUs via SMT) on a 64-bit x86 architecture | Bare-Metal Supermicro ASG-1115S-NE316R Data Nodes | Each data node is equipped with 192 GB of DDR5 ECC memory, 2 x 400 Gbps NVIDIA (Mellanox) ConnectX-7 EN 400GbE adapters providing RDMA over Ethernet (RoCE), and 16 x KIOXIA KCM7DRJE3T84 3.84 TB enterprise NVMe SSDs. | Data Storage
4 | 2 | Pure Storage FlashBlade XFM-8400 eXternal Fabric Module | FB//EXA MetaData Node System | The Pure Storage XFM-8400 is the external fabric interconnect module used in multi-chassis FlashBlade//S systems. It provides the network fabric connectivity that links multiple FlashBlade//S chassis together and connects them to host networks. In multi-chassis configurations, a pair of these XFM-8400 modules interconnects all chassis and servers, supporting high-speed optics (e.g., 10/25/40/100 Gbps QSFP and higher-speed options) to deliver scalable bandwidth for unified fast file and object workloads. | FB//EXA MetaData Node System eXternal Fabric Module
5 | 6 | Pure Storage FlashBlade FIOM-1000 Chassis Fabric IO Module | FB//EXA MetaData Node System | Pure Storage FlashBlade FIOM-1000 refers to the Fabric I/O Module used inside FlashBlade//S chassis. It is a hot-swappable midplane network and I/O module that provides the internal fabric connectivity between the blades and the rest of the system's networking infrastructure. Each FlashBlade//S chassis typically contains two FIOM-1000 modules for redundant fabric connectivity, and they host integrated Ethernet switching and external ports used for connecting the storage blades to client networks and the system fabric. These modules include multiple high-speed ports (e.g., QSFP28 for 100 GbE in existing hardware) and have internal management interfaces (such as management, USB, and console ports) to support chassis networking functions. | FB//EXA MetaData Node System Chassis FIOM
6 | 30 | Pure Storage FlashBlade FB-S500R1 blade | FB//EXA MetaData Node System | Pure Storage FlashBlade FB-S500 is a blade-level component used in the FlashBlade//S scale-out unified fast file and object storage platform. It is one of the compute-and-storage blades that populate a FlashBlade//S 5U chassis, delivering high performance for demanding unstructured workloads such as analytics, AI, machine learning, and large-scale file/object stores. A chassis can hold up to 10 blades, each of which connects to Pure's DirectFlash® Modules (DFMs) and the system's internal fabric to provide throughput, capacity, and low-latency access across the cluster. The S500 blade emphasizes extreme performance and works with multiple DFMs per blade to scale I/O and capacity. FlashBlade//S systems support modular expansion of blades and DFMs to meet evolving performance and capacity needs. | FB//EXA MetaData Node System blades

Processing Element Notes

No virtualization layers were used; all elements operated on bare-metal hardware and were statically assigned for test reproducibility.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Each of the Supermicro ASG-1115S-NE316R host initiators was equipped with 192 GB (197629740 kB) of DDR5 system memory. | 188.4744 | 60 | V | 11308
Each of the Supermicro SYS-621C-TN12R host initiators was equipped with 1024 GB of DDR5 system memory, reduced to 198752M via GRUB (GRUB_CMDLINE_LINUX_DEFAULT='quiet splash mem=198752M'). | 194.0937 | 20 | V | 3881
Each of the Supermicro ASG-1115S-NE316R data nodes was equipped with 192 GB (197629740 kB) of DDR5 system memory. | 188.4744 | 30 | V | 5654
Grand Total Memory Gibibytes: 20844

Memory Notes

All 80 host initiators had usable memory configurations similar to the following:

# free -h
               total        used        free      shared  buff/cache   available
Mem:           188Gi        10Gi       168Gi       6.5Mi        10Gi       177Gi
Swap:          8.0Gi          0B       8.0Gi
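
The mem= cap noted in the Memory table is applied on Ubuntu through GRUB. The following is a sketch of the standard procedure, assuming the stock /etc/default/grub layout:

# Cap usable RAM at 198752M via the kernel mem= boot parameter.
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=198752M"/' /etc/default/grub
sudo update-grub
sudo reboot

# After reboot, confirm the cap:
free -h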

Stable Storage

The system uses an mdadm RAID10-style (RAID0-over-RAID1) configuration of mirrored NVMe drive pairs striped across eight devices with an XFS filesystem on top, providing data protection through mirroring and yielding ~50% usable capacity (≈866 TB usable from 1843.5 TB raw) after accounting for RAID and filesystem overhead.
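
A quick arithmetic check of those figures, using the per-node capacities listed in the bill of materials:

# 30 data nodes x 61.45 TB raw per node:
echo '30 * 61.45' | bc    # 1843.50 TB raw
# Mirroring halves raw capacity; RAID metadata, XFS structures, and
# alignment overhead account for the remainder (28.88 TB usable per node):
echo '30 * 28.88' | bc    # 866.40 TB usable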

Solution Under Test Configuration Notes

Details of data network switches:

NVIDIA SN5600 spine data network switch #1: NVIDIA SN5600 Ethernet switch (64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of the test, 61 of 64 front-panel OSFP ports are in use and configured for 800 GbE operation. Of the 64 front-panel OSFP ports, 57 ports are administratively up and operationally up at 800 GbE. 4 ports are administratively down and operationally down. The switch retains 3 front-panel OSFP ports of unused capacity.

NVIDIA SN5600 spine data network switch #2: NVIDIA SN5600 Ethernet switch (64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of the test, 61 of 64 front-panel OSFP ports are in use and configured for 800 GbE operation. Of the 64 front-panel OSFP ports, 56 ports are administratively up and operationally up at 800 GbE. 1 port is administratively up at 800 GbE but operationally down. 4 ports are administratively down and operationally down. The switch retains 3 front-panel OSFP ports of unused capacity.

NVIDIA SN5600 leaf data network switch #1: NVIDIA SN5600 Ethernet switch (64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of the test, 36 of 64 front-panel OSFP ports are in use. 20 ports are configured in a 2 × 400 GbE breakout configuration. 16 ports operate at native 800 GbE line rate. The remaining 28 front-panel OSFP ports are unused and available for expansion.

NVIDIA SN5600 leaf data network switch #2: NVIDIA SN5600 Ethernet switch (64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of the test, 38 of 64 front-panel OSFP ports are in use. 22 ports are configured in a 2 × 400 GbE breakout configuration. 16 ports operate at native 800 GbE line rate. The remaining 26 front-panel OSFP ports are unused and available for expansion.

NVIDIA SN5600 leaf data network switch #3: NVIDIA SN5600 Ethernet switch (64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of the test, 36 of 64 front-panel OSFP ports are in use. 20 ports are configured in a 2 × 400 GbE breakout configuration. 16 ports operate at native 800 GbE line rate. The remaining 28 front-panel OSFP ports are unused and available for expansion.

NVIDIA SN5600 leaf data network switch #4: NVIDIA SN5600 Ethernet switch (64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of the test, 36 of 64 front-panel OSFP ports are in use. 20 ports are configured in a 2 × 400 GbE breakout configuration. 16 ports operate at native 800 GbE line rate. One 400 GbE breakout lane is administratively up but operationally down. The remaining 28 front-panel OSFP ports are unused and available for expansion.

NVIDIA SN5600 leaf data network switch #5: NVIDIA SN5600 Ethernet switch (64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of the test, 58 of 64 front-panel OSFP ports are in use. In-use ports consist of 40 × 400 GbE breakout lanes (20 × 2 × 400 GbE) and 16 × 800 GbE ports operating at native speed. 1 additional 800 GbE port is administratively up but operationally down. 6 front-panel OSFP ports are unused and available as unused capacity.

NVIDIA SN5600 leaf data network switch #6: NVIDIA SN5600 Ethernet switch (64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of the test, 58 of 64 front-panel OSFP ports are in use. In-use ports consist of 40 × 400 GbE breakout lanes (20 × 2 × 400 GbE) and 28 × 800 GbE ports operating at native speed. All in-use ports are administratively up and operationally up. 6 front-panel OSFP ports are unused and available as unused capacity.

Other Solution Notes

None

Dataflow

The namespace and filesystem metadata are served by a Metadata Node system that clients mount via NFSv4.1 over TCP. While the metadata service provides data layout information, host initiators communicate directly with the data nodes using pNFS semantics and RDMA for high-performance data access.
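
Whether clients actually take the direct data path can be observed from client-side NFSv4.1 operation counters: non-zero layout operations in /proc/self/mountstats indicate that the Metadata Node system is granting pNFS layouts. This is an illustrative check, not output captured from the test.

# Non-zero LAYOUTGET counts mean layouts are being issued and I/O is
# going directly to the data nodes:
grep -E 'LAYOUTGET|LAYOUTRETURN' /proc/self/mountstats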

Other Notes

The benchmark was executed using the SPECstorage Solution 2020_ai_image workload profile (version 2564) with default warmup (900s) and measurement (300s) durations.

All tests were performed using GA-only firmware (Purity//FB 4.6.4) with no patches.

The AI_IMAGE workload successfully scaled to 6300 jobs under valid SPECstorage Solution 2020_ai_image conditions.

For more information on FlashBlade//EXA, please look at the following URLs:

Pure.AI ( https://www.pure.ai/ )

Meet FlashBlade//EXA. More AI. Less Waiting. ( https://www.youtube.com/watch?v=Df4I-YgEpaY )

Tackling Myths Around AI Data and FlashBlade//EXA ( https://www.youtube.com/watch?v=rBPHCuS6yKQ )

Inside Pure Storage’s FlashBlade//EXA: Scaling AI Without Bottlenecks - Six Five In The Booth ( https://www.youtube.com/watch?v=YDkt43n7E3A )

This is an SSD?! - PureStorage FlashBlade Tour ( https://www.youtube.com/watch?v=L4AKeW0Y-F0 )

Technical Deep Dive on FlashBlade//S ( https://www.purestorage.com/video/technical-deep-dive-on-flashblade/6307195175112.html )

Other Report Notes

None


Generated on Wed Feb 18 10:54:46 2026 by SpecReport
Copyright © 2016-2026 Standard Performance Evaluation Corporation