NetApp AFX's 'Parallel File System Performance' Claims: The Benchmark They Won't Submit
NetApp claims AFX delivers 'all the performance benefits of parallel file systems' while refusing to submit to IO500—the industry standard benchmark for parallel file system performance. Here's why that matters.
NetApp’s marketing for AFX makes a bold claim: the disaggregated ONTAP architecture delivers “all the performance benefits of parallel file systems” while being “simple, secure, and fully integrated” [1]. Jeff Baxter, NetApp’s VP of Product Marketing, called it “a revitalization of NAS” that provides “all those benefits of a parallel file system from an extensibility and scalability and granularity perspective, but doing so with the parallel NFS standards” [2].
This is a testable claim. The IO500 benchmark exists specifically to measure parallel file system performance. It’s the industry standard, maintained by the HPC community, and submissions span DAOS, DDN, WEKA, and VAST Data. If NetApp AFX truly delivers parallel file system performance, the proof would be straightforward: submit to IO500 and let the numbers speak.
NetApp hasn’t submitted AFX to IO500. The absence is telling.
What IO500 Measures
IO500 evaluates storage systems across the metrics that actually matter for parallel workloads: sequential read/write bandwidth, random read/write IOPS, metadata operations (file creation, stat, deletion), and find operations (searching directory trees). The benchmark produces a single composite score that reflects balanced performance across all these dimensions [3].
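The composite score can be sketched in a few lines: IO500 takes the geometric mean of a bandwidth score (itself a geometric mean of the ior phase results, in GiB/s) and a metadata score (a geometric mean of the mdtest and find rates, in kIOP/s). The phase numbers below are invented placeholders, not data from any real submission.

```python
from math import prod

def geomean(xs):
    """Geometric mean of a list of positive numbers."""
    return prod(xs) ** (1 / len(xs))

# Hypothetical phase results for one submission (illustrative only):
bandwidth_gib_s = [55.0, 48.0, 12.0, 9.5]                   # ior easy/hard write and read
metadata_kiops  = [310.0, 95.0, 180.0, 60.0, 250.0, 40.0]   # mdtest and find phases

bw_score = geomean(bandwidth_gib_s)          # GiB/s
md_score = geomean(metadata_kiops)           # kIOP/s
io500_score = (bw_score * md_score) ** 0.5   # the published composite
print(round(bw_score, 2), round(md_score, 2), round(io500_score, 2))
```

The geometric mean is what makes the score hard to game: a system that excels at streaming bandwidth but collapses on metadata gets dragged down accordingly.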
The SC25 results from November 2025 show where vendors actually stand [4]:
In the 10-Node Production list, DAOS systems at Argonne and LRZ lead with scores of 2,885 and 1,008 respectively. DDN EXAScaler follows at 348. WEKA appears at position 11 with a score of 106. VAST Data—a company StorageMath has criticized for unverifiable marketing—at least submitted, placing 17th with a score of 31. Hammerspace achieved position 18 as the “fastest NFS result ever recorded” using pNFS.
NetApp AFX appears nowhere on the list.
The Certification Conflation
NetApp does have a legitimate parallel file system story, but it’s not AFX. The NetApp EF600 combined with BeeGFS—an actual parallel file system—achieved DGX SuperPOD certification with each array delivering 76 GB/s read and 23 GB/s write performance [5]. BeeGFS is open-source parallel file system software that stripes data across multiple storage targets with client-side parallelism.
AFX is not BeeGFS. AFX runs ONTAP, NetApp’s traditional NAS operating system, with a disaggregated architecture that separates compute and storage tiers. When NetApp announces that “AFX is now certified for NVIDIA DGX SuperPOD,” the marketing implies continuity with their proven EF600+BeeGFS performance. But ONTAP and BeeGFS are fundamentally different architectures.
The DGX SuperPOD certification for AFX should include AFX-specific performance numbers comparable to what NetApp published for EF600+BeeGFS. The search for those numbers comes up empty. NetApp achieved certification—meeting NVIDIA’s minimum threshold—but hasn’t published the benchmark data that would allow comparison to their own parallel file system offering.
This is certification theater: achieve a credential with one product, imply that credential transfers to a different product.
The Architecture Gap
The distinction between NFS (even with pNFS extensions) and true parallel file systems isn’t marketing—it’s architecture.
True parallel file systems like Lustre, GPFS, BeeGFS, and WEKA implement client-side striping. The client software understands data layout and issues I/O directly to multiple storage targets simultaneously. A single read operation might pull data from eight or sixteen servers in parallel. Metadata operations use specialized metadata servers optimized for that workload. The client kernel module or FUSE implementation handles the complexity.
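As a rough illustration of what client-side striping means (a simplified, Lustre-like round-robin layout sketch, not actual client code from any of these systems), here is how a single large read fans out across storage targets:

```python
# Simplified round-robin striping: a file is split into fixed-size chunks
# assigned to storage targets in rotation, so one read hits many servers.
STRIPE_SIZE = 1 << 20   # 1 MiB per chunk (illustrative choice)
STRIPE_COUNT = 8        # number of storage targets in this layout

def chunks_for_read(offset, length):
    """Yield (target, target_offset, n_bytes) for each piece of a read."""
    end = offset + length
    while offset < end:
        chunk = offset // STRIPE_SIZE
        within = offset % STRIPE_SIZE
        n = min(STRIPE_SIZE - within, end - offset)
        target = chunk % STRIPE_COUNT
        target_offset = (chunk // STRIPE_COUNT) * STRIPE_SIZE + within
        yield (target, target_offset, n)
        offset += n

# A single 16 MiB read decomposes into 16 chunks spread across all 8 targets,
# each of which the client can issue concurrently.
pieces = list(chunks_for_read(0, 16 << 20))
targets = {t for t, _, _ in pieces}
print(len(pieces), sorted(targets))
```

The client, not the server, owns this layout math, which is why a single reader can saturate many storage targets at once.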
NFSv4.2 with pNFS (what AFX uses) provides layout hints that enable parallel data access, but the architecture differs fundamentally. The NFS protocol’s semantics, the metadata path, the consistency model—all were designed for the traditional NAS use case and extended for parallelism rather than built for it from the ground up.
The IO500 results quantify this gap. Hammerspace’s pNFS result—the fastest NFS submission ever—still placed only 18th on the 10-Node Production list, roughly two orders of magnitude behind the leading DAOS score and an order of magnitude behind DDN EXAScaler. Being the fastest NFS implementation is an achievement, but it’s not “all the performance benefits of parallel file systems.”
If AFX performs better than Hammerspace’s pNFS, NetApp should prove it by submitting to IO500. If AFX performs worse, the “parallel file system performance” marketing is misleading.
The 4 TB/s Claim
NetApp claims AFX achieves “up to 4 TB/s cluster throughput (reads)” with 128 nodes [1]. Let’s examine this number.
4 TB/s divided by 128 nodes equals 31.25 GB/s per node. This is plausible for sequential large-block reads—a storage node with modern NVMe drives and 400G Ethernet can achieve this throughput. The math checks out for that specific operation.
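The arithmetic, assuming decimal units throughout:

```python
# Back-of-envelope check on the per-node figure behind the 4 TB/s claim.
cluster_read_tb_s = 4.0                 # claimed aggregate read throughput
nodes = 128                             # claimed maximum cluster size
per_node_gb_s = cluster_read_tb_s * 1000 / nodes
print(per_node_gb_s)  # 31.25
```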
But parallel file system performance isn’t just sequential bandwidth. IO500 measures metadata operations, random I/O, and mixed workloads because those reflect how applications actually use storage. A system optimized for streaming reads may struggle with the small-file random I/O patterns common in AI training pipelines or the metadata-heavy operations in software builds.
The 4 TB/s number is almost certainly measured under ideal conditions: large sequential reads with optimal alignment and queue depth. What’s the performance for the AI_IMAGE workload pattern—millions of small files with random access? What’s the metadata operation rate for software build patterns? These questions have answers in SPECstorage and IO500 submissions. NetApp hasn’t provided them for AFX.
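For contrast with the headline sequential number, here is a crude single-client sketch of a metadata-heavy pattern (create, stat, delete many small files), loosely in the spirit of the mdtest phases IO500 runs. This is not a substitute for a real benchmark run; the rate it reports depends entirely on the filesystem behind the temp directory.

```python
import os
import tempfile
import time

def metadata_ops_per_sec(n=2000):
    """Rough metadata rate: create, stat, and delete n empty files."""
    with tempfile.TemporaryDirectory() as d:
        paths = [os.path.join(d, f"f{i}") for i in range(n)]
        t0 = time.perf_counter()
        for p in paths:
            open(p, "w").close()   # create
        for p in paths:
            os.stat(p)             # stat
        for p in paths:
            os.remove(p)           # delete
        dt = time.perf_counter() - t0
    return 3 * n / dt              # total metadata operations per second

print(f"{metadata_ops_per_sec():.0f} metadata ops/s")
```

On a local SSD this loop runs fast; on a network filesystem every operation is a round trip, which is exactly why parallel file systems dedicate servers to metadata.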
The “Simplicity” Misdirection
NetApp contrasts AFX’s “non-stop operations” against parallel file systems that allegedly require “outages and downtime for upgrades and expansions” [1]. This framing misrepresents how parallel file systems work in production.
The world’s largest HPC centers—Oak Ridge, Argonne, Los Alamos, NERSC—run parallel file systems at exabyte scale supporting thousands of researchers. These systems achieve excellent availability through redundant metadata servers, distributed storage targets, and rolling upgrade capabilities. Lustre clusters routinely perform maintenance without taking the entire filesystem offline.
ONTAP does have mature enterprise features for non-disruptive upgrades, snapshots, and high availability. These capabilities have value. But positioning parallel file systems as operationally inferior ignores decades of production deployment at scales exceeding anything ONTAP has demonstrated.
The honest comparison: ONTAP offers better enterprise data management features (integrated snapshots, replication, protocol flexibility). Parallel file systems offer better raw performance at scale. Organizations should choose based on which trade-off matches their workload. Claiming ONTAP delivers “all the performance benefits” while being simpler isn’t a trade-off—it’s marketing fiction.
What Verification Would Look Like
WEKA provides the standard NetApp should meet. When WEKA claimed SPECstorage leadership, it submitted to an audited benchmark program with full configuration disclosure [6]. Anyone can access spec.org, examine the methodology, and compare results across vendors. That’s verification.
NetApp participates in benchmarks selectively. They’ve submitted ONTAP results to SPECstorage for EDA workloads, achieving 6,300 EDA_BLENDED job sets [7]. This demonstrates willingness to face scrutiny for specific claims. The question is why AFX’s “parallel file system performance” claim doesn’t receive the same treatment.
IO500 submission is straightforward. The benchmark is open, the rules are published, and dozens of vendors participate [3]. DDN, WEKA, VAST, IBM, Quobyte, and even NFS vendors like Hammerspace and Qumulo have submitted. The infrastructure for verification exists. NetApp chooses not to use it for this specific claim.
The Pattern
This is worse than making unverifiable claims. NetApp is making a claim that could be verified—“parallel file system performance”—and declining to submit to the benchmark that would verify it.
When VAST Data claims “25% faster than Iceberg,” we can’t verify it because no standard Iceberg benchmark exists. The claim is unverifiable by nature. When NetApp claims “parallel file system performance,” the IO500 benchmark exists specifically to verify such claims. The claim is unverifiable by choice.
That choice tells us something. Vendors submit to benchmarks when the results support their marketing. They avoid benchmarks when the results wouldn’t. NetApp’s IO500 absence suggests AFX’s parallel workload performance doesn’t match the claim.
For Organizations Evaluating AFX
If you’re considering NetApp AFX for workloads that would traditionally use parallel file systems, ask NetApp directly:
What is AFX’s IO500 score? If they haven’t submitted, ask why not. If they claim the benchmark doesn’t apply, ask what benchmark does measure the “parallel file system performance” they’re claiming.
What are the specific performance numbers for the DGX SuperPOD certification? Not the EF600+BeeGFS numbers—the AFX numbers. If AFX matches EF600+BeeGFS performance, those numbers should be publishable.
How does AFX perform on small-file random I/O patterns typical of AI training? The 4 TB/s sequential read claim doesn’t answer this question. AI training pipelines often access millions of small image files with random patterns.
What’s the metadata operation rate? Parallel file systems optimize for metadata-heavy workloads through dedicated metadata servers. How does ONTAP’s metadata performance compare?
If NetApp provides verifiable answers with published methodology, evaluate accordingly. If the answers are “trust us” or “our internal testing shows,” apply appropriate skepticism. The benchmark infrastructure exists. Vendor participation is a choice.
The Honest Claim
NetApp AFX is a disaggregated scale-out NAS with enterprise data management features. It supports pNFS for improved parallel access compared to traditional NFS. It scales to 128 nodes and targets AI workloads that need ONTAP’s operational capabilities.
That’s a legitimate value proposition for organizations that prioritize enterprise features over raw performance. Many workloads benefit more from integrated snapshots, replication, and multi-protocol support than from maximum I/O throughput.
The dishonest claim is “all the performance benefits of parallel file systems.” That claim has a verification mechanism—IO500—and NetApp hasn’t submitted. Until they do, the claim remains marketing rather than engineering.
References
[1] NetApp, “NetApp AFX: Data infrastructure for enterprise AI.” https://www.netapp.com/afx/
[2] HPCwire, “NetApp Takes On Parallel File Systems with Disaggregated NAS Cluster,” October 14, 2025. https://www.hpcwire.com/2025/10/14/netapp-takes-on-parallel-file-systems-with-disaggregated-nas-cluster/
[3] IO500, “Submission Rules and Methodology.” https://io500.org/rules/submission
[4] StorageNewsletter, “SC25: The IO500 Lists Show Interesting Results as Usual,” November 25, 2025. https://www.storagenewsletter.com/2025/11/25/sc25-the-io500-lists-show-interesting-results-as-usual/
[5] NetApp, “NetApp EF600 NVMe combined with BeeGFS file system certified for NVIDIA DGX SuperPOD.” https://www.netapp.com/newsroom/press-releases/news-rel-20250318-592455/
[6] Weka, “WEKA Dominates SPECstorage Benchmark, Shattering Records.” https://www.weka.io/blog/cloud-storage/weka-dominates-specstorage-benchmark-shattering-2020-records/
[7] NetApp Blog, “SPEC Benchmark: NetApp Claims Top Spot in EDA Solutions.” https://www.netapp.com/blog/spec-bechmark-eda-solutions-top-spot/