Hammerspace's 'Standard NFS' Achievement: A Technical Reality Check

Hammerspace achieved #18 on the IO500 10-node production list using pNFS v4.2, claiming this proves 'standard protocols' can deliver HPC-class performance. The technology is legitimate; the 'standard' positioning is misleading.

Hammerspace announced in November 2025 that it achieved #18 on the IO500 10-node production benchmark using “standard Linux, upstream NFSv4.2 client, and commodity NVMe flash.” The company positioned this as proving that standard protocols can deliver HPC-class performance without proprietary parallel file systems.

The achievement is real. The framing requires clarification: pNFS v4.2 with Flex Files is not “standard NFS,” and Hammerspace’s architecture is not simply Linux + NFS.

What Hammerspace Actually Uses

Hammerspace’s architecture consists of:

  1. Global distributed metadata plane: A metadata control system that presents a single, unified namespace across multiple data silos
  2. pNFS v4.2 with Flex Files protocol: This is parallel NFS, not standard sequential NFS
  3. Data Service eXtensions (DSX) nodes: Specialized nodes that handle I/O operations, replication, and data movement, scaling up to 60+ nodes per cluster

The architecture is more accurately described as “standard protocol + distributed metadata control plane + specialized I/O nodes” rather than “Linux + standard NFS.”

The Technical Achievement

To be fair, what Hammerspace accomplished is legitimate:

pNFS with the Flex Files layout enables clients to read and write multiple storage devices directly in parallel, bypassing the single-server data path that limits NFS v3, v4.0, and v4.1 deployments without pNFS. This is a genuine protocol capability: pNFS was introduced with NFSv4.1 (RFC 5661), and the Flex Files layout type was standardized in RFC 8435. It directly addresses NFS’s historical scalability limitations.
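To make the data-path difference concrete, here is a minimal conceptual sketch of the pNFS read flow: one layout request to a metadata service, then parallel reads against the data servers named in that layout. The class and function names (MetadataServer, DataServer, layoutget, pnfs_read) are illustrative stand-ins, not Hammerspace code and not the actual RPC protocol.

```python
# Conceptual sketch of the pNFS read path, not the wire protocol.
# All names here are illustrative stand-ins.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


class DataServer:
    """Stand-in for an NVMe-backed data node that serves byte ranges directly."""

    def __init__(self, name: str, blob: bytes):
        self.name = name
        self.blob = blob

    def read(self, offset: int, length: int) -> bytes:
        return self.blob[offset:offset + length]


@dataclass
class Segment:
    data_server: DataServer   # which device holds this byte range
    offset: int
    length: int


class MetadataServer:
    """Stand-in for the metadata plane: hands out layouts, serves no file data."""

    def __init__(self, layout: list):
        self._layout = layout

    def layoutget(self, path: str) -> list:
        # Analogous to the pNFS LAYOUTGET operation.
        return self._layout


def pnfs_read(mds: MetadataServer, path: str) -> bytes:
    segments = mds.layoutget(path)        # one metadata round trip
    with ThreadPoolExecutor() as pool:    # data then flows from many servers at once
        chunks = pool.map(lambda s: s.data_server.read(s.offset, s.length), segments)
    return b"".join(chunks)


if __name__ == "__main__":
    ds1 = DataServer("ds1", b"hello ")
    ds2 = DataServer("ds2", b"parallel world")
    layout = [Segment(ds1, 0, 6), Segment(ds2, 0, 14)]
    print(pnfs_read(MetadataServer(layout), "/demo/file"))  # b'hello parallel world'
```

The point of the sketch is structural: the metadata service never sits in the data path, which is the property the bandwidth numbers depend on.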

NFSv4’s compound operations, available since NFSv4.0 and carried forward into v4.2, reduce metadata latency by batching several operations into a single round trip. This helps workloads that issue rapid sequences of metadata requests.
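A back-of-the-envelope illustration of why batching matters follows; the round-trip time, the operation mix, and the file count are assumed values, and only the arithmetic is the point.

```python
# Illustrative round-trip arithmetic for NFSv4 compound operations.
# rtt_ms, ops_per_file, and files are assumed values, not measurements.
rtt_ms = 0.5          # assumed client-to-server round-trip time, in milliseconds
ops_per_file = 3      # e.g., LOOKUP + GETATTR + READ issued for each file

separate_latency_ms = ops_per_file * rtt_ms   # one round trip per operation
compound_latency_ms = 1 * rtt_ms              # the same operations in one COMPOUND

files = 100_000       # a metadata-heavy scan
print(f"separate requests: {files * separate_latency_ms / 1000:.0f} s")
print(f"one COMPOUND per file: {files * compound_latency_ms / 1000:.0f} s")
# separate requests: 150 s
# one COMPOUND per file: 50 s
```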

Hammerspace’s implementation demonstrates that an NFS-based system can achieve benchmark performance previously associated only with purpose-built parallel file systems such as Lustre, WEKA, or VAST Data.

What the IO500 Ranking Actually Shows

Hammerspace achieved #18 overall in the IO500 10-node production category. In context:

A #18 ranking is respectable and demonstrates HPC-class performance. It is not “fastest” in any absolute sense; it shows that pNFS v4.2 can deliver enough performance to enter the competitive field.

The Hardware Configuration Comparison

Hammerspace claims to deliver “2x the IO500 10-node challenge score and 3x the bandwidth of VAST Data using 9 nodes versus VAST’s 128 nodes.”

This comparison spans markedly different deployment scales: a 9-node Hammerspace configuration against a 128-node VAST deployment.

When comparing systems at dramatically different scales, efficiency (performance per unit of infrastructure) differs from absolute performance. Hammerspace’s efficiency advantage is relevant for deployments that don’t require 128-node scale.
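Normalizing the claimed figures per storage node makes the distinction concrete. The sketch below uses only the numbers quoted from Hammerspace’s own claim (not independently verified results), so it quantifies the efficiency framing rather than establishing it.

```python
# Per-node normalization of the claimed comparison; the ratios and node counts
# come from the marketing claim quoted above, not independent measurement.
hs_nodes, vast_nodes = 9, 128
score_ratio = 2.0        # "2x the IO500 10-node challenge score"
bandwidth_ratio = 3.0    # "3x the bandwidth"

per_node_score = score_ratio * (vast_nodes / hs_nodes)
per_node_bandwidth = bandwidth_ratio * (vast_nodes / hs_nodes)

print(f"claimed per-node score advantage:     ~{per_node_score:.0f}x")
print(f"claimed per-node bandwidth advantage: ~{per_node_bandwidth:.0f}x")
# claimed per-node score advantage:     ~28x
# claimed per-node bandwidth advantage: ~43x
```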

For organizations deploying at VAST’s scale (128 nodes or more), the comparison may not apply. Per-node efficiency advantages at 9 nodes do not necessarily extrapolate to systems running 5-10x larger.

pNFS v4.2 Is Not “Standard NFS”

Hammerspace’s marketing describes this as a “standard” protocol achievement. Technically, pNFS with the Flex Files layout is an optional extension of the NFSv4 protocol family: pNFS was added in NFSv4.1 (RFC 5661), NFSv4.2 is specified in RFC 7862, and the Flex Files layout type in RFC 8435.

Standard NFS in most HPC deployments means NFS v3 or NFS v4.0/v4.1 without pNFS, all of which funnel every read and write through a single server endpoint, the documented scalability limitation that pNFS was designed to remove.

Using pNFS v4.2 is opting into a specific advanced implementation, not using standard NFS.
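One practical way to see which variant a client actually negotiated is to read the Linux NFS client’s mount statistics. The sketch below parses /proc/self/mountstats and reports the negotiated protocol version plus whether any pNFS layout operations have been issued; the field positions are assumptions based on the current mountstats text format, so treat it as a diagnostic sketch rather than a supported API.

```python
# Sketch: report negotiated NFS version and pNFS layout activity per mount.
# Relies on the /proc/self/mountstats text format of the Linux NFS client;
# the field layout is an assumption to verify against your kernel version.
import re
from pathlib import Path

LAYOUT_OPS = ("LAYOUTGET", "LAYOUTCOMMIT", "LAYOUTRETURN")


def nfs_mount_report(stats_path: str = "/proc/self/mountstats") -> list:
    report, current = [], None
    for raw in Path(stats_path).read_text().splitlines():
        line = raw.strip()
        if line.startswith("device ") and " fstype nfs" in line:
            # e.g. "device srv:/export mounted on /mnt/data with fstype nfs4 ..."
            current = {"mount": line.split(" mounted on ")[1].split(" with ")[0],
                       "vers": "unknown", "layout_ops": 0}
            report.append(current)
        elif current and line.startswith("opts:"):
            match = re.search(r"vers=([\d.]+)", line)
            if match:
                current["vers"] = match.group(1)
        elif current and line.split(":")[0] in LAYOUT_OPS:
            # The first numeric field of a per-op line is the operation count.
            current["layout_ops"] += int(line.split(":")[1].split()[0])
    return report


if __name__ == "__main__":
    for m in nfs_mount_report():
        layouts = "pNFS layout ops seen" if m["layout_ops"] else "no layout ops seen"
        print(f"{m['mount']}: NFS v{m['vers']}, {layouts}")
```

On a plain NFSv3 or NFSv4.0 mount the report shows no layout operations at all, which is the gap between “standard NFS” and what Hammerspace benchmarked.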

The distinction matters because:

  1. Deploying pNFS v4.2 Flex Files requires specific infrastructure support (storage devices must expose pNFS layouts)
  2. Client support varies by Linux distribution and kernel version
  3. Troubleshooting complex pNFS scenarios requires different expertise than standard NFS
  4. Operational characteristics differ from standard NFS (e.g., handling storage device failures)
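Item 2 in the list above can be checked mechanically. The sketch below reports the running kernel and looks for the Flex Files layout driver; the module name nfs_layout_flexfiles matches mainline Linux kernels, but distributions may build it in or omit it, so a negative result means “verify manually,” not “unsupported.”

```python
# Sketch: check whether this client's kernel ships the pNFS Flex Files layout
# driver. The module name matches mainline Linux; distributions may compile it
# into the kernel or package it differently, so treat the result as advisory.
import platform
import subprocess

MODULE = "nfs_layout_flexfiles"


def flexfiles_client_check() -> bool:
    print(f"running kernel: {platform.release()}")
    try:
        probe = subprocess.run(["modinfo", MODULE], capture_output=True, text=True)
    except FileNotFoundError:
        print("modinfo not found; inspect your kernel config manually")
        return False
    if probe.returncode == 0:
        print(f"{MODULE}: module metadata found")
        return True
    print(f"{MODULE}: not found as a module (may be built in, or absent)")
    return False


if __name__ == "__main__":
    flexfiles_client_check()
```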

What Hammerspace’s Approach Actually Solves

Hammerspace’s value proposition is real but narrower than the marketing suggests:

For organizations that need distributed metadata across multiple storage silos without deploying a dedicated parallel file system, pNFS v4.2 can be simpler to operate than Lustre or a custom solution.

For smaller deployments (9-10 nodes), Hammerspace demonstrates competitive performance without the operational complexity of large-scale Lustre deployments.

For multi-site environments where global namespace is required, Hammerspace’s distributed metadata synchronization adds value beyond what standard NFS provides.

What It Does Not Solve

Hammerspace’s approach has limitations that purpose-built parallel file systems address differently:

Metadata latency: While compound operations and pNFS reduce round trips, metadata-heavy workloads (millions of small files, rapid stat() calls) may still be better served by file systems whose metadata paths are built around in-memory structures.
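A crude way to feel this limitation on a given mount is to time a burst of stat() calls. The sketch below reports metadata operations per second for whatever directory you point it at; the directory is a placeholder, and NFS attribute caching makes warm numbers look far better than a cold, metadata-heavy workload would, so treat it as a probe rather than a benchmark.

```python
# Crude metadata probe: time repeated stat() calls over a directory's entries.
# The target directory is a placeholder argument; NFS attribute caching means
# repeated passes measure mostly cached lookups.
import os
import sys
import time


def stat_rate(directory: str, passes: int = 5) -> float:
    entries = [entry.path for entry in os.scandir(directory)]
    if not entries:
        raise SystemExit(f"nothing to stat in {directory}")
    start = time.perf_counter()
    for _ in range(passes):
        for path in entries:
            os.stat(path)
    elapsed = time.perf_counter() - start
    return (passes * len(entries)) / elapsed


if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    print(f"{stat_rate(target):,.0f} stat() calls per second on {target}")
```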

Consistency semantics: NFS provides close-to-open cache consistency, which is weaker than the coherence guarantees some parallel file systems enforce. For workloads requiring strict consistency across concurrent distributed access, this matters.
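Concretely, close-to-open consistency guarantees that a client opening a file after another client has closed it sees the closed file’s contents; it guarantees much less about reads that overlap a writer’s open. The sketch below contrasts the two patterns; the path is a hypothetical placeholder on a shared NFS mount, and observing the difference for real requires two separate client machines.

```python
# Close-to-open consistency, illustrated as two access patterns. SHARED is a
# hypothetical path on an NFS mount; writer and readers run on different clients.
SHARED = "/mnt/nfs/shared/result.txt"


def writer():
    # Client A: write, then close. close() flushes dirty data to the server,
    # which is the event the close-to-open guarantee is anchored on.
    with open(SHARED, "w") as f:
        f.write("done\n")


def reader_after_close():
    # Client B, run after client A's close(): open() revalidates the cache,
    # so this read is guaranteed to observe "done".
    with open(SHARED) as f:
        return f.read()


def reader_while_writer_open():
    # Client B polling while client A still holds the file open: attribute and
    # data caching mean the update may not be visible until a later
    # revalidation, so this pattern needs file locking, not plain reads.
    with open(SHARED) as f:
        return f.read()  # may legitimately return stale or partial data
```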

Specialized optimization: Lustre, WEKA, and VAST Data each include optimizations for specific workload patterns (e.g., WEKA’s 4KB striping for small random I/O). The pNFS protocol itself does not adapt its layout strategy to the workload; any such tuning has to come from the storage implementation behind it.

The Series B Valuation Context

Hammerspace raised $100 million in Series B funding (April 2025) with Altimeter Capital leading, at a company valuation of approximately $1 billion based on public reporting.

This valuation reflects investor confidence in the market opportunity for distributed data environments and multi-site storage. The IO500 benchmark achievement, while solid, is a component of a larger business case including enterprise customer adoption and market timing.

Why the Positioning Matters

Calling pNFS v4.2 “standard NFS” performs two rhetorical functions:

  1. Implies simplicity: “Standard” suggests no specialized implementation is required, when pNFS v4.2 deployment requires specific infrastructure and client support
  2. Distances from proprietary alternatives: Emphasizing “standard” and “open” protocols is valuable positioning against proprietary competitors, but obscures that Hammerspace adds a significant proprietary metadata layer on top

The accurate positioning would be: “Distributed metadata system built on pNFS v4.2 protocol with specialized infrastructure nodes delivers competitive HPC performance.”

This is less marketing-friendly but technically precise.

Conclusion

Hammerspace’s IO500 achievement demonstrates that NFSv4.2 with pNFS Flex Files and a distributed metadata layer can achieve HPC-class performance. This is a genuine technical accomplishment and validates the pNFS v4.2 protocol design.

However, this is not equivalent to proving that “standard NFS” solves HPC storage problems. pNFS v4.2 is an advanced configuration that requires specific infrastructure and different operational expertise than traditional NFS, and it carries deployment constraints that standard sequential NFS does not.

For the specific use case of distributed multi-site environments with moderate per-site performance requirements, Hammerspace’s approach may be preferable to the operational complexity of large Lustre deployments or the cost of proprietary solutions.

For organizations with high-scale centralized HPC clusters (100+ nodes), the evaluation should focus on whether pNFS v4.2 efficiency at smaller scales translates to their deployment model, not on the claim that “standard protocols” prove proprietary file systems unnecessary.