Pure Storage's Recovery Speed Claims: Real Numbers, Missing Context

Pure Storage publishes actual database recovery benchmarks — 60 TB/hr for Oracle RMAN, 113 TB/hr aggregate for SQL Server — and argues that recovery speed, not backup speed, defines data protection. The numbers are plausible. The thesis is correct. But the methodology gaps and missing comparisons leave important questions unanswered.

Pure Storage published “Why Data Protection is Defined by Recovery Speed” in February 2026, arguing that the storage industry’s obsession with backup windows is misguided. The real metric that matters, Pure claims, is how fast you can restore when disaster strikes. Unlike most vendor blog posts that traffic in vague superlatives, Pure actually provides specific performance numbers: over 60 TB/hr restore throughput for Oracle RMAN and over 113 TB/hr aggregate restore throughput for SQL Server across eight concurrent instances.

Credit where due: Pure showed their work. Now let’s check it.

The Thesis: Correct and Underappreciated

Pure Storage’s core argument deserves credit because it identifies a genuine problem in enterprise data protection. Organizations historically optimize for backup windows — can we complete backups within the overnight maintenance window? — while treating recovery as an afterthought. The assumption is that restores are the inverse of backups: if you can back up a 10 TB database in 2 hours, surely you can restore it in roughly the same time.

This assumption is wrong, and Pure is right to challenge it. Restore operations are typically slower than backups for well-understood reasons. Backup reads from a live database sequentially and writes to a target. Restore must write data back, replay transaction logs, verify consistency, and bring the database online — a process that involves more random I/O, more metadata operations, and more computational overhead than the original backup. The “backup-recovery gap” is real.

For Oracle RMAN restores, the bottleneck is often the target storage’s write throughput and IOPS during the data file restoration phase, followed by the redo log application phase where random I/O dominates. For SQL Server, the restore process involves writing data pages back to the database files and then rolling forward transaction log backups. In both cases, the recovery path stresses storage differently than the backup path.

Recognizing that recovery performance is the metric that matters for business continuity is a legitimate insight. When the CFO asks “how long until we’re back online?”, the answer depends on restore speed, not backup speed.

The Numbers: Plausible, With Caveats

Pure reports over 60 TB/hr restore throughput for Oracle RMAN using distributed RAC channels on FlashBlade. This translates to approximately 16.7 GB/s sustained write throughput during the restore operation, implying a 10 TB database recovery in roughly 10 minutes.

For SQL Server, Pure reports over 113 TB/hr aggregate restore throughput across eight concurrent instances with compression enabled, and over 89 TB/hr without compression. Per-instance, that’s approximately 14.1 TB/hr (3.9 GB/s) with compression and 11.1 TB/hr (3.1 GB/s) without.

These numbers are plausible for a FlashBlade system with sufficient blade count and network bandwidth. FlashBlade//S can deliver 17+ GB/s per chassis in a dense configuration, and FlashBlade//E extends this further. The aggregate throughput of 31.4 GB/s for the SQL Server test would require a well-provisioned multi-chassis deployment, but it’s within the platform’s architectural capability.
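The unit conversions behind these figures are easy to verify. A quick sketch (decimal units, 1 TB = 1000 GB, which is how vendors quote throughput):

```python
# Sanity-check the arithmetic behind the published throughput figures.

def tb_per_hr_to_gb_per_s(tb_hr: float) -> float:
    """Convert TB/hr to sustained GB/s (decimal units)."""
    return tb_hr * 1000 / 3600

def restore_minutes(db_size_tb: float, rate_tb_hr: float) -> float:
    """Wall-clock restore time in minutes at a sustained rate."""
    return db_size_tb / rate_tb_hr * 60

print(round(tb_per_hr_to_gb_per_s(60), 1))       # Oracle RMAN: 16.7 GB/s
print(round(tb_per_hr_to_gb_per_s(113), 1))      # SQL Server aggregate: 31.4 GB/s
print(round(tb_per_hr_to_gb_per_s(113 / 8), 1))  # per instance: 3.9 GB/s
print(restore_minutes(10, 60))                   # 10 TB at 60 TB/hr: 10.0 minutes
```

The figures quoted in the article check out exactly against Pure's headline numbers.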

The compression result is interesting and internally consistent: compressed restore running faster (113 TB/hr) than uncompressed (89 TB/hr) makes sense because compressed backups transfer less data across the network, with the CPU overhead of decompression being offset by reduced I/O. This is a detail that wouldn’t appear in fabricated numbers — it reflects real-world behavior, which increases confidence in the reported results.
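The effect can be captured in a toy model: the restore runs at the speed of whichever stage is slower, moving compressed bytes off the backup target or decompressing them on the host. All parameters below (compression ratio, link speed, CPU decompression rate) are illustrative assumptions, not Pure's test conditions:

```python
# Toy model: compressed restores win when the wire is the bottleneck,
# because they move fewer physical bytes for the same logical data.
# Every number here is an assumption for illustration only.

def restore_rate_gbps(logical_gb: float, ratio: float,
                      wire_gbps: float, cpu_decomp_gbps: float) -> float:
    """Effective logical restore rate in GB/s; slower stage dominates."""
    physical_gb = logical_gb / ratio               # bytes actually read from target
    wire_s = physical_gb / wire_gbps               # time to move (compressed) data
    cpu_s = 0.0 if ratio == 1.0 else logical_gb / cpu_decomp_gbps
    return logical_gb / max(wire_s, cpu_s)

uncompressed = restore_rate_gbps(1000, ratio=1.0, wire_gbps=25, cpu_decomp_gbps=40)
compressed   = restore_rate_gbps(1000, ratio=2.0, wire_gbps=25, cpu_decomp_gbps=40)
print(uncompressed, compressed)  # 25.0 vs 40.0: compressed wins while CPU keeps up
```

The model also predicts when the advantage flips: once decompression CPU becomes the slower stage, compressed restores lose their edge, which is consistent with the gap Pure observed being configuration-dependent rather than universal.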

Pure tested using native T-SQL backup and restore of a 1 TB database to SMB file shares and S3 object storage on FlashBlade, scaling to eight concurrent instances for aggregate throughput measurement. The methodology measures real wall-clock time — database size divided by operation duration — rather than synthetic I/O rates. This is the right approach for data protection benchmarking, where total recovery time matters more than peak IOPS.

What’s Missing from the Methodology

Pure provides numbers but leaves several critical variables unspecified, limiting the reproducibility and comparability of the results.

The FlashBlade model and configuration aren’t disclosed. A 7-blade FlashBlade//S has different throughput characteristics than a 15-blade FlashBlade//S or a FlashBlade//E. Blade count, network connectivity (25GbE, 50GbE, 100GbE), and NFS/SMB mount configuration all affect achievable throughput. Without these details, a customer cannot predict whether their FlashBlade deployment will match the published numbers.

For Oracle RMAN, the channel count and parallelism settings aren’t specified. RMAN restore performance is highly sensitive to the number of channels, the degree of parallelism, and whether backups were multiplexed. A restore using 16 RAC channels will dramatically outperform one using 4, and the optimal channel count depends on the storage target’s concurrency characteristics. Publishing 60 TB/hr without the channel configuration is like quoting a car’s 0–60 time without saying whether launch control was engaged.
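The sensitivity is easy to illustrate with a simple scaling model: aggregate restore throughput grows with channel count until the target's aggregate bandwidth or concurrency limit caps it. The per-channel rate and cap below are made-up illustrative values, not measured FlashBlade figures:

```python
# Toy model of RMAN restore scaling with channel count. The same
# storage target produces very different headline numbers depending
# on configuration. Per-channel rate and cap are assumptions.

def rman_restore_tb_hr(channels: int,
                       per_channel_tb_hr: float = 4.5,
                       target_cap_tb_hr: float = 62.0) -> float:
    """Aggregate restore throughput with n parallel channels."""
    return min(channels * per_channel_tb_hr, target_cap_tb_hr)

for n in (4, 8, 16, 32):
    print(n, rman_restore_tb_hr(n))
# 4 channels -> 18.0 TB/hr; 16 channels -> 62.0 TB/hr (at the cap).
```

Under these assumed parameters, the same system reports anywhere from 18 to 62 TB/hr depending solely on channel count, which is why a headline figure without the configuration tells a customer little.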

The test database composition matters. A 1 TB SQL Server database consisting primarily of sequential data (large tables with clustered indexes) will restore differently than a 1 TB database with heavy fragmentation, many small tables, and extensive index structures. The data file layout, filegroup configuration, and number of files per filegroup all affect restore I/O patterns.

Pure doesn’t disclose whether the results represent a single test run, the best of multiple runs, or an average across runs. Statistical rigor requires reporting variability — even a median and range would help customers set realistic expectations.

Where the Comparison Falls Short

Pure frames the argument as “FlashBlade versus backup appliances” without naming specific alternatives or providing head-to-head comparisons.

The competitive landscape for database backup and recovery targets includes NetApp ONTAP (with SnapRestore for near-instantaneous volume-level recovery from snapshots), Dell PowerProtect DD (with 68.7 TB/hr maximum aggregate throughput in current-generation appliances), Cohesity DataProtect, and purpose-built database protection solutions like Actifio (now Google Cloud Backup and DR) and Commvault’s IntelliSnap with hardware snapshots.

NetApp’s SnapRestore approach is architecturally different — “recovery” means reverting a volume to a snapshot state, which completes in seconds regardless of database size. For Oracle and SQL Server databases running directly on ONTAP storage, the recovery time is dominated by the database’s own crash recovery process (replaying redo/transaction logs since the snapshot), not by data transfer. This makes Pure’s throughput-based comparison inapplicable for snap-based recovery scenarios.

Dell PowerProtect DD’s published throughput specifications for restore operations would provide a direct comparison point. If Pure’s FlashBlade genuinely outperforms purpose-built backup appliances on restore throughput, a head-to-head comparison would be the most compelling evidence possible.

Without these comparisons, Pure’s numbers exist in a vacuum. 60 TB/hr is impressive, but impressive compared to what? A customer evaluating FlashBlade against NetApp or Dell for data protection needs relative performance, not absolute numbers.

The Platform Consolidation Argument

The strategic subtext of Pure’s article is platform consolidation: use FlashBlade as both your high-performance storage tier and your data protection target, eliminating the need for a separate backup appliance. This is a legitimate architectural proposition with real advantages — fewer systems to manage, one vendor relationship, shared capacity — and real trade-offs — no air gap between production and backup, single-vendor dependency, cost structure differences.

Pure doesn’t make this argument explicitly, which is a missed opportunity. A transparent analysis of the total cost of ownership for FlashBlade-as-backup-target versus a dedicated backup appliance, including hardware cost, software licensing, operational overhead, and rack space, would be more useful to customers than throughput numbers alone.

The Bottom Line

Pure Storage’s recovery speed blog post is better than the typical vendor fare. The thesis that recovery speed defines data protection is correct and underappreciated. The specific benchmark numbers — 60 TB/hr Oracle RMAN, 113 TB/hr SQL Server aggregate — are plausible and show internal consistency. The testing methodology uses real database operations rather than synthetic benchmarks.

But the article falls short of what it could be. The FlashBlade configuration isn’t disclosed. The RMAN channel configuration isn’t specified. There are no head-to-head comparisons against competing platforms. And the strategic argument for platform consolidation is implied rather than demonstrated with cost analysis.

Pure has the engineering and the numbers to make a strong, verifiable case for FlashBlade as a data protection platform. They’re 60% of the way there — further than most vendors get. Publishing the full test configuration, running comparable tests against NetApp SnapRestore and Dell PowerProtect DD, and disclosing the cost comparison would close the remaining gap between good marketing and compelling evidence.
