VAST Amplify's '6x Capacity' Claim: Exploiting a Real Crisis with Fake Math
VAST Data launches 'VAST Amplify' promising 6x effective capacity from existing SSDs during the worst flash shortage in decades. The SSD crisis is real. VAST's math is not. We dismantle the 6x claim, expose the median they're hiding, and explain why Flash Reclaim is a lock-in play disguised as altruism.
VAST Data has found its most cynical marketing angle yet: exploiting a genuine supply crisis to sell the same fraudulent data reduction math we’ve already debunked.
On January 27, 2026, VAST announced “VAST Amplify,” a program claiming to deliver “up to 6x or more effective capacity” from existing SSDs. Two weeks earlier, Blocks and Files ran a breathless piece about VAST’s “Flash Reclaim” initiative, dutifully reporting VAST’s claim of a 3.4x average data reduction ratio across their customer base. StorageReview published a headline declaring VAST “aims to make your existing SSDs feel 6x bigger.” The storage press printed all of it without doing a single calculation.
The SSD supply crisis is real. Enterprise SSD prices have surged, lead times have stretched to a year, and organizations genuinely need strategies to manage capacity. VAST saw this pain and thought: how can we use this to lock customers into our ecosystem while recycling the same inflated data reduction numbers we’ve been peddling for months? The result is Amplify — a program that combines real industry pain with fake math and wraps the whole package in manufactured urgency.
The “6x” Claim: Decomposing the Lie
VAST’s press release states that Amplify “can deliver up to 6x or more effective capacity, depending on workload characteristics and existing environment.” Let’s reverse-engineer how they manufacture this number, because VAST certainly doesn’t show their work.
The 6x claim stacks three multipliers. The first is their data reduction ratio — VAST cites a 3.4x “average” across their customer base. The second is their erasure coding efficiency versus triple replication, the same 2.91x straw man we exposed in our previous analysis. The third is their “SCM-based write efficiency,” which VAST claims extends QLC SSD endurance by absorbing writes into a Storage Class Memory buffer before destaging to flash.
Multiplied together: 3.4 × (some fraction of 2.91) × (some endurance multiplier) = a number VAST packages as “6x or more.”
Every single input to this multiplication is either inflated, misleading, or irrelevant to what customers will actually experience.
The Median They’re Hiding: 1.75x, Not 3.4x
VAST’s co-founder Jeff Denworth cites a 3.4x average data reduction ratio from “call home telemetry” across approximately one thousand customers. He also mentions — almost as an afterthought — that the median is 1.75x.
When the mean of a distribution is nearly double the median, you’re looking at extreme right skew. A small number of outlier deployments are dragging the average far above what a typical customer experiences. Denworth helpfully confirms this by citing the highest observed ratio: 724:1. That is not a typo. Seven hundred twenty-four to one.
For 724:1 data reduction to occur, the original data must have been 99.86% redundant. You could achieve this by storing 724 copies of the same file, or by running dedupe across thousands of near-identical VM snapshots, or by having a dataset so pathologically repetitive that it barely qualifies as distinct data. Whatever workload produced 724:1 tells us nothing about enterprise storage in the real world. It tells us that somewhere, someone stored an enormous amount of duplicated garbage, and VAST’s similarity reduction found the duplicates. Congratulations.
But that 724:1 outlier — along with a handful of similarly extreme cases — is what transforms a median of 1.75x into an “average” of 3.4x. This is textbook statistical manipulation. It’s the equivalent of saying “the average wealth of people in this bar is $10 billion” because Jeff Bezos walked in. Technically true. Functionally meaningless. Deliberately deceptive when used to set customer expectations.
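The skew is easy to reproduce. Here is a synthetic distribution — the numbers are invented for illustration, not VAST's actual telemetry — tuned so that a handful of extreme outliers produce exactly this mean/median gap:

```python
import statistics

# Hypothetical per-customer data reduction ratios: 990 customers clustered
# in an ordinary 1.2-2.5x range, plus 10 extreme dedupe outliers topped by
# a 724:1 case. All values are illustrative, not real telemetry.
typical = [1.2, 1.3, 1.4, 1.5, 1.6, 1.75, 1.9, 2.0, 2.2, 2.5] * 99
outliers = [724, 500, 250, 120, 60, 10, 8, 5, 3, 2]

drr = typical + outliers
print(f"median: {statistics.median(drr):.2f}x")  # 1.75x — the typical customer
print(f"mean:   {statistics.mean(drr):.2f}x")    # 3.40x — dragged up by 1% of the fleet
```

One percent of the fleet with pathological dedupe ratios is enough to double the "average" that a marketing department gets to quote.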
The median is what matters. More than half of VAST’s own customers see less than 1.75x data reduction. For the majority of workloads using modern compressed formats like Parquet, ORC, or Zstd-compressed data, this is entirely expected — you can’t compress data that’s already compressed. VAST knows this. Their own wireless telco example from Denworth’s earlier LinkedIn post showed 1.3:1 DRR on ORC-formatted data. That’s the reality for modern data stacks.
Here’s what VAST’s 6x claim looks like with honest inputs:
| Metric | VAST’s Marketing Number | Honest Number |
|---|---|---|
| Data reduction ratio | 3.4x (mean) | 1.75x (median) |
| Erasure coding efficiency | 2.91x (vs HDFS 3x replication) | 1.4x (vs modern HDFS EC) |
| Effective capacity multiplier | “6x or more” | ~2.45x |
Using the median DRR of 1.75x and the honest erasure coding comparison of 1.4x against modern HDFS with RS(6,3), the realistic capacity multiplier for a typical customer is approximately 1.75 × 1.4 = 2.45x. Not 6x. Not “or more.” Less than half of what VAST claims.
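The honest arithmetic fits in a few lines. The 1.4x EC figure is the rounded-down comparison of VAST's 150+4 layout against HDFS RS(6,3), as derived later in this piece:

```python
median_drr = 1.75     # VAST's own median, per Denworth's telemetry figures
ec_advantage = 1.4    # 150+4 vs HDFS RS(6,3), rounded down from (150/154)/(6/9) ≈ 1.46

realistic = median_drr * ec_advantage
print(f"realistic multiplier: {realistic:.2f}x")  # 2.45x, not "6x or more"
```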
And 2.45x is generous, because it still credits VAST with the full erasure coding advantage over HDFS. If you’re migrating from a system that already uses erasure coding — NetApp, Dell PowerScale, Pure FlashBlade, or even HDFS with RS(10,4) — then the “replication efficiency” multiplier drops toward 1.0x, and you’re left with just the data reduction ratio. For a customer running modern compressed formats on modern storage, the realistic capacity advantage of moving to VAST is in the range of 1.3-1.75x. That’s a legitimate engineering improvement. It’s also not the kind of number that generates press coverage, LinkedIn engagement, or $30 billion valuations.
The 2.91x Zombie: A Lie That Won’t Die
We documented the 2.91x straw man in detail last month, but VAST keeps using it because nobody in the storage press calls them on it.
The 2.91x “replication efficiency” multiplier compares VAST’s 150+4 erasure coding (2.6% overhead) against HDFS triple replication (200% overhead). The ratio works out to 3.0 / 1.026 ≈ 2.92x, which VAST reports as 2.91x. This comparison is a deliberate lie by omission. HDFS has supported erasure coding since Apache Hadoop 3.0 shipped in December 2017 — over eight years ago. The default RS(6,3) configuration gives 50% overhead. RS(10,4) gives 40% overhead. No competent data engineer has deployed new workloads on HDFS 3x replication since 2018.
Against modern HDFS erasure coding, VAST’s advantage drops from 2.91x to approximately 1.4x. Against other enterprise storage systems that already use erasure coding, the advantage drops further or vanishes entirely. Every time VAST uses the 2.91x multiplier — and they bake it into the Amplify “6x” claim — they are comparing their current architecture against a configuration that has been obsolete for eight years. Jeff Denworth knows this. VAST’s engineering team knows this. They use the comparison because it makes the numbers bigger, and because the storage press will print it without asking a single question.
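The efficiency ratios take five minutes to verify. A quick sketch comparing VAST's 150+4 layout against each baseline, using the overhead figures cited above:

```python
def usable_fraction(data_blocks: int, parity_blocks: int) -> float:
    """Fraction of raw capacity available for data under a data+parity EC scheme."""
    return data_blocks / (data_blocks + parity_blocks)

vast = usable_fraction(150, 4)    # ≈ 0.974
rep3 = 1 / 3                      # triple replication: 1 usable byte per 3 raw
rs63 = usable_fraction(6, 3)      # HDFS RS(6,3), available since Hadoop 3.0
rs104 = usable_fraction(10, 4)    # HDFS RS(10,4)

print(f"vs 3x replication: {vast / rep3:.2f}x")   # ≈ 2.92x — the straw man
print(f"vs RS(6,3):        {vast / rs63:.2f}x")   # ≈ 1.46x
print(f"vs RS(10,4):       {vast / rs104:.2f}x")  # ≈ 1.36x
```

Against any post-2017 HDFS deployment, the "replication efficiency" multiplier is half of what VAST stacks into the 6x headline.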
“Flash Reclaim”: Vendor Lock-in Wearing a Lab Coat
The Flash Reclaim component of Amplify is marketed as altruism: bring us your existing SSDs from any vendor’s system, and we’ll put them to work in VAST’s architecture with higher data reduction ratios. In the midst of an SSD supply crisis, this sounds like VAST is helping customers stretch their existing investments.
It is not altruism. It is a rip-and-replace migration program designed to convert competitors’ installed base into VAST customers, timed to exploit maximum supply-chain desperation.
The mechanics require you to decommission drives from your existing storage system — whether that’s Dell PowerStore, NetApp AFF, Pure FlashBlade, or anything else — and insert them into VAST’s DASE architecture. Once your SSDs are “consolidated into the VAST AI OS,” they’re running VAST’s proprietary firmware layer, VAST’s erasure coding, and VAST’s data reduction. Your data is now in VAST’s format, protected by VAST’s proprietary LDEC algorithms, managed by VAST’s software. This is one-way migration. VAST doesn’t offer tooling to extract your data back to a competitor’s format without a full data copy. The SSDs you “reclaimed” are now VAST SSDs in every way that matters. And when the flash shortage ends — as all supply crunches do — you’re locked into VAST’s ecosystem with drives you can’t easily take elsewhere.
Dell’s response to Flash Reclaim was blunt and technically sound. They warned that reusing SSDs from one storage platform in another risks data loss, voids warranties, and introduces endurance uncertainty because the new controller lacks historical wear-leveling data. QLC NAND has limited write endurance, and drives from a decommissioned system may be significantly worn — the receiving system has no visibility into how much life remains. Firmware compatibility is another concern: enterprise SSDs often run vendor-specific firmware customizations that may not translate across platforms. And the data security implications of “recycling” drives between systems without cryptographic erase create leakage risk that most compliance frameworks would flag immediately. These are legitimate engineering concerns, not competitive FUD. Physics doesn’t care about VAST’s marketing timeline.
The timing is not coincidental. VAST launched Amplify during the worst flash shortage in years, when customers are panicking about capacity and procurement teams are under enormous pressure to find solutions. This is the storage equivalent of a payday lender setting up shop next to the unemployment office. The need is real. The solution is predatory.
The SSD Crisis Is Real. VAST’s “Solution” Is Not.
Let’s be clear about what’s actually happening in the flash market, because VAST is exploiting legitimate pain to sell illegitimate math.
TrendForce reports enterprise SSD contract prices rising 53-58% quarter-over-quarter in Q1 2026, marking a new record for quarterly price increases. 30TB TLC enterprise SSDs have gone from approximately $102/TB ($3,062 per drive) in Q2 2025 to approximately $367/TB ($11,000 per drive) in Q1 2026 — a 257% increase that’s independently verified by multiple analyst firms. Lead times for high-capacity enterprise SSDs have stretched to 52 weeks. NAND production capacity is constrained because Samsung, SK Hynix, and Micron are prioritizing HBM manufacturing for AI GPU memory over NAND flash. NAND demand in 2026 is expected to grow 20-22% year over year while supply grows only 15-17%, a widening gap that IDC and TrendForce both project will persist into 2027.
VAST calls this the “worst storage rut in 40-plus years” with a “200 exabyte deficit.” The “40-plus years” framing is theatrical — VAST itself is barely nine years old, so they’re claiming authority over a historical period that predates their existence by three decades. The “200 exabyte” number appears to be VAST’s own extrapolation rather than a direct analyst figure. But the directional reality — demand exceeding supply — is broadly confirmed.
So what should organizations actually do about rising SSD costs? The options are straightforward and don’t require buying into VAST’s ecosystem.
The most obvious response is tiering. Most enterprise data becomes cold quickly. Storing cold data on HDDs — which have seen only 35% price increases compared to SSD’s 257% — is basic capacity management that every major storage platform supports natively. NetApp FabricPool, Dell PowerScale SmartPools, Pure’s cloud tiering, and even HDFS’s built-in storage policies all move cold data to cheaper media automatically. This isn’t novel. This is operational hygiene that has existed for decades.
Compression and deduplication work on any platform. VAST doesn’t have a monopoly on Zstd. The algorithms are open source. Any storage system can compress data, and the ratios you achieve depend on your data characteristics, not your vendor. If you’re already running compressed formats like Parquet or ORC, no storage vendor — including VAST — is going to magically find 3.4x of hidden redundancy. VAST’s own median of 1.75x confirms this.
Buying less flash and using it strategically is the advice that doesn’t generate vendor revenue, which is why no vendor says it. TrendForce and every independent analyst recommend the same thing: buy the minimum flash you need today, tier aggressively to disk, and wait for NAND supply to normalize. Flash prices will come down. They always do. Overbuying now — or migrating to a new platform during a crisis — locks in unnecessary cost and unnecessary risk.
VDURA, to their credit, at least had the honesty to frame their “Flash Relief Program” as a straightforward price undercut: they’ll beat VAST and WEKA quotes by 50% using a hybrid SSD/HDD architecture. Their 25PB example showed an all-flash approach growing from $8.5M to $24.5M between Q2 2025 and Q1 2026, while their mixed-fleet approach (20% SSD, 80% HDD) came in at $6.56M. Whether VDURA delivers on those claims is a separate question, but at least the pitch is “we cost less” rather than “we’ll magically make your SSDs hold 6x more data through proprietary algorithms we won’t let anyone audit.”
The Chris Mellor Problem: When One Author Becomes a Vendor’s Marketing Channel
We’ve called out Blocks and Files’ coverage of VAST in multiple previous articles, but the pattern has reached a point where it demands direct examination — not of the publication, but of the author.
Chris Mellor has covered VAST Data extensively. He covered Flash Reclaim on January 13. He covered VAST Amplify on January 27. He covered the $30 billion funding round on February 5. In none of these articles did he perform basic arithmetic on VAST’s claims. He reported the 3.4x average DRR without noting the 1.75x median or explaining what that statistical gap means for actual customers. He printed the “6x capacity” claim without decomposing how VAST derives it. He presented Flash Reclaim as a competitive market development without analyzing the lock-in implications or the engineering risks Dell raised.
This is not an occasional lapse. This is a pattern that spans years and dozens of articles. Mellor’s VAST coverage consistently exhibits the same characteristics: vendor terminology adopted without definition (“DASE architecture,” “similarity reduction,” “LDEC”), competitive positioning repeated without verification (“legacy parallel file systems are obsolete”), financial claims printed without sources (“margins way past 50 percent”), and extraordinary technical assertions presented as established fact rather than unaudited marketing.
Compare this with what Mellor doesn’t do. He doesn’t calculate whether the claimed numbers are mathematically consistent. He doesn’t ask for methodology behind benchmark claims. He doesn’t consult independent sources or competing vendors for technical counterpoints. He doesn’t distinguish between shipping products and roadmap promises. He doesn’t note when VAST’s claims contradict each other or have shifted between announcements — as when their Kafka performance claim quietly changed from “10x” to “604% more throughput” (7.04x) without explanation.
The storage industry has a small number of journalists who cover it regularly, and readers — especially enterprise buyers making multi-million dollar procurement decisions — place disproportionate trust in these writers. When an author with Mellor’s visibility and reach becomes a reliable conduit for vendor marketing, the effect is not neutral. It actively distorts the information environment that buyers depend on. A CTO reading Blocks and Files might reasonably believe that “6x capacity” is an independently reported finding rather than an unverified marketing claim transcribed from a press release. That CTO might then budget for 6x improvements, commit to a migration, and discover eighteen months later that their real-world ratio is 1.75x. By then, the SSDs are in VAST’s ecosystem, the data is in VAST’s format, and the switching cost is enormous.
Authors who consistently amplify vendor claims without verification should carry a credibility discount in readers’ minds. When you see a Mellor article about VAST, you should read it the way you’d read a sponsored post: possibly containing factual information, but with no guarantee that the claims have been independently examined. This isn’t personal — it’s the natural consequence of a track record. Trust is built through skepticism and verification, not through the volume of articles published.
The broader storage press has the same problem. StorageReview headlined their Amplify coverage “VAST Amplify Aims to Make Your Existing SSDs Feel 6x Bigger” — VAST’s marketing line printed under the publication’s own masthead. But Mellor’s coverage stands out because of its consistency across dozens of VAST articles and because Blocks and Files positions itself as independent analysis rather than news aggregation. When your analysis is indistinguishable from the vendor’s marketing department, the “analysis” label becomes misleading.
The Similarity Reduction Question Nobody Asks
VAST’s claimed advantage over standard compression and deduplication is “similarity reduction” — a technique that finds data blocks that are similar but not identical, extracts common patterns, and stores only the deltas. VAST describes this as a breakthrough that “breaks the trade-offs” of traditional data reduction.
Similarity-based deduplication is not new. Dell Data Domain has used variable-length deduplication with similar principles for over a decade in the backup space. Veritas NetBackup, Commvault, and Veeam all employ various forms of similarity detection. The academic literature on delta compression and similarity hashing dates to the 1990s. VAST’s contribution is applying these techniques inline on primary storage using their SCM write buffer as a staging area — a legitimate architectural approach, but not the revolution they market it as.
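To make the concept concrete, here is a toy similarity-reduction pass: a crude min-hash sketch groups near-duplicate chunks, and each near-duplicate is stored as a compressed XOR delta against a reference block. This is a generic illustration of the technique family, not VAST's actual algorithm (which is proprietary and unaudited):

```python
import zlib

CHUNK = 4096

def sketch(block: bytes, shingle: int = 16) -> int:
    """Crude similarity sketch: the minimum hash over fixed-size shingles.
    Near-identical blocks usually share their minimum shingle, so they land
    in the same bucket. Illustrative stand-in for real similarity hashing."""
    return min(hash(block[i:i + shingle])
               for i in range(0, len(block) - shingle + 1, 8))

def similarity_reduce(data: bytes) -> int:
    """Return the number of bytes actually stored after similarity reduction."""
    buckets = {}   # sketch -> reference block
    stored = 0
    for off in range(0, len(data), CHUNK):
        block = data[off:off + CHUNK]
        s = sketch(block)
        ref = buckets.get(s)
        if ref is None or len(ref) != len(block):
            buckets[s] = block
            stored += len(block)                   # first of its kind: store in full
        else:
            delta = bytes(a ^ b for a, b in zip(block, ref))
            stored += len(zlib.compress(delta))    # near-duplicate: mostly-zero delta
    return stored

# 100 near-identical 4 KiB blocks, each differing from the base by one byte.
base = bytes(range(256)) * 16
blocks = b"".join(base[:100] + bytes([i % 256]) + base[101:] for i in range(100))
stored = similarity_reduce(blocks)
print(f"raw {len(blocks)} B -> stored ~{stored} B")
```

On pathologically redundant input like this, the reduction ratio is enormous — which is exactly the point about the 724:1 outlier. Feed the same code Zstd-compressed Parquet and the deltas stop compressing, because there is no redundancy left to find.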
VAST claims their approach deduplicates data sets “24-40% better” than Dell Data Domain in their own internal testing. Even if we accept this self-reported number at face value, 24-40% incremental improvement over standard dedupe is meaningful but modest. It does not transform a 1.75x median into a 6x headline. And here’s what VAST doesn’t discuss: similarity reduction is computationally expensive. Finding similar-but-not-identical blocks across a global namespace requires significant CPU and memory resources. VAST offloads this to the destage path between SCM and QLC flash, which works under normal write loads. But when sustained write throughput exceeds the destage pipeline’s capacity, the SCM buffer fills, and write performance degrades. VAST publishes no sustained write throughput data under conditions where similarity reduction can’t keep pace with ingestion. They don’t publish degraded-mode performance characteristics. They just promise “6x” and let the press write the headline.
The SCM Supply Chain Irony
VAST’s architecture depends on Storage Class Memory for its write buffer — originally Intel Optane, which Intel discontinued in 2022. VAST has since transitioned to alternative SCM sources, reportedly including Kioxia FL6 drives. The press release for Amplify touts “SCM-based write efficiency” as a key component of the capacity multiplier.
The irony of launching a program to address supply constraints while depending on a technology whose primary supplier exited the market four years ago apparently escapes VAST’s marketing team. VAST says they have “a guaranteed supply chain of alternatives that have equivalent endurance and performance characteristics to Intel Optane.” This claim is unverifiable. The alternative SCM sources have not been independently benchmarked against Optane. The “equivalent performance” assertion comes from the same company that claims 6x capacity and 99.9991% uptime — a company whose relationship with verifiable claims we’ve documented extensively.
If your architecture’s core differentiator depends on a technology category that’s shrinking rather than growing, that’s a risk factor, not a feature. Amplify’s marketing treats SCM availability as solved. The broader industry context suggests it’s an ongoing constraint that VAST would rather not discuss in detail.
The $30 Billion Context
On February 5, Blocks and Files reported that VAST is raising approximately $1 billion in a secondary round at a $30 billion valuation, with the bulk of the cash going to early investors selling shares rather than into VAST’s operations.
This timing matters for understanding Amplify. VAST’s $30 billion valuation — approximately 50x their estimated $600 million revenue — requires sustained narrative momentum. For comparison, Pure Storage trades at approximately 8-10x revenue. NetApp trades at approximately 4-5x revenue. VAST needs to justify a multiple that is five to twelve times higher than established, profitable storage companies.
That narrative needs impressive numbers, expanding market positioning, and the appearance of explosive growth. Amplify generates press coverage, positions VAST as the solution to a crisis, and creates urgency for customer migration — all of which support the valuation story regardless of whether customers actually see 6x capacity improvements. Programs like Amplify exist to maintain the narrative that justifies this multiple. The math behind the marketing is secondary to the marketing itself. We covered the valuation math in detail in our previous analysis.
What 6x Would Actually Require
Let’s work backward from the 6x claim and see what data characteristics are necessary to achieve it.
VAST’s 150+4 erasure coding at 2.6% overhead gives a storage efficiency of approximately 0.974 — you use 97.4% of raw capacity for data. This is genuinely efficient, but it’s a fixed property of the erasure coding configuration, not a variable that changes with Amplify.
To reach 6x “effective capacity,” the remaining multiplier must come from data reduction. If we set the erasure coding efficiency as a 1.0x baseline (since you’re not gaining capacity, just not losing much to parity), then the data reduction ratio required for 6x is simply 6:1. VAST’s own median DRR is 1.75x. Their mean is 3.4x. To hit 6x from data reduction, you need to be well above even the inflated average — deep into the tail of their distribution where the extreme outliers live.
Alternatively, if VAST stacks the 2.91x replication efficiency multiplier (the HDFS 3x straw man), then 6x requires only 6 / 2.91 = 2.06x DRR, which is above the median but plausible for certain uncompressed workloads. This is almost certainly how VAST constructs the number: stack the fraudulent baseline comparison with above-average data reduction and call it “up to 6x.”
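Worked backward, the construction is transparent. Assuming, as the decomposition above does, that the headline stacks the straw-man multiplier:

```python
target = 6.0         # the "6x effective capacity" headline
straw_man = 2.91     # "replication efficiency" vs HDFS 3x replication
median_drr = 1.75    # VAST's own median

needed_honest = target               # EC as ~1.0x baseline: all 6x must come from DRR
needed_stacked = target / straw_man  # with the straw man doing half the work

print(f"DRR needed without the straw man: {needed_honest:.2f}:1")   # 6.00 — deep in the tail
print(f"DRR needed with the straw man:    {needed_stacked:.2f}:1")  # 2.06 — above the median
```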
But no honest reading of “6x effective capacity” communicates “you need to be migrating from HDFS with triple replication while storing highly compressible uncompressed data.” The headline communicates “your SSDs will hold 6x more data,” which is false for the majority of real-world deployments.
The Actual Math for Real Customers
Here’s what actual customers should calculate before believing anything VAST claims about Amplify.
If you’re running modern compressed formats (Parquet, ORC, Zstd) on a system with erasure coding, your realistic capacity advantage from moving to VAST is approximately 1.3-1.75x from data reduction (reflecting already-compressed data and VAST’s own median DRR) multiplied by 1.0-1.4x from erasure coding efficiency (depending on your current system’s protection scheme). The total realistic range is 1.3x to 2.45x.
If you’re running legacy uncompressed data on HDFS with triple replication — the scenario VAST constructs all their examples around — then yes, you’ll see larger improvements. But moving to literally any modern storage platform with compression and erasure coding would show similar gains. VAST isn’t special in this scenario. Modern engineering is.
| Your Current Setup | Realistic VAST Advantage | VAST’s Claimed Advantage |
|---|---|---|
| Modern EC + compressed formats | 1.3-1.75x | “6x or more” |
| Modern EC + uncompressed data | 1.75-2.5x | “6x or more” |
| HDFS 3x replication + compressed | 1.9-2.6x | “6x or more” |
| HDFS 3x replication + uncompressed | 2.5-4.1x | “6x or more” |
| Pathological legacy configuration | 4-6x+ | “6x or more” |
The “6x or more” only appears in the pathological case — a customer running the most wasteful possible configuration of legacy software with uncompressed data. VAST’s headline number applies to a scenario that describes almost nobody evaluating their product in 2026.
What Honest Marketing Would Look Like
VAST could make defensible claims about Amplify. They could say: “For customers migrating from legacy triplicated systems with uncompressed data, VAST’s architecture can deliver 3-4x effective capacity improvements through modern erasure coding and data reduction. For customers on modern platforms with compressed data, improvements range from 1.3-1.75x, consistent with our median customer experience of 1.75x DRR.”
That would be honest. It would also be unremarkable, which is why VAST doesn’t say it.
Instead, they take the extreme tail of their distribution, stack it with a straw man baseline comparison, add a vague “SCM write efficiency” multiplier, and arrive at “6x or more” — a number that more than half of their own customers will never see, built on a comparison that has been obsolete since Hadoop 3.0 shipped in 2017.
Conclusion: Same Fraud, New Wrapper
VAST Amplify is not a new program. It’s the same data reduction fraud in a new wrapper, this time exploiting genuine supply-chain panic to create urgency. The “6x” headline is constructed from the same inflated averages, the same obsolete baseline comparisons, and the same refusal to show methodology that we’ve documented across nine previous analyses of VAST’s marketing.
The SSD crisis is real. TrendForce’s price data is independently verified. Organizations genuinely face difficult capacity decisions. And VAST has calculated — correctly, so far — that desperation makes customers less likely to check the math.
But the math doesn’t change because flash prices went up. The median DRR is still 1.75x. The 2.91x replication efficiency multiplier is still a comparison against an eight-year-old configuration. The “6x” is still a number constructed from outliers and straw men. And Flash Reclaim is still a one-way migration into VAST’s proprietary ecosystem, dressed up as crisis relief.
The storage press will continue to print VAST’s numbers. Authors like Chris Mellor will continue to transcribe press releases into articles without performing fifteen minutes of arithmetic. LinkedIn will continue to amplify claims that collapse under basic scrutiny. And customers who don’t do the math will continue to budget for 6x improvements that deliver 1.75x.
Read vendor claims with the same skepticism you’d apply to any extraordinary assertion: show me the methodology, show me the median instead of the mean, show me the independently audited benchmark. When a vendor responds to those requests with press releases instead of data, you have your answer.
We’ll keep doing the math.
References
[1] VAST Data, “VAST Data Launches VAST Amplify to Help Organizations Multiply Effective Flash Capacity Amid Industry-Wide Supply Constraints,” GlobeNewsWire, January 27, 2026.
[2] Blocks and Files, “VAST Data’s Flash Reclaim attacks competitors’ installed bases,” January 13, 2026. https://blocksandfiles.com/2026/01/13/vast-datas-flash-reclaim-attacks-competitors-installed-bases/
[3] StorageReview, “VAST Amplify Aims to Make Your Existing SSDs Feel 6x Bigger,” January 2026. https://www.storagereview.com/news/vast-amplify-aims-to-make-your-existing-ssds-feel-6x-bigger
[4] Blocks and Files, “Dell warns against reusing SSDs as flash shortages bite,” January 15, 2026. https://blocksandfiles.com/2026/01/15/dell-flash-reclaim/
[5] TrendForce, “Memory Price Outlook for 1Q26 Sharply Upgraded,” February 2, 2026. https://www.trendforce.com/presscenter/news/20260202-12911.html
[6] TrendForce, “Memory Makers Prioritize Server Applications, Driving Across-the-Board Price Increases in 1Q26,” January 5, 2026. https://www.trendforce.com/presscenter/news/20260105-12860.html
[7] StorageMath, “VAST Data’s ‘29x Data Reduction’ Claims: The Storage Industry’s Most Brazen Lies.” /posts/vast-data-reduction-claims-misleading-math/
[8] StorageMath, “VAST Data’s $30B Valuation Ignores Operational Complexity Costs.” /posts/vast-data-valuation-operational-complexity-cost/
[9] StorageMath, “The VAST Data Marketing Machine: When Tech Journalism Becomes Promotional Content.” /posts/vast-data-marketing-machine/
[10] VAST Data, “Similarity Reduction: VAST Data’s Report from the Field.” https://www.vastdata.com/blog/similarity-reduction-report-from-the-field
[11] Blocks and Files, “VAST Data plans funding round so early stock holders can get cash,” February 5, 2026. https://blocksandfiles.com/2026/02/05/vast-data-plans-funding-round-so-early-stock-holders-can-get-cash/
[12] VDURA, “Flash Relief Program.” https://www.vdura.com/flash-relief-program/