When 53% of Vendors Are 'Leaders': The GigaOm Primary Storage Radar and Analyst Report Theater
Analysis of GigaOm's Primary Storage Radar methodology - where 10 out of 19 vendors achieve 'Leader' status, 'Outperformer' means predicted future development, and customers need to understand what radar positioning actually tells them.
GigaOm’s latest Primary Storage Radar positions Pure Storage and Hitachi Vantara as the top vendors, with Dell, NetApp, VAST Data, and HPE close behind [1]. The report evaluates 19 vendors across key features, emerging features, and business criteria. Pure Storage edges ahead on unweighted scores, with Dell second and Hitachi Vantara third. The graphic shows concentric circles with Leaders clustered near the center, Challengers further out, and Entrants at the periphery.
This sounds like useful analysis until you examine the methodology. Ten of nineteen vendors - 53% of participants - land in the “Leaders” ring. Hitachi Vantara and HPE receive “Outperformer” status based on “expected fast development over the next 12 to 18 months.” The report merges what were previously separate enterprise and mid-market radars into a single graphic, eliminating whatever workload-specific guidance those separate reports might have provided.
GigaOm analyst Whit Walters comments that “storage decisions now require attention from senior leadership because these platforms directly impact an organization’s ability to deploy AI applications and recover from cyberattacks” [2]. This statement is accurate. The question is whether a radar diagram where half the vendors are leaders helps senior leadership make better decisions.
The Leader Inflation Problem
When 53% of evaluated vendors achieve “Leader” status, the designation loses discriminating value. The term implies standing apart from competitors, but GigaOm’s methodology produces a crowded winners’ circle where Pure Storage, Hitachi Vantara, Dell, NetApp, VAST Data, HPE, and four others all claim the same top-tier position. The visual differentiation comes from small positioning differences within the Leaders ring - Pure Storage sits slightly closer to the center than Hitachi Vantara, for example.
This inflation stands out even within the analyst industry. Gartner’s 2024 Magic Quadrant for Primary Storage Platforms named five Leaders: Pure Storage, HPE, NetApp, IBM, and Dell [3]. That’s 5 out of roughly 12 evaluated vendors - selective enough to retain meaning. GigaOm’s 53% leader rate is notably more permissive. The industry dynamic is well understood: vendors pay to participate, vendors use “Leader” positioning in marketing materials, and generous leader designations keep paying participants satisfied while maintaining the appearance of rigor.
The mathematics of differentiation work against this approach. If you’re evaluating storage for a specific deployment and ten vendors are “Leaders,” you haven’t narrowed your selection. You’ve ruled out only the nine vendors GigaOm placed in its outer rings - plus whoever didn’t appear at all because they couldn’t afford to participate, chose not to, or lack an enterprise-credible offering. The radar doesn’t tell you which of the ten leaders fits your workload, budget constraints, or operational requirements.
Consider what “Leader” should mean in a market assessment. The word implies front-runners - a small group that has pulled ahead of competitors on meaningful metrics. When Gartner designates 5 of 12 vendors as leaders (42%), that’s aggressive but defensible - nearly half the market might genuinely offer leading capabilities. When GigaOm designates 10 of 19 vendors as leaders (53%), the term starts to lose meaning. You’re no longer identifying exceptional vendors; you’re confirming that most established players meet baseline requirements.
The Outperformer Speculation
GigaOm awards “Outperformer” status to Hitachi Vantara and HPE in the Leaders ring, plus WEKA in the Challengers ring, based on expected development velocity over the next 12-18 months [2]. This designation appears alongside the current-state evaluation, implying that these vendors will improve faster than competitors. The basis for this prediction isn’t disclosed in the public materials.
Predicting vendor development trajectories requires information that analysts rarely possess: internal roadmaps, engineering team capabilities, R&D budget allocation, competitive response strategies, and execution track records. Vendors share sanitized roadmap information with analysts under NDA, but this information is inherently promotional - vendors emphasize planned features and minimize delays, pivots, and deprioritizations.
Hitachi Vantara has received “Leader and Outperformer” designation for two consecutive years [2]. If they were outperforming in 2024’s assessment, they should have moved further ahead of competitors by 2025. Did they? The current report positions them second behind Pure Storage. The “Outperformer” designation appears to roll forward regardless of whether the predicted outperformance materialized.
This pattern creates a self-reinforcing dynamic. Vendors showcase “Outperformer” status in marketing materials. Customers see the designation as analyst endorsement. The vendor gets included in more evaluations. Revenue grows. Next year, the analyst maintains the designation because the vendor remains a significant paying participant. At no point does anyone verify whether the predicted outperformance occurred.
Storage professionals making purchasing decisions need current capabilities, not speculative futures. If HPE’s storage will be dramatically better in 18 months, that matters for purchases happening in 18 months. Today’s purchase needs today’s assessment. The “Outperformer” designation conflates these timelines in ways that serve marketing more than evaluation.
Methodology Weighting Games
GigaOm’s radar positions vendors based on weighted scores across three categories: key features, emerging features, and business criteria [1]. Key features and business criteria receive higher weighting; emerging features receive lower weighting. This approach inherently advantages incumbents with mature feature sets and established business operations over newer entrants who might solve problems differently.
The specific weights aren’t published in public materials. Customers can’t determine whether Pure Storage’s slight edge over Hitachi Vantara reflects meaningful differentiation or weighting choices that happen to favor Pure’s feature mix. Different weighting schemes would produce different rankings from the same underlying scores.
This matters because weighting embeds assumptions about what customers value. If an analyst weights “AI/ML workload optimization” at 15% and “mature replication features” at 25%, incumbent vendors with years of replication development beat newer vendors with AI-focused architectures. If the weights reverse, the ranking changes. Neither weighting is objectively correct - they reflect the analyst’s assumptions about market priorities.
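To make that concrete, here is a minimal sketch in Python - the vendor names, scores, and weights are all invented, since GigaOm publishes none of them - showing how the same raw scores produce opposite rankings under two weighting schemes:

```python
# Hypothetical illustration: the same raw scores ranked under two weighting schemes.
# Vendor names, scores, and weights are invented; GigaOm does not publish its weights.

scores = {
    "Incumbent A": {"ai_ml_optimization": 3.0, "mature_replication": 4.5},
    "Newcomer B":  {"ai_ml_optimization": 4.5, "mature_replication": 3.0},
}

def rank(weights):
    """Return vendors ordered by weighted score, highest first."""
    totals = {
        vendor: sum(weights[criterion] * value for criterion, value in criteria.items())
        for vendor, criteria in scores.items()
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Weighting that favors mature features: the incumbent comes out on top.
print(rank({"ai_ml_optimization": 0.15, "mature_replication": 0.25}))
# [('Incumbent A', 1.575), ('Newcomer B', 1.425)]

# Reverse the emphasis and the newcomer wins - same underlying scores.
print(rank({"ai_ml_optimization": 0.25, "mature_replication": 0.15}))
# [('Newcomer B', 1.575), ('Incumbent A', 1.425)]
```

Neither ordering is wrong; each simply encodes an assumption about what matters. A published radar makes that assumption on every customer’s behalf.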
Organizations with specific requirements need evaluation frameworks that match their priorities. A media company optimizing for large sequential reads has different needs than a financial services firm optimizing for transactional latency. A radar that averages across all use cases provides no guidance for either. The weighting methodology determines outcomes without necessarily reflecting any particular customer’s actual needs.
The business criteria category introduces additional opacity. What business factors get scored? Vendor financial stability? Support quality? Ecosystem partnerships? Pricing competitiveness? Each factor could justify different weights, and the specific choices significantly impact final positioning. Without methodological transparency, the radar functions as analyst opinion with quantitative decoration rather than rigorous assessment.
The Workload Specificity Problem
GigaOm merged separate enterprise and mid-market primary storage radars into a single report covering “traditional applications, hybrid cloud environments, AI/ML workloads and edge computing” [1]. This consolidation eliminates workload-specific guidance that separate reports might have provided.
Primary storage for traditional Oracle databases has fundamentally different requirements than primary storage for Kubernetes-native applications. AI training checkpoint storage differs from VDI boot storm optimization. Edge deployments face constraints that enterprise data centers don’t encounter. A single radar cannot meaningfully evaluate all these use cases simultaneously.
The merger likely reflects business reality. Producing separate reports costs more. Vendors prefer appearing in fewer, higher-profile reports rather than multiple specialized assessments. Customers gravitate toward comprehensive reports that seem to answer all their questions at once. But comprehensiveness trades against specificity. A report covering everything evaluates nothing in particular.
Consider a customer evaluating storage for AI training workloads. They need high sequential throughput, checkpoint recovery performance, and GPU cluster integration. The GigaOm radar positions VAST Data in the Leaders ring - relevant for AI workloads. It also positions Dell, NetApp, and HPE as Leaders - vendors with strong traditional enterprise offerings but varying AI optimization. The radar provides no differentiation between these use cases. A customer selecting based on Leader status might choose any of them and discover post-deployment that their specific workload performs poorly.
Analyst reports that claim to cover “AI/ML workloads” alongside “traditional applications” treat these as comparable categories. They’re not. AI training storage operates at fundamentally different scales and access patterns than transactional database storage. Evaluating both in a single framework requires either making the framework so generic it provides no guidance, or making assumptions that favor one category over another. The resulting radar helps neither customer segment effectively.
What Analysts Actually Provide
Before examining the business model, it’s worth acknowledging what analyst reports genuinely offer. GigaOm’s radar consolidates information about 19 vendors into a single document - research that would take an internal team weeks to compile independently. The evaluation criteria, even if imperfectly weighted, provide a framework for thinking about vendor capabilities. The report filters out vendors who lack enterprise credibility, saving evaluation time. Analysts often have access to vendor roadmaps and technical details that customers can’t easily obtain.
These benefits are real. The question isn’t whether analyst reports have value - they do. The question is whether the value justifies treating radar positioning as authoritative guidance for purchasing decisions, and whether the business model creates distortions that customers should understand.
The Business Model Reality
Analyst reports operate on a business model that creates inherent tensions. Vendors pay to participate in evaluations. Vendors pay to license reports for marketing use. Vendors pay for analyst consulting time. The analysts producing the reports depend on vendor revenue to sustain their businesses.
This doesn’t mean analyst reports are fabricated. Analysts maintain reputations by producing defensible assessments. A report that inexplicably positioned a weak vendor as a leader would damage analyst credibility. But the incentive structure creates pressure toward leader inflation, gentle criticism, and methodology opacity.
Vendors who pay for reports expect to appear favorably. If GigaOm produced a radar where 19 vendors participated and 2 were leaders, 17 vendors would question their participation fee value. Next year, fewer would participate. Revenue would decline. The business model requires keeping most participants happy enough to return, which means most participants need to feel their positioning justified the investment.
This explains why 53% leader rates persist across the analyst industry. It’s not that 53% of storage vendors have genuinely superior offerings. It’s that the business model requires keeping paying participants satisfied while maintaining enough differentiation to make the reports seem useful. The equilibrium produces crowded leader categories where real differentiation gets lost.
Customers using analyst reports for evaluation should understand this dynamic. The reports aren’t independent consumer protection assessments. They’re vendor-funded market research that serves analyst business development alongside customer education. The information has value, but the incentive structure shapes what information appears and how it’s presented.
What Would Actually Help
Organizations evaluating primary storage need information that radar diagrams don’t provide.
Workload-specific performance data matters. How does each platform perform for your dominant access patterns? Sequential vs. random, read vs. write, large vs. small objects. Generic “performance” scores averaged across workload types obscure whether a platform matches your specific requirements.
Total cost of ownership over realistic timeframes provides essential context. Initial acquisition cost, ongoing licensing, capacity expansion, staff training, migration complexity. A platform that appears cheapest on initial purchase might cost more over five years. Radar positioning based on “business criteria” doesn’t surface these calculations.
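A rough sketch of that arithmetic, with figures invented purely for illustration rather than drawn from any vendor’s price list:

```python
# Hypothetical five-year TCO comparison. Every figure below is invented for
# illustration; substitute quoted prices and your own operational estimates.

def five_year_tco(acquisition, annual_licensing, expansion_per_year, training, migration):
    """One-time costs plus five years of recurring costs."""
    return acquisition + training + migration + 5 * (annual_licensing + expansion_per_year)

# Platform X looks cheaper up front; Platform Y carries lower recurring costs.
platform_x = five_year_tco(acquisition=250_000, annual_licensing=60_000,
                           expansion_per_year=30_000, training=10_000, migration=20_000)
platform_y = five_year_tco(acquisition=320_000, annual_licensing=35_000,
                           expansion_per_year=25_000, training=15_000, migration=20_000)

print(f"Platform X: ${platform_x:,}")  # Platform X: $730,000
print(f"Platform Y: ${platform_y:,}")  # Platform Y: $655,000 - cheaper over five years
```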
Operational complexity determines real-world success. How many staff hours does management require? What’s the learning curve for operations teams? How does the vendor handle support escalations? These factors often matter more than feature differentials that radar charts emphasize.
Failure mode behavior affects reliability more than checkbox features. How does the platform perform during degraded operation? What happens when a controller fails during peak load? How quickly can data be recovered from backup? No vendor gets radar points for transparent failure mode documentation.
Reference customers with similar deployments provide insight that analyst assessments can’t match. An organization running the same workload on the same platform at similar scale can describe real-world experience that methodology-weighted feature scoring cannot capture. Radar reports don’t facilitate these connections.
The Alternative to Analyst Theater
Skip the radar. Identify your requirements: workload characteristics, performance needs, capacity requirements, budget constraints, operational capabilities. Create a weighted evaluation framework that reflects your specific priorities, not an analyst’s assumptions about market-wide importance.
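One possible shape for such a framework, sketched below with placeholder criteria, weights, and scores: hard requirements act as pass/fail gates, and only vendors that clear them get ranked against weights you chose rather than weights an analyst chose.

```python
# A sketch of a customer-specific scorecard. Hard requirements act as pass/fail
# gates; weighted criteria rank whatever clears them. All criteria, weights, and
# scores here are placeholders - substitute your own.

HARD_REQUIREMENTS = ["nvme_over_tcp", "immutable_snapshots"]   # must-haves, not scored

WEIGHTS = {                       # should sum to 1.0; tune to your priorities
    "sequential_throughput": 0.35,
    "transactional_latency": 0.10,
    "operational_complexity": 0.30,
    "five_year_tco": 0.25,
}

def evaluate(vendor):
    """Return None if a hard requirement is missing, else the weighted score (1-5 scale)."""
    if not all(vendor["features"].get(req) for req in HARD_REQUIREMENTS):
        return None
    return sum(WEIGHTS[c] * vendor["scores"][c] for c in WEIGHTS)

candidate = {
    "features": {"nvme_over_tcp": True, "immutable_snapshots": True},
    "scores": {"sequential_throughput": 4, "transactional_latency": 3,
               "operational_complexity": 4, "five_year_tco": 3},
}
print(evaluate(candidate))  # 3.65
```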
Request specific information from vendors: benchmark data for your workload patterns, reference customers you can contact, detailed pricing for your projected capacity, support escalation procedures. Evaluate responses against your framework, not against analyst positioning.
Conduct proof-of-concept testing with shortlisted vendors. Real performance in your environment matters more than analyst scores based on vendor-supplied specifications. POC testing reveals operational complexity, integration challenges, and support quality that radar positioning cannot capture.
Talk to peers running similar deployments. Storage engineers who’ve operated a platform for years have insights that vendors won’t share and analysts don’t possess. Industry forums, user groups, and professional networks provide access to unfiltered operational experience.
This approach requires more work than reading a radar report. It doesn’t produce a simple graphic showing which vendor is “closest to optimal.” But it generates information relevant to your specific decision rather than averaged assessments that serve no particular customer’s actual needs.
What the Radar Actually Tells You
The GigaOm Primary Storage Radar confirms that Pure Storage, Hitachi Vantara, Dell, NetApp, VAST Data, and HPE all have primary storage offerings that meet baseline enterprise requirements. This isn’t controversial - these vendors have shipped enterprise storage for years (or in VAST’s case, built offerings specifically for enterprise markets).
The radar indicates that GigaOm’s methodology, with its specific weighting of key features, emerging features, and business criteria, produces rankings where Pure Storage scores slightly higher than Hitachi Vantara. Whether that methodology matches your evaluation priorities is unknowable without access to the specific weights and scoring criteria.
The “Outperformer” designation tells you that GigaOm analysts believe Hitachi Vantara and HPE will develop faster than competitors over the next 12-18 months. Whether this prediction proves accurate remains to be seen. Whether it matters for your current purchasing decision is unclear.
The merging of enterprise and mid-market radars tells you that GigaOm considers these segments sufficiently similar to evaluate together. Whether your deployment fits the assumptions underlying that merger is for you to determine.
What the radar doesn’t tell you: which platform performs best for your specific workload, which vendor’s pricing model matches your budget constraints, which operations team will support you effectively when problems occur, and which platform you’ll regret choosing three years from now. These questions require investigation that radar reports can’t provide.
The Vendor Marketing Cycle
Pure Storage will now market their “#1 positioning in GigaOm’s Primary Storage Radar.” Hitachi Vantara will emphasize their “Leader and Outperformer” status. Dell, NetApp, HPE, and VAST Data will highlight their Leader positioning. The nine vendors outside the Leaders ring will either find favorable quotes to extract or quietly decline to mention the report.
Next year, the cycle repeats. Vendors pay to participate. Analysts produce a radar with methodology that ensures most participants feel adequately recognized. Vendors market their positioning. Customers see the marketing and assume analyst endorsement reflects independent evaluation. The system perpetuates because all parties - analysts, vendors, and even customers seeking simple answers to complex questions - derive value from the process.
The problem isn’t that the information is fabricated. Pure Storage genuinely has strong primary storage offerings. So do Hitachi Vantara, Dell, NetApp, and others. The problem is that radar positioning conflates vendor marketing with technical evaluation, methodology choices with objective truth, and analyst business models with customer interests.
Storage decisions should be based on your requirements, your workloads, and your operational capabilities - not on whether a vendor landed slightly closer to the center of a circle where 53% of participants achieved Leader status.
Because when half the vendors in a market are “Leaders,” nobody is.
References
[1] Blocks and Files, “Pure Storage and Hitachi Vantara lead GigaOm’s Primary Storage Radar rankings,” December 16, 2025. https://blocksandfiles.com/2025/12/16/pure-storage-and-hitachi-vantara-lead-gigaoms-primary-storage-radar-rankings/
[2] PR Newswire, “GigaOm Radar Recognizes Hitachi Vantara as a Leader and Outperformer in Primary Storage for Second Consecutive Year,” December 2025. https://www.prnewswire.com/news-releases/gigaom-radar-recognizes-hitachi-vantara-as-a-leader-and-outperformer-in-primary-storage-for-second-consecutive-year-302640579.html
[3] Blocks and Files, “Gartner moves Magic Quadrant goalposts for primary storage,” September 2024. https://blocksandfiles.com/2024/09/23/gartners-array-supplier-magic-quadrant/
StorageMath analyzes vendor claims, analyst reports, and industry marketing with equal scrutiny. GigaOm produces useful research. Their radar methodology creates leader inflation that serves the analyst business model more than customer decision-making. Organizations evaluating storage should use analyst reports as one input among many, not as authoritative rankings.