Keka HR scores 100 on Claude and 15 on Perplexity. That's an 85-point gap on the exact same set of prompts, run in the same week.
This keeps showing up across the May 2026 Cited Index. The top 15 brands by platform disparity show gaps ranging from 55 to 85 points. And 77 of the 202 brands we track are so concentrated on one platform that a single model update could wipe out most of their AI visibility.
A composite "AI visibility score" hides all of this. Each platform has its own source preferences and its own blind spots — and the gaps are large enough to change your strategy.
The 15 Biggest Platform Gaps
We measured mention rate — the percentage of AI responses that include a brand — across ChatGPT, Gemini, Claude, and Perplexity for every brand in the Cited Index. Then we calculated the gap between each brand's highest-scoring and lowest-scoring platform.
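The arithmetic is simple: per brand, take the highest and lowest platform score and subtract. A minimal sketch — `platform_gap` is an illustrative helper, not a Cited tool, and the ChatGPT and Gemini values in the example are placeholders (only Keka HR's Claude and Perplexity scores appear in the data below):

```python
# Illustrative helper: given one brand's per-platform mention-rate
# scores (0-100), return the best platform, worst platform, and gap.

def platform_gap(scores: dict[str, float]) -> tuple[str, str, float]:
    best = max(scores, key=scores.get)
    worst = min(scores, key=scores.get)
    return best, worst, scores[best] - scores[worst]

# Keka HR: Claude 100 and Perplexity 15 are real; the other two
# scores here are placeholders for demonstration.
print(platform_gap({"ChatGPT": 40, "Gemini": 35,
                    "Claude": 100, "Perplexity": 15}))
# -> ('Claude', 'Perplexity', 85)
```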
| Brand | Category | Best Platform | Worst Platform | Gap |
|---|---|---|---|---|
| Keka HR | HR & Payroll | Claude: 100 | Perplexity: 15 | 85 |
| Dot & Key | Skincare & Beauty | Claude: 88 | ChatGPT: 8 | 80 |
| Mokobara | Travel & Luggage | Gemini: 79.2 | Claude: 0 | 79.2 |
| RazorpayX Payroll | HR & Payroll | Gemini: 70 | Claude: 0 | 70 |
| LeadSquared | CRM & Sales | Gemini: 90 | Perplexity: 25 | 65 |
| sumHR | HR & Payroll | ChatGPT: 65 | Perplexity: 0 | 65 |
| Re'equil | Skincare & Beauty | Gemini: 84 | Perplexity: 20 | 64 |
| Dr. Sheth's | Skincare & Beauty | ChatGPT: 60 | Claude: 0 | 60 |
| Kylas CRM | CRM & Sales | ChatGPT: 60 | Gemini: 0 | 60 |
| Darwinbox | HR & Payroll | Claude: 80 | Perplexity: 20 | 60 |
| Foxtale | Skincare & Beauty | ChatGPT: 64 | Perplexity: 8 | 56 |
| Yellow.ai | Conversational AI | ChatGPT: 100 | Perplexity: 44 | 56 |
| Freshsales | CRM & Sales | ChatGPT: 90 | Perplexity: 35 | 55 |
| HubSpot CRM | CRM & Sales | ChatGPT: 55 | Gemini: 0 | 55 |
| Zoho Payroll | HR & Payroll | ChatGPT: 85 | Perplexity: 30 | 55 |
Keka HR is the #1 HR brand on Claude — and barely registers on Perplexity. Mokobara dominates Gemini — and scores zero on Claude. The "visibility story" for every brand in this table changes depending on which platform you check.
Per-Platform Bias Patterns
The bias isn't random. Each platform shows consistent tendencies across the full 202-brand dataset.
| Platform | Avg Score | Std Dev | Top Brand |
|---|---|---|---|
| ChatGPT | 20.7 | 24.7 | Zoho CRM (100) |
| Gemini | 18.7 | 23.4 | Minimalist (96) |
| Claude | 17.2 | 22.8 | Keka HR (100) |
| Perplexity | 14.0 | 17.8 | Zoho CRM (85) |
Perplexity is the hardest platform. Average score of 14.0 — significantly lower than ChatGPT's 20.7. Perplexity's top score across all 202 brands is just 85 (Zoho CRM). ChatGPT, Gemini, and Claude all have brands scoring 96 or higher. If your brand has decent visibility on Perplexity, that's a strong signal. If you're invisible on Perplexity but visible elsewhere, that's the norm — not a crisis.
ChatGPT is the most generous. Highest average score (20.7), highest standard deviation (24.7), and the only platform where multiple brands score 100 across different categories. ChatGPT's training data is the broadest, and its web search gives it access to the widest range of sources. A strong ChatGPT score alone doesn't mean much — the bar is lower there.
So a brand that scores 60 on ChatGPT and 20 on Perplexity isn't necessarily underperforming on Perplexity. In fact, 20 is above Perplexity's 14.0 average. You can't compare raw scores across platforms without knowing each platform's baseline.
Why This Happens
Four different models, trained on different data, using different retrieval systems, making independent decisions about what to recommend. The disagreement is structural.
Training data differences. Each model was trained on a different corpus. Claude's training data emphasises authoritative, well-structured content. Gemini has the deepest integration with Google's web index. Perplexity prioritises real-time web search results. ChatGPT has the broadest training set. A brand with strong Wikipedia and review site presence might score high on Claude. A brand with recent, well-indexed web content might score high on Perplexity.
Source weighting. When you ask "best HR software in India," each platform weighs different source types. Our analysis of AI citation sources showed that only 4% of citations come from brand websites — 96% come from third-party sources. But which third-party sources matter varies by platform. LinkedIn content carries weight on ChatGPT. Technical documentation and structured content perform well on Claude. Review aggregators and comparison articles drive Perplexity results.
Retrieval architecture. Perplexity is search-first — it retrieves and synthesises live web results. Claude (in the Cited Index) is tested without search grounding — pure model knowledge. ChatGPT and Gemini sit between these extremes. These architectural differences mean the same prompt returns fundamentally different citation pathways.
Keka HR has strong authority content and structured technical documentation — exactly what Claude's model knowledge favours. Perplexity, running live web searches, surfaces a different set of sources entirely. The 85-point gap between them isn't surprising once you understand the plumbing.
The Concentration Risk Problem
Platform bias creates a second problem: concentration risk. When most of a brand's AI visibility sits on a single platform, the next model update becomes a business risk.
We measured concentration risk using the coefficient of variation (CV) across each brand's four platform scores. A CV above 0.8 means extreme concentration — the brand is essentially dependent on one platform.
77 brands in the Cited Index have a CV above 0.8. That's 38% of the entire dataset.
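The CV is just standard deviation divided by mean across a brand's four platform scores. A minimal sketch, assuming population (not sample) standard deviation — which reproduces the Mokobara figure in the table below to within rounding:

```python
import statistics

def concentration_cv(scores: list[float]) -> float:
    """Coefficient of variation across one brand's platform scores."""
    mean = statistics.fmean(scores)
    if mean == 0:
        return 0.0  # no visibility anywhere: nothing to concentrate
    return statistics.pstdev(scores) / mean

# Mokobara: ChatGPT 33, Gemini 79, Claude 0, Perplexity 4
cv = concentration_cv([33, 79, 0, 4])
print(round(cv, 2))  # well above the 0.8 extreme-concentration threshold
```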
The most extreme cases:
| Brand | Category | Dependent On | CV | Platform Scores |
|---|---|---|---|---|
| Carlton | Travel & Luggage | ChatGPT | 1.31 | ChatGPT: 33, Gemini: 0, Claude: 8, Perplexity: 0 |
| Zimyo | HR & Payroll | Perplexity | 1.31 | ChatGPT: 5, Gemini: 0, Claude: 0, Perplexity: 20 |
| Aristocrat | Travel & Luggage | ChatGPT | 1.17 | ChatGPT: 50, Gemini: 0, Claude: 8, Perplexity: 8 |
| Mokobara | Travel & Luggage | Gemini | 1.08 | ChatGPT: 33, Gemini: 79, Claude: 0, Perplexity: 4 |
| Foxtale | Skincare & Beauty | ChatGPT | 0.92 | ChatGPT: 64, Gemini: 8, Claude: 20, Perplexity: 8 |
Mokobara is a case we've tracked closely. In our India D2C Travel Benchmark, Mokobara had the highest overall visibility at 70% — but that number masked a Gemini dependency that persists in the May data. If Google updates Gemini's training data or ranking logic, Mokobara's AI visibility could drop from 79 to near zero overnight. No amount of website optimisation changes that — it's a source authority problem at Layer 3 of the 3-Layer AI Visibility Stack.
Foxtale shows a different pattern. 64% of its visibility comes from ChatGPT. It also blocks all 10 AI crawlers we test in Crawl Radar. So its visibility is entirely third-party-driven AND concentrated on a single platform. That's a double fragility — technical access blocked at Layer 1, visibility dependent on one platform at Layer 3.
What the Bias Means for Your Strategy
Three takeaways.
1. Stop treating "AI visibility" as a single number. Your Cited Index score is an average across four platforms. A score of 50 could mean 50 across all four — or 95 on ChatGPT and 5 on everything else. The first brand is resilient. The second is one update away from collapse. Look at the per-platform breakdown, not the composite.
2. Diagnose per platform before you optimise. If your brand scores 80 on ChatGPT and 10 on Claude, "do more SEO" won't help. You need to understand why Claude specifically doesn't know about you. Claude's model knowledge draws heavily from well-structured, authoritative content — technical docs, whitepapers, and definitive articles. Perplexity draws from live search results — fresh, well-indexed content with strong backlinks. Different platforms reward different content types. Your strategy should reflect that.
3. Concentration risk is a leading indicator. When 38% of brands depend on a single platform for the majority of their AI visibility, model updates become business risks. The brands with the lowest concentration risk in our data — Minimalist, Zoho CRM, American Tourister — are the same ones with the strongest overall scores. They score above 50 on every platform. That pattern holds across the dataset: brands with diversified visibility tend to have higher visibility overall, because the underlying authority is broad enough for multiple models to pick up on it.
How to Check Your Own Platform Breakdown
If you're in the Cited Index, your per-platform breakdown is already public. Look at your scores on ChatGPT, Gemini, Claude, and Perplexity individually. Calculate the gap between your best and worst platform. If it's above 40 points, you have a platform bias problem. If more than 60% of your visibility comes from one platform, you have a concentration risk.
For brands not in the Index, you can run the same prompts manually across all four platforms and compare. The prompt library matters — use non-branded, category-level prompts like "best [your category] in India" rather than branded searches that will always mention you.
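The two thresholds above can be sketched as a quick self-check. This is illustrative code, not a Cited tool — the `diagnose` helper is an assumption for demonstration, and the example reuses Mokobara's row from the concentration table:

```python
def diagnose(scores: dict[str, float]) -> dict[str, bool]:
    """Flag the two risks described above for one brand's scores."""
    values = list(scores.values())
    total = sum(values)
    gap = max(values) - min(values)                    # best minus worst
    top_share = max(values) / total if total else 0.0  # biggest platform's slice
    return {
        "platform_bias": gap > 40,               # best-worst gap above 40 points
        "concentration_risk": top_share > 0.60,  # one platform holds >60%
    }

# Mokobara's scores from the concentration table above:
print(diagnose({"ChatGPT": 33, "Gemini": 79, "Claude": 0, "Perplexity": 4}))
# -> {'platform_bias': True, 'concentration_risk': True}
```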
A brand invisible on Claude needs different content than a brand invisible on Perplexity. The gap between your best and worst platform tells you where to look. The sources each platform trusts — and where your brand is absent from those sources — tell you what to do about it.
FAQ
Why do AI platforms give different brand recommendations?
Each AI platform — ChatGPT, Gemini, Claude, and Perplexity — was trained on different data, uses different retrieval architectures, and weights different source types. ChatGPT has the broadest training data. Claude emphasises authoritative structured content. Perplexity relies heavily on real-time web search. Gemini integrates with Google's web index. These architectural differences mean the same prompt produces different brand recommendations across platforms.
What is AI platform bias in brand visibility?
AI platform bias refers to the systematic differences in how AI platforms recommend brands. According to Cited's analysis of 202 Indian brands, the top 15 brands by platform disparity show gaps of 55 to 85 points. The largest gap observed is 85 points — Keka HR scores 100 on Claude but only 15 on Perplexity.
How can I check if my brand has platform bias?
Run your category's key prompts across ChatGPT, Gemini, Claude, and Perplexity. Compare your mention rate on each platform. If the gap between your highest and lowest platform exceeds 40 points, or if more than 60% of your visibility comes from a single platform, you have a platform concentration risk. The Cited Index provides this breakdown for 202 Indian brands at getcited.in/cited-index.
Which AI platform is hardest for brands to score on?
Perplexity is consistently the hardest platform, with an average brand score of 14.0 across 202 brands — compared to ChatGPT's 20.7. Perplexity's top brand score is 85, while other platforms have brands scoring 95-100. Perplexity's reliance on real-time search results means it requires both strong web presence and fresh content.
What is concentration risk in AI visibility?
Concentration risk measures how dependent a brand's AI visibility is on a single platform. It's calculated using the coefficient of variation across platform scores. A CV above 0.8 indicates extreme concentration — the brand would lose most of its AI visibility if that one platform changed its model or ranking logic. 38% of brands in the Cited Index have extreme concentration risk.