GEO Explainer · 8 min read

Why We Don't Chase Prompt Volume (And What We Track Instead)

By Salman Shaikh, Cited

Every week, someone asks us the same question:

"How do I know which AI prompts to focus on? Can you tell me the volume?"

It's a fair question. If you've spent any time in SEO, it's the obvious one to ask. You open Ahrefs, sort by volume, and start there. It's how the entire search industry has worked for 20 years — but as we've covered in our GEO vs SEO breakdown, the playbook is fundamentally different.

But GEO doesn't work that way. And if you're building your AI visibility strategy around "prompt volume," you're building on sand.

Prompt volume data simply doesn't exist. Not from ChatGPT. Not from Perplexity. Not from Gemini. Not from Claude. None of these platforms exposes prompt frequency data. What some GEO tools call "prompt volume" is a modeled estimate, not a measurement. And the gap between those two things is enormous.

NP Digital's SVP of Earned Media, Nikki Lam, published a piece on this recently and put it plainly: what platforms sell as prompt volume is modeled, estimated, and often directionally wrong. We agree. But we'd go further — the problem isn't just data quality. The problem is that volume is the wrong lens entirely.

Here's why.

AI Prompts Aren't Keywords — They're Conversations

In Google, 10,000 people might search "best laptop bag for travel." That query has volume because it's repeated identically.

In ChatGPT, the same intent sounds like:

  • "I need a durable bag for my MacBook for a 2-week Europe trip"
  • "What's a good laptop bag that doesn't look corporate?"
  • "Compare laptop bags under ₹15,000 for frequent flyers"

Same intent. Dozens of phrasings. No single "prompt" to attach volume to.

This isn't a data collection problem. It's a structural difference in how people interact with AI versus a search box. AI search is conversational by nature, and conversations don't repeat the same way keyword searches do.

The Results Are Non-Deterministic — By Design


Even if you could measure exact prompt frequency, the responses you'd get are unstable.

A January 2026 study by SparkToro and Gumshoe.ai tested 2,961 prompts across 600 real users on ChatGPT, Claude, and Google AI. Their finding: there is less than a 1-in-100 chance of getting the same brand list in any two responses, and less than a 1-in-1,000 chance of getting the same list in the same order.

Read that again. The same question, asked twice, almost never produces the same answer.

Rand Fishkin's conclusion was blunt: any tool that gives you a "ranking position in AI" is "full of baloney."

This has massive implications. If the outputs are this variable, then a single snapshot of "are we mentioned for this prompt?" tells you almost nothing. You need repeated measurement to establish whether your visibility is consistent or just a coin flip.
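The coin-flip point can be made concrete with a small simulation. Assuming, purely for illustration, two brands whose true appearance rates are 80% and 20%, a single-snapshot check frequently ties or misranks them, while ten repeated runs almost always rank them correctly:

```python
import random

random.seed(7)

def appears(p):
    """One simulated AI run: the brand is mentioned with probability p."""
    return random.random() < p

# Hypothetical brands: A truly appears in 80% of runs, B in 20%.
P_A, P_B = 0.8, 0.2

# A single snapshot often can't tell them apart (both appear, or neither,
# or B happens to appear while A doesn't).
snapshot_ties_or_flips = sum(
    appears(P_B) >= appears(P_A) for _ in range(10_000)
) / 10_000

def mention_rate(p, runs=10):
    """Appearance rate measured over repeated runs."""
    return sum(appears(p) for _ in range(runs)) / runs

# Ten repeated runs per brand separate them almost every time.
repeated_correct = sum(
    mention_rate(P_A) > mention_rate(P_B) for _ in range(10_000)
) / 10_000

print(f"single snapshot ties or misranks: {snapshot_ties_or_flips:.0%}")
print(f"10-run measurement ranks correctly: {repeated_correct:.0%}")
```

Under these assumed rates, roughly a third of single snapshots are uninformative or misleading, while the 10-run comparison is right nearly every time. The exact numbers are illustrative; the shape of the result is the point.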

Citation Drift Makes Point-in-Time Data Unreliable

Profound, the best-funded GEO platform in the market ($155M raised, valued at $1B after its February 2026 Series C), measured citation drift across AI platforms. Their volatility research found that 40-60% of cited sources change month over month, and 70-90% drift over six months — even for identical prompts. A brand that shows up in March might vanish in April with no change to their content or strategy.

This means any "prompt volume" data you capture today may look completely different next month. Building a content calendar around it is like planning a road trip using a map that redraws itself every 30 days.

Panel-Based and API-Based Data Both Have Blind Spots

Some platforms rely on opt-in consumer panels to source prompt data. While the scale sounds impressive — hundreds of millions of prompts per month — the opt-in nature means the sample skews toward tech-savvy, engaged users. Not a representative cross-section of how the general population actually uses AI tools.

Other tools send prompts to AI models via API to simulate real usage. But API results can differ from what users see in the actual ChatGPT or Perplexity interface, so API-based tracking doesn't always align with real human behaviour.

Neither method is "wrong." But neither gives you the reliable, repeatable signal that Google Search Console gives for traditional search. We're in a fundamentally different measurement paradigm.

Nobody has complete visibility into LLM impact on their business today. The tools that took 15 years to mature for SEO — Semrush, Ahrefs, Moz — don't exist yet for AI search. Any vendor promising complete visibility is overselling. Current tracking data should be treated as directional and useful for decisions, but not definitive.

That's the honest reality. And it's the starting point for building a strategy that actually works.

What We Track Instead at Cited

At Cited, we've built our methodology around signals that hold up in a non-deterministic environment. Here's what we measure and why.

|                     | SEO Volume Approach                        | Cited's GEO Approach                                            |
|---------------------|--------------------------------------------|-----------------------------------------------------------------|
| What you measure    | Keyword search volume (exact, repeatable)  | Prompt intent coverage (conversational, variable)               |
| Data source         | Google Search Console, Ahrefs, Semrush     | Multi-run AI responses across ChatGPT, Gemini, Perplexity       |
| Ranking signal      | Position 1-10 on a SERP                    | Mention consistency across repeated runs (GEO Confidence Score) |
| Competitive insight | Share of voice by keyword                  | Share of mentions by intent cluster and funnel stage            |
| Stability           | Deterministic — same query, same results   | Non-deterministic — same prompt, different results every time   |
| Maturity            | 15+ years of tooling                       | Pre-Semrush era — directional, not definitive                   |

1. Mention Consistency, Not Mention Count

We don't just check "did the brand appear?" once. We run prompts multiple times across platforms to measure how consistently a brand shows up. A brand that appears in 8 out of 10 runs has meaningfully different visibility than one that appears in 2 out of 10 — even if a single snapshot would show them both as "mentioned."

This is what we call the GEO Confidence Score — a measure of how reliably your brand appears across repeated runs. It's a stability metric, not a vanity metric. And the SparkToro data validates why this matters — with less than 1% chance of identical results across two runs, single-snapshot measurement is effectively noise.
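Cited hasn't published the exact formula behind the GEO Confidence Score, so the sketch below is an assumption: treat it as the share of repeated runs in which the brand is mentioned, overall and per platform. The run log and brand names are invented for illustration:

```python
from collections import defaultdict

# Hypothetical run log: (platform, brands mentioned in that run).
# Brand names are invented for illustration.
runs = [
    ("chatgpt",    ["GlowLab", "DermaCo", "Plum"]),
    ("chatgpt",    ["DermaCo", "Minimalist"]),
    ("chatgpt",    ["GlowLab", "DermaCo"]),
    ("gemini",     ["DermaCo"]),
    ("gemini",     ["GlowLab", "DermaCo", "Minimalist"]),
    ("perplexity", ["DermaCo", "Plum"]),
]

def confidence_score(runs, brand):
    """Share of runs mentioning `brand`, overall and per platform."""
    total, hits = 0, 0
    per_platform = defaultdict(lambda: [0, 0])  # platform -> [hits, runs]
    for platform, brands in runs:
        mentioned = brand in brands
        total += 1
        hits += mentioned
        per_platform[platform][0] += mentioned
        per_platform[platform][1] += 1
    overall = hits / total
    by_platform = {p: h / n for p, (h, n) in per_platform.items()}
    return overall, by_platform

overall, by_platform = confidence_score(runs, "GlowLab")
print(f"overall: {overall:.0%}")          # mentioned in 3 of 6 runs
for platform, score in sorted(by_platform.items()):
    print(f"{platform}: {score:.0%}")
```

The useful property is that the score degrades gracefully with non-determinism: a brand at 8-of-10 and a brand at 2-of-10 get clearly different numbers, where a single snapshot would call both "mentioned".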

2. Purchase-Intent Prompt Classification

Not all prompts matter equally. "What skincare ingredients help with hyperpigmentation?" is informational. "Best vitamin C serum for daily use under ₹1,000" is a decision-stage prompt where brand mentions directly influence purchases.

We classify every prompt in our tracking framework by funnel stage — discovery, comparison, and decision. The prompts you should care about most are the ones where someone is actively choosing between options and your competitors are getting mentioned while you're not.

| Prompt Type | Example                                                                   | Why It Matters                                                   |
|-------------|---------------------------------------------------------------------------|------------------------------------------------------------------|
| Discovery   | "what skincare ingredients help with hyperpigmentation"                   | Educational intent, low conversion — but shapes future preference |
| Comparison  | "which type of vitamin C serum works best for oily skin in humid weather" | Active evaluation, high influence on brand preference            |
| Decision    | "best vitamin C serum under ₹1,000 for daily use"                         | Purchase-ready, highest conversion value                         |
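A funnel-stage classifier in this spirit can be sketched with simple pattern cues. Cited's actual classification method isn't described here, so the regexes below are illustrative heuristics only (comparison cues are checked before decision cues so that "which … works best" isn't swallowed by the "best" cue):

```python
import re

# Rough heuristic cues per funnel stage — illustrative only, not
# Cited's actual classifier. Order matters: comparison before decision.
STAGE_CUES = [
    ("comparison", re.compile(r"\bvs\b|\bcompare\b|\bwhich\b")),
    ("decision",   re.compile(r"\bbest\b|\bunder\b|₹|\bbuy\b")),
    ("discovery",  re.compile(r"\bwhat\b|\bhow\b|\bwhy\b")),
]

def classify(prompt):
    """Assign a prompt to the first funnel stage whose cues match."""
    text = prompt.lower()
    for stage, pattern in STAGE_CUES:
        if pattern.search(text):
            return stage
    return "discovery"  # default: assume educational intent

prompts = [
    "what skincare ingredients help with hyperpigmentation",
    "which type of vitamin C serum works best for oily skin in humid weather",
    "best vitamin C serum under ₹1,000 for daily use",
]
for p in prompts:
    print(f"{classify(p):<10} {p}")
```

Real classification would likely use an LLM or embeddings rather than regexes, but even this toy version reproduces the three-row table above.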

3. Competitive Gap Analysis Over Absolute Position

"Ranking #3 in AI" is meaningless. But "your top 3 competitors appear in 70% of decision-stage prompts and you appear in 15%" — that's actionable.

We benchmark brands against their category competitors across every platform and prompt cluster — you can see what this looks like for Indian D2C brands on our Cited Index. The output isn't a vanity "AI ranking." It's a gap map that tells you exactly where you're losing and how fixable each gap is.
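That kind of gap map can be computed directly from repeated-run data. A minimal sketch, assuming a hypothetical audit log of which brands each run mentioned (brand names invented):

```python
from collections import defaultdict

# Hypothetical audit log: (funnel_stage, brands mentioned in one run).
runs = [
    ("decision",  ["DermaCo", "Minimalist"]),
    ("decision",  ["DermaCo", "Plum", "GlowLab"]),
    ("decision",  ["Minimalist", "DermaCo"]),
    ("decision",  ["DermaCo"]),
    ("discovery", ["GlowLab", "DermaCo"]),
    ("discovery", ["Plum"]),
]

def gap_map(runs, brand):
    """Per funnel stage: your appearance rate vs the best competitor's."""
    stage_runs = defaultdict(int)
    mentions = defaultdict(lambda: defaultdict(int))  # stage -> brand -> hits
    for stage, brands in runs:
        stage_runs[stage] += 1
        for b in set(brands):
            mentions[stage][b] += 1
    report = {}
    for stage, n in stage_runs.items():
        own = mentions[stage].get(brand, 0) / n
        rivals = {b: h / n for b, h in mentions[stage].items() if b != brand}
        top_rival, top_rate = max(rivals.items(), key=lambda kv: kv[1])
        report[stage] = {"you": own, "top_rival": top_rival,
                         "gap": top_rate - own}
    return report

for stage, row in gap_map(runs, "GlowLab").items():
    print(f"{stage}: you {row['you']:.0%}, "
          f"{row['top_rival']} leads by {row['gap']:.0%}")
```

The output is exactly the shape of insight the paragraph above describes: not a rank, but "who is ahead of you, at which funnel stage, and by how much".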

4. Intent Coverage, Not Keyword Coverage

Instead of tracking individual prompts, we think in prompt clusters — groups of prompts that circle the same purchase intent from different angles.

For an Indian D2C skincare brand, the cluster might include "best vitamin C serum for Indian skin," "affordable anti-aging routine for humid climate," "which vitamin C concentration works best for dark spots," and "dermatologist-recommended serums for hyperpigmentation." These are all variations of the same underlying intent. If you're visible across 80% of that cluster, you have strong intent coverage. If you're only visible for one phrasing, you have a gap.

This cluster-based approach is more resilient than tracking individual prompts. It doesn't break when someone phrases a question differently, and it maps directly to how AI models reason about brand relevance across a topic.
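Coverage over a cluster then reduces to a visible fraction. A toy sketch using the example cluster above, with per-prompt visibility flags invented for illustration (in practice each flag would come from repeated runs, e.g. "mentioned in a majority of runs"):

```python
# Hypothetical cluster audit: prompt -> was the brand visible for it?
cluster = {
    "best vitamin C serum for Indian skin": True,
    "affordable anti-aging routine for humid climate": False,
    "which vitamin C concentration works best for dark spots": True,
    "dermatologist-recommended serums for hyperpigmentation": True,
}

def intent_coverage(cluster):
    """Fraction of prompts in the cluster where the brand is visible."""
    return sum(cluster.values()) / len(cluster)

print(f"intent coverage: {intent_coverage(cluster):.0%}")
gaps = [prompt for prompt, visible in cluster.items() if not visible]
print("gaps:", gaps)
```

Because the metric averages over phrasings, one reworded question shifts it only slightly — which is the resilience the paragraph below describes.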

Start With Your ICP, Not a Dashboard

NP Digital's article makes an excellent point that we fully endorse: start with your Ideal Customer Profile, not a prompt volume dashboard. What are your best customers asking AI? What language do they use? What objections do they raise?

Your sales call recordings, support tickets, customer reviews on Amazon and Nykaa, Reddit threads, and LinkedIn comments — these contain the exact phrasing real buyers use when they're stuck, skeptical, or evaluating options. That language belongs in your content and ultimately in AI answers.

If your D2C brand's customer support team hears "does this work for sensitive skin?" every week, there's a strong chance someone is asking an AI tool the same question. That's a far more reliable content brief than a prompt volume number from a vendor-curated prompt list.

The Right Role for GEO Tools

None of this means abandoning GEO tracking tools entirely. Used correctly, they're genuinely useful for directional awareness: spotting topic gaps, monitoring whether your brand is appearing in the right conversations, and tracking competitive share of voice over time.

The mistake is using them as a keyword volume substitute and letting estimates drive what you create. Let your ICP, audience research, and real customer conversations tell you what to optimise for. Then use tracking data to pressure-test and monitor — not to decide.

This is exactly how Cited is built. Our prompt banks are designed around category-specific intent patterns — not generic "AI keyword" databases. When we run an audit for an Indian D2C brand, the prompts reflect how real Indian consumers talk, not how a tool thinks they should talk.

The Bottom Line

Prompt volume is an SEO concept duct-taped onto a fundamentally different paradigm. It feels familiar, but it's misleading.

The brands winning in AI search aren't the ones chasing the highest-volume prompts. They're the ones who understand what their customers are actually asking AI, show up consistently in those conversations, and close the gaps where competitors are getting mentioned instead.

That's what GEO strategy should be built on. Not estimated volume. Not AI rankings. Competitive intent coverage, measured with statistical rigour, tracked over time.

The tools to measure AI search perfectly don't exist yet. But the tools to act intelligently on what we can measure? Those are here now.

Want a quick read on your AI-readiness? Run your website through our free GEO Score scanner — it checks 15 signals in under a minute, no login required. And when you're ready for the full picture, get a free GEO audit — we'll run 20 prompts across ChatGPT, Gemini, and Perplexity, and show you where you're visible, where you're not, and what to fix first.


Salman Shaikh

Former SEO nerd. Recovering big-tech PM. Currently losing sleep over whether your brand exists in an AI answer — and building tools to find out. Cited is the company. The AI Shelf is the newsletter. The obsession is real.

Free Report

See how your brand appears in AI answers — free

We'll run 20 prompts across 3 AI platforms and send your AI Visibility Report within 24 hours.

Get Your Free Report →