Getting Started with GEO

How to Check If Your Brand Appears in ChatGPT

A step-by-step guide to checking whether ChatGPT, Perplexity, and Gemini mention your brand — including the 6 prompt types to test, what to look for, and how to automate it.

You can check whether ChatGPT mentions your brand by running non-branded, shopping-intent prompts and examining the responses for mentions, recommendations, and citations. The key is to use prompts that a real consumer would ask — not prompts that include your brand name (which will always return results about you). This guide covers the manual method, the 6 prompt types to test, what to look for in the responses, and how to scale this across multiple AI platforms.

Most brands have never checked whether AI mentions them at all. The ones that have typically run one or two prompts in ChatGPT and stopped. That's not enough. AI responses are non-deterministic — the same prompt can produce different results each time. You need a systematic approach.

The Manual Method: 6 Prompt Types to Test

The most important rule: never include your brand name in the prompt. Asking "Is [Brand] a good product?" will always mention your brand. That tells you nothing about organic AI visibility. Instead, use non-branded prompts that mirror how real consumers ask AI for recommendations.

Here are the 6 prompt types to test, with examples for a hypothetical skincare brand:

1. Problem-First Prompts

These start with a problem the consumer has, not a product category.

  • "My skin gets very dry in winter, what should I use?"
  • "I have oily skin and keep getting breakouts, what moisturizer works?"
  • "What helps with dark circles under eyes?"

2. Context-Specific Prompts

These add geographic, demographic, or situational context.

  • "Best moisturizer for dry skin available in India"
  • "Affordable skincare routine for a 25-year-old in Mumbai"
  • "Dermatologist-recommended face wash for sensitive skin in India"

3. Budget-Anchored Prompts

These include price constraints that force AI to make specific recommendations.

  • "Best face serum under ₹1000"
  • "Good moisturizer between ₹500 and ₹1500 in India"
  • "Premium skincare brand worth the price in India"

4. Comparison Prompts

These ask AI to compare brands or products directly.

  • "Compare the top 5 skincare brands in India"
  • "Which is better for sensitive skin — niacinamide or vitamin C serum?"
  • "Best Indian skincare brands vs international ones"

5. Recommendation-Seeking Prompts

These are direct "what should I buy" queries.

  • "Recommend a good night cream for anti-aging"
  • "What's the best sunscreen for Indian skin?"
  • "Top 3 face washes for acne-prone skin"

6. Feature-Curious Prompts

These ask about specific ingredients, features, or attributes.

  • "Which Indian skincare brands use hyaluronic acid?"
  • "Brands that offer fragrance-free moisturizers in India"
  • "Which sunscreen brand has the best SPF 50 formula?"

How to Run the Test

For each prompt type, write 3-4 specific prompts relevant to your product category. That gives you 18-24 prompts total. Then:

  1. Open a new chat session in ChatGPT (don't reuse an existing conversation — context from prior messages skews results)

  2. Paste the prompt and read the full response

  3. Record the result in a spreadsheet with these columns:

    • Platform (ChatGPT, Perplexity, Gemini)
    • Prompt text
    • Was your brand mentioned? (Yes/No)
    • Position (1st, 2nd, 3rd recommendation, etc.)
    • Sentiment (Positive, Neutral, Negative)
    • Was a link to your site included? (Yes/No — relevant for Perplexity)
    • What other brands were mentioned?

  4. Repeat across platforms — run the same prompts on Perplexity and Gemini. Platform variance is massive. We've seen brands mentioned in 70%+ of Perplexity responses but 0% on Gemini.

  5. Run it twice — AI responses are non-deterministic. Run your full prompt set at least twice on different days to see how consistent the results are.
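The recording step above can be sketched in Python. This is an illustrative helper, not part of any tool mentioned here: the function name, column set, and sample brands (GlowLab, DermaPure, SkinCo) are all made up for the example. It turns one raw response into one spreadsheet row using simple substring checks, which is roughly what you'd do by eye.

```python
import csv
import io
import re

def analyze_response(platform, prompt, response_text, brand, competitors):
    """Build one audit-spreadsheet row from a single AI response.
    Substring matching is crude but mirrors the manual check."""
    low = response_text.lower()
    mentioned = brand.lower() in low
    # A link that contains the brand name suggests a citation (Perplexity).
    linked = bool(re.search(r"https?://\S*" + re.escape(brand.lower()), low))
    rivals = [c for c in competitors if c.lower() in low]
    return {
        "platform": platform,
        "prompt": prompt,
        "mentioned": "Yes" if mentioned else "No",
        "link_included": "Yes" if linked else "No",
        "other_brands": ", ".join(rivals),
    }

# One hypothetical Perplexity response, checked for a hypothetical brand.
row = analyze_response(
    platform="Perplexity",
    prompt="Best face serum under ₹1000",
    response_text="Top picks: 1. GlowLab Serum (glowlab.in) 2. DermaPure",
    brand="GlowLab",
    competitors=["DermaPure", "SkinCo"],
)

# Append rows like this to a CSV that matches the columns in step 3.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=row.keys())
writer.writeheader()
writer.writerow(row)
print(row["mentioned"], row["other_brands"])
```

Position and sentiment still need a human read of the full response; the script only automates the bookkeeping.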

What to Look For in the Responses

Not all mentions are equal. When your brand appears, pay attention to four things:

Mention Type

  • Direct recommendation — "I recommend [Brand]" — the strongest signal
  • List inclusion — Your brand appears in a list of recommendations — good but weaker
  • Passing mention — "Brands like [Brand] also offer..." — minimal impact
  • Citation — Perplexity links directly to your content — very strong for traffic
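A rough first-pass sort of the four mention types can be scripted. This heuristic only catches the obvious phrasings (the patterns and brand names are assumptions for the sketch); ambiguous responses still need human review.

```python
import re

def classify_mention(response_text, brand):
    """Heuristic sort into the four mention types; 'none' if absent."""
    text = response_text.lower()
    b = re.escape(brand.lower())
    if brand.lower() not in text:
        return "none"
    # A markdown link wrapping the brand name suggests a citation.
    if re.search(r"\[[^\]]*" + b + r"[^\]]*\]\(http", text):
        return "citation"
    if re.search(r"(i recommend|i'd recommend|my top pick is)[^.]*" + b, text):
        return "direct recommendation"
    if re.search(r"brands like[^.]*" + b, text):
        return "passing mention"
    # Mentioned, but none of the stronger patterns matched.
    return "list inclusion"

print(classify_mention("I recommend GlowLab for dry skin.", "GlowLab"))
# → direct recommendation
```

Anything the patterns miss falls through to "list inclusion", so treat the output as a triage label, not a verdict.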

Position

Where you appear in the response matters. First recommendation carries significantly more weight than fifth. If you're consistently appearing at position 4-5, you're technically "visible" but practically invisible — most users only consider the first 2-3 options.
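When a response is formatted as a numbered list (common for recommendation prompts), position can be extracted mechanically. A minimal sketch, assuming the response uses "1.", "2." style numbering; the brand names are invented:

```python
import re

def brand_position(response_text, brand):
    """Position of a brand in a numbered-list response (1 = first pick).
    Returns None when the brand is absent or the response isn't a list."""
    for num, item in re.findall(r"(\d+)\.\s+([^\n]+)", response_text):
        if brand.lower() in item.lower():
            return int(num)
    return None

response = "Top moisturizers:\n1. DermaPure Rich Cream\n2. GlowLab Hydra\n3. SkinCo Daily"
print(brand_position(response, "GlowLab"))  # → 2
```

A brand stuck at position 4 or 5 across runs is the "technically visible, practically invisible" case described above.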

Sentiment and Framing

AI doesn't just mention brands — it describes them. Is it calling your brand "premium," "affordable," "design-forward," or "overpriced"? These framing tags shape consumer perception before they ever visit your website. A negative framing in position 1 can be worse than not being mentioned at all.

Competitor Presence

Document which competitors appear alongside you (or instead of you). This reveals your share of voice in AI-generated answers and identifies which competitors have stronger AI visibility than you.
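Share of voice is just the fraction of responses mentioning each brand. A small sketch over raw response strings (sample data invented):

```python
from collections import Counter

def share_of_voice(responses, brands):
    """Fraction of responses that mention each brand at least once."""
    counts = Counter()
    for text in responses:
        low = text.lower()
        for b in brands:
            if b.lower() in low:
                counts[b] += 1
    total = len(responses)
    return {b: counts[b] / total for b in brands}

responses = [
    "1. GlowLab 2. DermaPure",
    "DermaPure is the best-known option.",
    "Try GlowLab or SkinCo.",
    "SkinCo leads on price.",
]
print(share_of_voice(responses, ["GlowLab", "DermaPure", "SkinCo"]))
# each brand appears in 2 of 4 responses here, so 0.5 across the board
```

Run this per prompt type as well as overall: a competitor may own comparison prompts while you own budget-anchored ones.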

Scaling Beyond Manual Checks

Manual checking works for an initial baseline, but it doesn't scale. AI responses change with every model update, training data refresh, and web search result. A brand mentioned today may disappear tomorrow.

For ongoing monitoring, you have two options:

Option 1: Scheduled manual audits. Run your prompt set monthly. Track changes over time. This works for brands just getting started with AI visibility.

Option 2: Automated AI visibility monitoring. Tools like Cited run your prompt library across multiple AI platforms on a schedule, tracking mention rate, position, sentiment, and share of voice automatically. We built Cited to eliminate the manual spreadsheet and replace it with a dashboard that updates weekly.

Cross-Platform Checking Is Non-Negotiable

Testing only ChatGPT gives you an incomplete picture. Each AI platform pulls from different sources and weights different signals:

  • ChatGPT relies heavily on training data and web browsing for some queries
  • Perplexity searches the web for every query and always cites its sources
  • Gemini leverages Google's search index, making Google SEO performance highly correlated
  • Claude tends to be more conservative with recommendations, requiring stronger evidence
  • Grok pulls from X/Twitter data and real-time web sources

A brand that appears in 80% of ChatGPT responses might appear in only 20% of Perplexity responses — or vice versa. Platform-specific gaps are where the biggest opportunities hide.
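Once your spreadsheet spans platforms, surfacing gaps like the 80%-vs-20% case above is a one-liner aggregation. A sketch over (platform, mentioned) pairs pulled from the tracking sheet; the numbers are illustrative:

```python
from collections import defaultdict

def mention_rate_by_platform(rows):
    """rows: (platform, mentioned: bool) pairs from the audit spreadsheet."""
    seen = defaultdict(int)
    hits = defaultdict(int)
    for platform, mentioned in rows:
        seen[platform] += 1
        hits[platform] += mentioned  # bool counts as 0 or 1
    return {p: hits[p] / seen[p] for p in seen}

rows = [
    ("ChatGPT", True), ("ChatGPT", True), ("ChatGPT", False), ("ChatGPT", True),
    ("Perplexity", False), ("Perplexity", True), ("Perplexity", False), ("Perplexity", False),
]
print(mention_rate_by_platform(rows))  # ChatGPT 0.75, Perplexity 0.25
```

The platform with the lowest rate relative to your category peers is usually where the quickest wins are.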

Common Patterns We See

After running thousands of prompts across hundreds of brands, here are the patterns we see most frequently:

  1. Google-dominant brands underperform on ChatGPT. Brands with strong Google SEO but thin content often rank well on Gemini but poorly on ChatGPT, which relies more on training data and diverse source citations.

  2. D2C brands with Reddit presence overperform. Brands frequently discussed in Reddit threads tend to appear more often across all AI platforms. Reddit is one of the most-cited domains by Perplexity and Google AI Overviews.

  3. Brands with structured comparison pages win. Companies that publish their own "vs" pages and comparison tables give AI platforms extractable data to cite. This is one of the highest-leverage content types for AI visibility.

  4. International brands show India-specific content gaps. Global brands often appear in English-language AI queries but disappear when India-specific context is added ("best moisturizer in India"). Localised content is essential.

Key Takeaways

  • Always use non-branded prompts — including your brand name defeats the purpose
  • Test 6 prompt types: problem-first, context-specific, budget-anchored, comparison, recommendation-seeking, and feature-curious
  • Run your full set of 18-24 prompts across ChatGPT, Perplexity, and Gemini for a reliable baseline
  • Track mention type, position, sentiment, and competitor presence — not just yes/no
  • Platform variance is massive — a brand visible on Perplexity can be invisible on ChatGPT
  • Run tests at least twice on different days, since AI responses are non-deterministic

Frequently Asked Questions

Why does ChatGPT sometimes mention my brand and sometimes not?
AI responses are non-deterministic — the same prompt can produce different answers each time. Temperature settings, model updates, and real-time web search results all introduce variability. This is why single manual checks are unreliable. You need to run the same prompt multiple times across sessions to get a reliable signal.
Should I check ChatGPT or Perplexity first?
Start with ChatGPT (largest user base) and Perplexity (fastest-growing, most search-dependent). If your brand appears on neither, you have a visibility gap. If it appears on one but not the other, you have a platform-specific content gap that's worth investigating.
How many prompts do I need to test?
For a reliable baseline, test at least 18 non-branded prompts across the 6 prompt types (three per type). Fewer than 10 gives you anecdotal data, not a pattern. Cited's free audit runs 20 prompts across 3 platforms as a starting benchmark.
My brand appears but with incorrect information. What should I do?
This is actually common and important to catch early. AI can associate your brand with wrong product categories, outdated pricing, or incorrect features. Document every instance of misinformation — this is your correction priority list. The fix usually involves updating your website content to explicitly state the correct information in a structured, extractable format.

Check how AI-ready your website is

Free GEO Score →