GEO Explainer · 8 min read

If llms.txt Doesn't Drive Citations, Why Did Anthropic, Cloudflare, and Stripe Implement It?

By Salman Shaikh, Cited

Trakkr just published the most comprehensive study on llms.txt and AI citations to date. Their conclusion: zero correlation. None. A p-value of 0.85 across 337,000+ citations — statistically indistinguishable from noise.

The same week, Mintlify rolled out automatic llms.txt generation for every documentation site on its platform. Thousands of sites, overnight. Anthropic — the company that built Claude — already has both llms.txt and llms-full.txt on its developer docs. Cloudflare's llms.txt runs thousands of lines covering 20+ products. Stripe, Vercel, Supabase, Zapier, Cursor, Windsurf — all implemented.

So either these engineering teams didn't do their homework, or Trakkr is measuring something different from what these companies are optimizing for.

Both sides are right. They're just answering different questions.

What Trakkr actually found

Let's give the study its due. It's solid research.

Trakkr analyzed 337,000+ AI citations across major platforms. They compared sites that have an llms.txt file against sites that don't. The results were unambiguous:

  • Median citations: 3.0 for both groups. With or without llms.txt, the median citation count was identical.
  • p-value: 0.85. By convention, a p-value above 0.05 means the data show no statistically significant difference. 0.85 is about as decisive a null result as they come.
  • Only 6% of the top 50 most-cited sites have llms.txt. Forbes, Reuters, LinkedIn, Wikipedia — the sites AI models cite most — don't have the file.
  • The top 100 most-cited domains are dominated by brand authority, not technical optimization. News sites, government portals, Wikipedia. None of them need a file to tell AI what they are — the models already know.

The study's conclusion is reasonable: if your goal is to increase citation count, adding an llms.txt file won't move the needle. The evidence supports that claim.

But here's what the study doesn't explain: why are some of the most technically sophisticated companies on the internet bothering with a file that has "zero impact"?

Who has llms.txt — and what they're actually doing with it

The adopter list tells a story that citation counts miss entirely.

Anthropic — the company behind Claude — serves llms.txt and llms-full.txt at docs.anthropic.com. This is the company that builds one of the most prominent AI consumers of web content. They're not implementing llms.txt to get more citations. They're implementing it because they understand, better than anyone, how AI models consume documentation. They're making their own docs easier for their own models to parse.

Cloudflare has one of the most extensive llms.txt files in existence. Their developer documentation at developers.cloudflare.com/llms.txt covers Workers, R2, D1, Pages, and 20+ other products — thousands of lines of structured content. Cloudflare processes a meaningful share of global web traffic. When they adopt a standard, it's a signal about where the infrastructure layer is headed.

Vercel reported going from less than 1% to 10% of new signups coming through ChatGPT conversations in roughly six months. Their llms.txt isn't an experiment — it's part of a deliberate strategy to make their documentation the default answer when developers ask AI assistants about deployment, serverless functions, or Next.js hosting.

Stripe implemented llms.txt for their API documentation. When a developer asks ChatGPT "how do I set up Stripe payments in Python," Stripe wants the model to reference the correct, current API endpoints — not a three-year-old Stack Overflow answer.

Zapier structured their llms.txt around API endpoints and automation workflows — the exact content developers ask AI assistants about when building integrations.

Mintlify went even further. They don't just have llms.txt on their own site — they auto-generate it for every documentation site hosted on their platform. Thousands of developer documentation sites got llms.txt overnight. This is a platform-level bet that structured AI discoverability is infrastructure, not a marketing tactic.

Do you see the pattern? Every major adopter is a developer tools company. They're not optimizing for citation count. They're optimizing for developer experience inside AI coding assistants.

The resolution: both sides are measuring different things


Trakkr measured whether llms.txt causes more citations. It doesn't.

But the companies implementing llms.txt aren't chasing citation volume. They're solving a different problem: accuracy and context when AI models reference their products.

There's a crucial difference between:

  • "Did ChatGPT mention Stripe?" (citation count — what Trakkr measured)
  • "Did ChatGPT recommend the correct Stripe API endpoint for the developer's use case?" (citation quality — what Stripe cares about)

Citation count is a blunt instrument. It tells you whether your brand appeared in an AI response. It doesn't tell you whether the model understood your product, recommended the right page, or gave the user accurate information.

The companies implementing llms.txt already get cited. Stripe doesn't need a file to make ChatGPT mention Stripe — it's one of the most recognized brands in payments. What Stripe needs is for the model to understand the difference between the Payment Intents API and the legacy Charges API. That's what llms.txt solves.

Our own data supports this framing. We ran a 48-brand scan comparing sites with and without llms.txt. Brands with llms.txt averaged a GEO Score of 79; brands without averaged 55. That 24-point gap is on our AI-readiness metric, which scores content structure, schema markup, authority signals, and AI-specific optimization across 15 signals.

Does llms.txt cause higher GEO Scores? No. The file itself is worth maybe 2-3 points. The other 21 points come from the engineering culture that led to implementing llms.txt in the first place. Teams that care enough to create an llms.txt file also tend to have proper schema markup, clean heading hierarchies, FAQ sections with structured data, and robots.txt configured for AI crawlers.

llms.txt is a proxy signal, not a causal lever. It doesn't drive citations. It correlates with the kind of engineering discipline that makes your entire site more AI-accessible.

What this means for Indian brands

Here's where this gets practical. We've audited dozens of Indian brands through Crawl Radar and GEO Score scans. The question every founder asks: "Should I add llms.txt?"

The honest answer depends on what you build.

If you're a documentation-heavy SaaS company — a payments API, a developer platform, an infrastructure tool — llms.txt makes direct sense. Indian SaaS companies like Razorpay, Postman, and Zerodha have API documentation that developers query through AI assistants daily. When someone asks Claude "how do I integrate Razorpay UPI payments in Node.js," you want the model pulling from your current docs, not a two-year-old blog post. Implement llms.txt. Structure it around your API endpoints and integration guides. This is the Stripe playbook, and it applies directly.

If you're a D2C brand — apparel, beauty, consumer electronics — llms.txt alone won't move your citation count. Trakkr's data is clear on this. Mamaearth doesn't need an llms.txt file to get mentioned when someone asks Perplexity about "best vitamin C serums in India." Brand authority, reviews, media coverage, and product quality drive that.

But here's the thing: the process of creating an llms.txt file forces a conversation your team probably hasn't had. It makes you ask: What are the 10 most important pages on our site? Is our product content structured for extraction, or is it marketing fluff? Do we have FAQ sections that map to the questions customers actually ask AI? Is our robots.txt even allowing AI crawlers in?
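That last question is easy to check. As a rough sketch, a robots.txt that explicitly admits the major AI crawlers might look like the following. The user-agent strings (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) are real but change over time, so verify them against each crawler's own documentation before shipping:

```
# Allow the major AI crawlers (illustrative policy, not a recommendation)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```

Whether you allow or block these bots is a business decision; the point is that the decision should be deliberate, not the accident of a years-old robots.txt.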

We wrote a complete guide to implementing llms.txt with templates and our own implementation as a case study. The guide takes about 30 minutes to follow. But the value isn't in the file — it's in the audit that creating the file forces you to do.

From our 48-brand scan, the brands that scored highest on AI-readiness weren't the ones with the fanciest llms.txt files. They were the ones that had done the fundamentals: proper heading hierarchy, FAQ schema, author attribution on content, crawl access for AI bots. The llms.txt was just the visible artifact of deeper structural work.
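Crawl access is the easiest of those fundamentals to verify. A minimal sketch in Python, using only the standard library: `ai_crawler_allowed` is a hypothetical helper name, and the sample policy is illustrative, not anyone's real robots.txt.

```python
from urllib.robotparser import RobotFileParser

def ai_crawler_allowed(robots_txt: str, user_agent: str, path: str = "/") -> bool:
    """Given the text of a robots.txt file, return whether the named
    crawler user agent is allowed to fetch the given path."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, path)

# Illustrative policy: welcome GPTBot, block a hypothetical BadBot.
sample = """\
User-agent: GPTBot
Allow: /

User-agent: BadBot
Disallow: /
"""

print(ai_crawler_allowed(sample, "GPTBot"))  # True
print(ai_crawler_allowed(sample, "BadBot"))  # False
```

Note that an agent with no matching rule (and no `User-agent: *` block) is allowed by default, so a silent robots.txt is not the same as an explicit welcome.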

The bottom line

Trakkr is right: llms.txt doesn't cause citations. The file, by itself, is not going to make ChatGPT mention your brand more often.

Anthropic, Cloudflare, Stripe, and Vercel are also right: llms.txt matters — just not for the reason most people think. These companies are optimizing for AI comprehension, not AI mentions. They want models to understand their products accurately, not just name-drop them.

For most brands, the real question isn't "should I add llms.txt?" It's: "Is my site structured for AI to understand what I actually do?" The llms.txt file is a 30-minute project. The structural work it reveals — content depth, schema markup, heading hierarchy, crawl access — that's the work that moves your AI Visibility Score.

Start with a free GEO Score scan. See where you actually stand. Then decide whether llms.txt is your next move or your tenth.

FAQ

Does llms.txt improve AI citations?

Not directly. Trakkr's analysis of 337K+ citations found zero statistical correlation (p-value 0.85). However, in a 48-brand scan, brands with llms.txt averaged a GEO Score of 79 versus 55 for those without — a 24-point gap driven not by the file itself, but by the engineering culture that also invests in structured content, schema markup, and crawl accessibility.

What is llms.txt and who created the standard?

llms.txt is a Markdown-formatted file served at yourdomain.com/llms.txt that provides AI models with a structured summary of your site. It was proposed by Jeremy Howard, co-founder of fast.ai. See our complete implementation guide for templates and examples.
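The proposed format is plain Markdown: an H1 with the site or project name, a blockquote summary, then H2 sections listing key links with short notes (an "Optional" section holds secondary links that models can skip). A minimal sketch for a hypothetical payments API, with all names and URLs illustrative:

```
# Acme Payments

> Acme is a payments API for accepting cards and bank transfers.
> These docs cover authentication, the REST API, and client SDKs.

## Docs

- [Quickstart](https://docs.acme.example/quickstart.md): Accept your first payment in five minutes
- [API Reference](https://docs.acme.example/api.md): Endpoints, parameters, and error codes

## Optional

- [Changelog](https://docs.acme.example/changelog.md): Release notes
```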

Which major companies have implemented llms.txt?

Anthropic (docs.anthropic.com), Cloudflare (developers.cloudflare.com), Stripe, Vercel, Supabase, Cursor, Windsurf, and Zapier. Mintlify auto-generates llms.txt for all documentation sites on its platform, adding thousands of sites overnight.

Should Indian brands implement llms.txt?

It depends on your business model. Documentation-heavy SaaS companies (APIs, developer tools) benefit directly — AI coding assistants use llms.txt to understand your product. D2C brands benefit indirectly: the process of creating the file forces a structural audit of your content, heading hierarchy, schema markup, and crawl configuration. The 30-minute investment pays off not because of the file itself, but because of what building it makes you fix.

What is the difference between AI citations and AI-readiness?

AI citations measure how often AI platforms mention or link to your brand. AI-readiness measures whether your website is structured for AI to discover, parse, and cite. High citations can come from pure brand authority (Forbes, Reuters) without a well-structured website. And a well-structured website doesn't guarantee citations without brand authority. The two metrics complement each other — check both with the Cited Index and GEO Score.


Salman Shaikh

Former SEO nerd. Recovering big-tech PM. Currently losing sleep over whether your brand exists in an AI answer — and building tools to find out. Cited is the company. The AI Shelf is the newsletter. The obsession is real.

Free Report

See how your brand appears in AI answers — free

We'll run 20 prompts across 3 AI platforms and send your AI Visibility Report within 24 hours.

Get Your Free Report →