Should You Use llms.txt? An Honest Pros and Cons Analysis
If you've spent any time in the GEO conversation in 2026, you've heard about llms.txt. It's the proposed file format for telling AI systems which content on your site is most authoritative. The pitch is appealing: write a small markdown file, place it at /llms.txt, and AI engines use it as a guide to your best content.
The reality is messier. The standard exists. The spec is clean. A few notable brands have implemented it. And the major AI crawlers, the ones that actually drive citations, aren't reading it yet. Here's an honest pros-and-cons breakdown based on what the data says, not what the hype says.
The case for llms.txt
The argument for implementing llms.txt rests on three reasonable points.
1. The spec is clean and the cost is low. Generating an llms.txt file is a 30-minute job for most sites, and the file itself is a few kilobytes of markdown. There's almost no implementation cost beyond the time it takes to think through which pages are your most authoritative ones. If the standard takes off, you're already prepared. If it doesn't, you've spent 30 minutes.
2. Serious players are publishing them. Anthropic and Zapier have implemented llms.txt files on their own properties. That's a non-trivial signal. Anthropic's adoption in particular suggests that at least one major AI company is open to the format, even if its own ClaudeBot isn't actively crawling other sites' files yet.
3. Flattened content is genuinely useful for analysis. One Search Engine Land piece on the proposed standard makes a point that's easy to overlook: having "the entirety of your website content in a file can allow for different types of analysis that were not as easy to render previously." Even if AI crawlers ignore the file, having a curated, machine-readable summary of your most authoritative pages is a useful artifact. Internal teams, partners, and analytics tools can all use it.
The case against llms.txt
The argument against is more direct, and it comes from people doing the actual measurement work.
1. The major AI crawlers aren't using it. Semrush ran a controlled test on Search Engine Land's site and reported zero visits from Google-Extended, GPTBot, PerplexityBot, or ClaudeBot to the llms.txt file over a three-month window. Google's John Mueller confirmed it directly: "FWIW no AI system currently uses llms.txt." When the engineer leading Google's search developer relations says the file isn't being used, that's as clear as it gets.
2. There's no correlation with visibility. Semrush's broader analysis found no correlation between llms.txt implementation and improved AI search visibility. Sites with the file don't outperform sites without it on any measurable AI citation metric. Whatever benefit the file might provide is, at this point, undetectable in real-world data.
3. Adoption is currently a tiny fraction of the web. As of mid-2025, only about 951 domains had published an llms.txt file. That's a rounding error on the global web. Without critical mass on the publisher side, AI companies have little incentive to support the format; without crawler support, publishers have little incentive to publish. Classic chicken-and-egg.
4. Industry skeptics have a real point. Some marketers argue the entire premise, that LLMs need a separate standard from search engines, is wrong. Brett Tabke, CEO of Pubcon, has argued that "the dividing line between a search engine and an LLM is barely arguable anymore." If you accept that framing, llms.txt becomes a redundant solution to a problem that existing standards (sitemap.xml, robots.txt, schema.org) already address.
What the most cautious analysis recommends
Semrush's bottom line is unusually direct for an SEO publication: "Using llms.txt is probably not worth your time right now, unless you're just curious and want to experiment." Wait for actual adoption signals before dedicating resources to creating and maintaining the file.
That's reasonable. It's also probably right for most teams in 2026. If you have limited GEO budget, and most teams do, there are higher-leverage investments available instead:
- Improve content structure on your top 50 pages with answer-first writing
- Add or fix Organization, Person, Article, and FAQPage schema across your site
- Refresh stale content with current data and updated dates
- Build out missing comparison and definition pages in your category
- Earn citations from third-party "best of" lists and industry publications
Each of these has measurable, near-term impact on AI citations. llms.txt currently has none.
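As one concrete example of the schema work listed above, FAQPage markup is a small JSON-LD block you can generate programmatically. This is a minimal sketch; the `faq_jsonld` helper and the sample question are illustrative, not part of any particular CMS:

```python
import json

def faq_jsonld(qa_pairs):
    """Build a schema.org FAQPage JSON-LD block from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }, indent=2)

snippet = faq_jsonld([
    ("What is llms.txt?",
     "A proposed markdown file listing a site's most authoritative pages for AI systems."),
])
print(snippet)  # embed in a <script type="application/ld+json"> tag
```

Generating the block from your actual FAQ content, rather than hand-writing it per page, keeps the markup in sync as answers change.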
The asymmetric upside argument
That said, there's a counter-argument worth taking seriously. The case for implementing llms.txt isn't "it will boost your AI visibility today." It's "the cost is so low and the potential upside is so asymmetric that it's worth doing as a hedge."
30 minutes of work. A few kilobytes. Zero ongoing maintenance if you set it up right. In exchange, you get:
- Coverage if Anthropic, OpenAI, or Google quietly turns on llms.txt support
- Early-mover positioning in any agentic AI tool that does check the file
- A useful internal artifact that summarizes your most authoritative content
For a team that's already done the higher-leverage GEO work and is looking for low-cost incremental hedges, llms.txt is a defensible bet. For a team that hasn't done the basics yet, it's a distraction.
How to implement it cheaply if you do
If you decide to ship an llms.txt file, do it the cheapest possible way:
- Generate it from your existing sitemap or content index. Don't write it by hand. Pull your top 30-50 most important URLs from your CMS, format them into the markdown structure, and serve the result as a static file at /llms.txt.
- Follow the spec's four structural parts: a required H1 with your site name, an optional blockquote summary, optional body text, and one or more H2-delimited link lists.
- Use the "Optional" section for content that's nice-to-have but not essential, so AI consumers can skip it when context is tight.
- Set up automated regeneration. If your content changes, the llms.txt should change with it. Don't let it go stale.
- Don't oversell it internally. It isn't a major win, and overselling it will erode trust in the rest of your GEO recommendations.
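The generation step above can be sketched in a few lines. This assumes you've already exported a list of (title, URL, description) tuples from your CMS; the function name and sample pages are hypothetical:

```python
def build_llms_txt(site_name, summary, sections):
    """Render the llms.txt markdown structure: an H1 with the site name,
    a blockquote summary, then one H2-delimited link list per section."""
    lines = [f"# {site_name}", "", f"> {summary}", ""]
    for heading, pages in sections.items():
        lines.append(f"## {heading}")
        lines.append("")
        for title, url, note in pages:
            lines.append(f"- [{title}]({url}): {note}")
        lines.append("")
    return "\n".join(lines)

# Hypothetical pages; in practice, pull your top 30-50 URLs from your CMS or sitemap.
sections = {
    "Docs": [
        ("Getting started", "https://example.com/docs/start", "Setup guide"),
    ],
    "Optional": [
        ("Changelog", "https://example.com/changelog", "Release history"),
    ],
}
content = build_llms_txt("Example", "Example's most authoritative pages.", sections)
print(content)  # serve this as a static file at /llms.txt
```

Wire this into your build or publish pipeline and the regeneration step takes care of itself.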
Watch the adoption signal, not the hype
The single most useful thing you can do regarding llms.txt isn't building one or refusing to. It's monitoring whether the major AI crawlers start fetching the file in your server logs. If you see GPTBot, ClaudeBot, or PerplexityBot hitting /llms.txt on your site, that's the signal the standard has crossed from "proposed" to "actually used." When that happens, prioritize the file. Until then, deprioritize it.
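Checking your logs for that signal is a one-function job. This sketch assumes combined-format access log lines; the sample entries are fabricated for illustration:

```python
AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended")

def llms_txt_hits(log_lines):
    """Count /llms.txt requests per known AI crawler user agent."""
    hits = {bot: 0 for bot in AI_BOTS}
    for line in log_lines:
        if "/llms.txt" not in line:
            continue
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1
    return hits

sample = [
    '1.2.3.4 - - [10/Jan/2026] "GET /llms.txt HTTP/1.1" 200 512 "-" "GPTBot/1.0"',
    '5.6.7.8 - - [10/Jan/2026] "GET /about HTTP/1.1" 200 900 "-" "ClaudeBot/1.0"',
]
print(llms_txt_hits(sample))
# Only the GPTBot line counts: it's the one request that actually hit /llms.txt.
```

Run it weekly over your access logs; any nonzero count from a major crawler is the adoption signal this article is telling you to wait for.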
The infrastructure is in place. The economics aren't. The honest answer to "should you use llms.txt?" in 2026: probably not, but check back in 6-12 months, because this could change quickly if any major AI company decides to support it.
Don't let llms.txt crowd out work that actually moves citations today. If you have 30 minutes and a curiosity itch, ship the file. Otherwise, file it under "promising but not yet load-bearing" and focus your GEO budget on the things that already work.
Not sure what llms.txt is yet? Start here: What Is llms.txt? The New Standard for AI Crawlers Explained.