Free Tool

AI Robots.txt Generator

Generate a perfectly configured robots.txt file that allows AI search engines like ChatGPT, Perplexity, Claude, and Gemini to discover and cite your content. Free, instant, no signup.


Generated robots.txt

User-agent: *
Allow: /

# AI Search Engine Crawlers - Allowed
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Applebot-Extended
Allow: /

User-agent: Meta-ExternalAgent
Allow: /

User-agent: cohere-ai
Allow: /

User-agent: anthropic-ai
Allow: /

# AI Crawlers - Blocked
User-agent: Bytespider
Disallow: /

User-agent: CCBot
Disallow: /

# GEO guidance files
User-agent: *
Allow: /llms.txt
Allow: /llms-full.txt
Allow: /.well-known/ai-plugin.json

How to use

  1. Configure the AI crawlers you want to allow or block
  2. Add your sitemap URL (recommended)
  3. Copy or download the generated file
  4. Upload it to your website root as /robots.txt
  5. Run a free GEO scan to verify your setup
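Before uploading, you can sanity-check the generated file locally. This is a minimal sketch using Python's standard-library urllib.robotparser; the inline robots_txt string is an abbreviated version of the generated example above, and the page URLs are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Abbreviated version of the generated robots.txt above
robots_txt = """\
User-agent: *
Allow: /

User-agent: GPTBot
Allow: /

User-agent: Bytespider
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# An allowed AI search crawler can fetch any page
print(parser.can_fetch("GPTBot", "https://yourdomain.com/blog/post"))      # True
# A blocked training crawler cannot
print(parser.can_fetch("Bytespider", "https://yourdomain.com/blog/post"))  # False
```

Crawlers with no group of their own fall back to the `User-agent: *` rules, so unlisted bots remain allowed here.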

Why Your Robots.txt Matters for AI Search

AI Engines Need Permission

The major AI crawlers — GPTBot, ClaudeBot, PerplexityBot, and others — honor robots.txt strictly. If your existing rules block them, they won't index your content, making you invisible in AI-generated answers.

Control Your AI Exposure

Not all AI crawlers are equal. Some are for search (you want these), others are for training data (you might not). This tool lets you choose exactly which AI systems access your content.

40% of Searches Use AI

By some industry estimates, AI-generated answers now appear in around 40% of searches. If ChatGPT, Perplexity, or Gemini can't crawl your site, you're missing out on a rapidly growing traffic source.

Complete AI Crawler Reference

The major AI search engine crawlers and their user-agent strings. Use this reference to make sure your robots.txt covers the AI bots that matter to you.

| User-Agent | Company | Purpose | Recommendation |
|---|---|---|---|
| GPTBot | OpenAI | ChatGPT search & training | Allow |
| ChatGPT-User | OpenAI | ChatGPT browsing | Allow |
| OAI-SearchBot | OpenAI | SearchGPT results | Allow |
| ClaudeBot | Anthropic | Claude AI search | Allow |
| PerplexityBot | Perplexity | Perplexity AI answers | Allow |
| Google-Extended | Google | Gemini AI training | Allow |
| Applebot-Extended | Apple | Apple Intelligence | Allow |
| Meta-ExternalAgent | Meta | Meta AI features | Allow |
| cohere-ai | Cohere | Cohere AI models | Allow |
| Bytespider | ByteDance | TikTok AI features | Optional |
| CCBot | Common Crawl | Open dataset training | Optional |
| anthropic-ai | Anthropic | Claude training data | Allow |
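The reference table maps directly to code. As an illustrative sketch (the function name and sitemap URL are hypothetical; the crawler names come from the table above), a generator like this one can assemble the file from a per-crawler allow/block decision:

```python
# Per-crawler decision: True = Allow, False = Disallow (names from the table above)
AI_CRAWLERS = {
    "GPTBot": True, "ChatGPT-User": True, "OAI-SearchBot": True,
    "ClaudeBot": True, "PerplexityBot": True, "Google-Extended": True,
    "Applebot-Extended": True, "Meta-ExternalAgent": True,
    "cohere-ai": True, "anthropic-ai": True,
    "Bytespider": False, "CCBot": False,
}

def build_robots_txt(crawlers, sitemap_url=None):
    """Assemble a robots.txt string: an open default group, then one group per AI crawler."""
    lines = ["User-agent: *", "Allow: /", ""]
    for agent, allowed in crawlers.items():
        lines.append(f"User-agent: {agent}")
        lines.append("Allow: /" if allowed else "Disallow: /")
        lines.append("")
    if sitemap_url:
        lines.append(f"Sitemap: {sitemap_url}")
    return "\n".join(lines)

print(build_robots_txt(AI_CRAWLERS, "https://yourdomain.com/sitemap.xml"))
```

Each crawler gets its own `User-agent` group, since per-agent groups override the `*` default for that bot.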

Frequently Asked Questions

What is a robots.txt file?

A robots.txt file is a text file placed at the root of your website that tells web crawlers and AI bots which pages they can and cannot access. It is essential for controlling how search engines and AI models index your content.
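For illustration, a minimal robots.txt might look like this (the blocked path is hypothetical):

```
User-agent: *
Disallow: /admin/
Allow: /
```

Rules are grouped under a `User-agent` line, and lines starting with `#` are comments.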

Why do I need a special robots.txt for AI crawlers?

AI search engines like ChatGPT, Perplexity, and Claude crawl with different user-agents than traditional search engines. If your robots.txt blocks crawlers by default, or only addresses bots like Googlebot and Bingbot, these AI crawlers may be unable to index your site for AI-generated answers.

Which AI crawlers should I allow?

For maximum AI search visibility, you should allow GPTBot (ChatGPT), ChatGPT-User, OAI-SearchBot (OpenAI), ClaudeBot (Anthropic), PerplexityBot, Google-Extended (Gemini), Applebot-Extended, Meta-ExternalAgent, and cohere-ai.

Will allowing AI crawlers affect my traditional SEO?

No. AI crawler directives are separate from traditional search engine crawlers like Googlebot. You can configure them independently without affecting your Google, Bing, or other search engine rankings.

Where do I put the robots.txt file?

The robots.txt file must be placed at the root of your domain, accessible at https://yourdomain.com/robots.txt. Most web hosting platforms and CMS systems have a specific location for this file.

Check If AI Crawlers Can Access Your Site

After setting up your robots.txt, run a free GEO scan to verify AI crawlers can reach your content.

Run Free GEO Scan