Robots.txt Generator

Generate a valid robots.txt file with crawler-specific rules, common templates, and a real-time syntax preview.

Warnings

  • No Sitemap directive — add your sitemap URL for faster indexing.

robots.txt Preview

    User-agent: *
    Allow: /

File Information: 1 rule · 2 lines · 22 B

Understanding Robots.txt

The robots.txt file is the first thing search engine crawlers look for when they visit your website. It tells them which areas of your site they can access and which they should avoid.

In 2026, robots.txt has become even more critical with the rise of AI crawlers like GPTBot (OpenAI), Google-Extended (Gemini), and CCBot (Common Crawl). Our generator includes templates to block these bots while keeping your site accessible to search engines.
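
For illustration, the rules such a template produces look roughly like this (the exact output and the list of bots may differ):

    # Block common AI training crawlers
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # Keep the site open to everything else, including search crawlers
    User-agent: *
    Allow: /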

Features That Set Us Apart

Per-Bot Configuration

Create separate rules for Googlebot, GPTBot, Bingbot, and more. Most tools only offer a single rule set.
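
As a sketch, a per-bot configuration could look like this (paths and delay values are placeholders, not recommendations):

    # Googlebot: crawl everything except the admin area
    User-agent: Googlebot
    Disallow: /admin/

    # Bingbot: slow down requests to ease server load
    User-agent: Bingbot
    Crawl-delay: 5

    # GPTBot: no access
    User-agent: GPTBot
    Disallow: /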

6 Ready Templates

One-click templates for WordPress, E-Commerce, SPA, Block AI, and more — no manual typing needed.

Smart Warnings

Real-time validation catches mistakes like blocking CSS/JS files or missing sitemap directives before you deploy.
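
For example, a rule like the following would trigger a warning, because it can hide stylesheets and scripts that Google needs to render your pages (the directory name is only illustrative):

    # Flagged: this also blocks theme CSS and JS files
    User-agent: *
    Disallow: /wp-content/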

Robots.txt Syntax Guide

Directive   | Example                                     | Meaning
User-agent  | User-agent: *                               | Applies to all crawlers
Disallow    | Disallow: /admin/                           | Block access to /admin/
Allow       | Allow: /public/                             | Explicitly allow access
Crawl-delay | Crawl-delay: 10                             | Wait 10s between requests
Sitemap     | Sitemap: https://yourdomain.com/sitemap.xml | Points to your XML sitemap
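
Put together, a small robots.txt using these directives might look like this (domain and paths are placeholders):

    User-agent: *
    Disallow: /admin/
    Allow: /public/
    Crawl-delay: 10

    Sitemap: https://yourdomain.com/sitemap.xml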

Frequently Asked Questions

What is a robots.txt file?
It's a plain text file at your site's root that instructs search engine crawlers which pages they can and cannot access, following the Robots Exclusion Protocol.
Where should I put my robots.txt?
Always at the root of your domain: https://yourdomain.com/robots.txt. It won't work in subdirectories.
Does robots.txt prevent indexing?
No. It prevents crawling, not indexing. Google may still index a blocked URL if other sites link to it. To keep a page out of the index, use a 'noindex' meta tag and leave the page crawlable so the tag can be seen.
How do I block AI crawlers?
Add User-agent rules for GPTBot, Google-Extended, and CCBot with 'Disallow: /'. Our 'Block AI Crawlers' template does this in one click.
Should I include a Sitemap directive?
Yes! It helps search engines discover your content. Add 'Sitemap: https://yourdomain.com/sitemap.xml' to the end of your file.
What does Crawl-delay do?
It tells bots to wait N seconds between requests to reduce server load. Note: Googlebot ignores Crawl-delay and manages its crawl rate automatically.