
Robots.txt Generator

Create a robots.txt file to control how search engines and AI bots crawl your website. Use presets or build custom rules.

User-agent: *
Allow: /

Want to find which pages need these fixes?

GSCdaddy scans your Google Search Console data and tells you exactly what to optimize — with AI-powered action plans for every page.

Try GSCdaddy Free

How to use this tool

  1. Choose a preset or add custom rules. Start with a preset like 'Standard Blog' or 'Block AI Bots', then customize.
  2. Add your sitemap URL. Include your XML sitemap so search engines can find all your pages.
  3. Review the generated robots.txt. Check the output to make sure the rules match your intent.
  4. Copy or download the file. Save it as robots.txt and upload it to the root of your website.
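Following the steps above, a typical generated file might look like this (the disallowed path and sitemap URL are placeholders for your own):

```text
User-agent: *
Allow: /
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```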

Why use a robots.txt generator?

Frequently asked questions

What is a robots.txt file?

A robots.txt file is a plain text file placed at the root of your website (e.g. yoursite.com/robots.txt) that tells search engine crawlers which pages or sections they are allowed or not allowed to access. It follows the Robots Exclusion Protocol standard.
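As a minimal illustration (the path is hypothetical):

```text
# Served at https://yoursite.com/robots.txt
User-agent: *
Disallow: /private/
```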

Does robots.txt block pages from appearing in Google?

No. robots.txt prevents crawling, not indexing. If other sites link to a page you've disallowed in robots.txt, Google may still index it based on external signals. To prevent indexing, use a 'noindex' meta robots tag instead.
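For example, a page can opt out of indexing with a robots meta tag in its HTML head, or with the equivalent HTTP response header:

```html
<!-- In the page's <head>: blocks indexing even if the page is crawled -->
<meta name="robots" content="noindex">
<!-- Or, sent as an HTTP response header: X-Robots-Tag: noindex -->
```

Note the page must remain crawlable for Google to see the noindex directive; disallowing it in robots.txt would hide the tag from the crawler.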

Should I block AI bots in my robots.txt?

It depends on whether you want AI companies to use your content for training. Bots like GPTBot (OpenAI), Google-Extended (Gemini training), and CCBot (Common Crawl) can be blocked if you want to opt out of AI training while still allowing regular search crawlers.
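To opt out of these crawlers while leaving normal search bots untouched, a robots.txt can include blocks like:

```text
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```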

Where do I put my robots.txt file?

The robots.txt file must be placed in the root directory of your website so it's accessible at yoursite.com/robots.txt. In Next.js, you can create a robots.ts file in your app directory that generates it automatically.
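As a sketch of the Next.js approach: an `app/robots.ts` file exports a default function and Next.js serves the result at /robots.txt. The sitemap URL and disallowed path below are placeholders; in a real project you would also annotate the return value as `MetadataRoute.Robots` from 'next' (the type import is omitted here so the sketch stands alone).

```typescript
// app/robots.ts - Next.js App Router generates /robots.txt from this file.
export default function robots() {
  return {
    rules: [
      // Allow all crawlers everywhere except a placeholder admin path.
      { userAgent: "*", allow: "/", disallow: "/admin/" },
    ],
    sitemap: "https://example.com/sitemap.xml", // placeholder URL
  };
}
```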

What is crawl-delay and should I use it?

Crawl-delay tells bots to wait a specified number of seconds between requests. Google ignores this directive (use Google Search Console's crawl rate setting instead), but Bing and other bots respect it. Use it if your server is struggling with crawl load.
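For instance, to ask Bing's crawler to wait ten seconds between requests (the delay value is an example; tune it to your server's capacity):

```text
User-agent: bingbot
Crawl-delay: 10
```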
