Fix: robots.txt Accidentally Blocking All Crawlers

A robots.txt file containing Disallow: / under User-agent: * tells every compliant crawler not to crawl any page on the site, so new and updated content never reaches the index. This is one of the most common reasons a site disappears from Google search results overnight.

The Problem

The Disallow: / directive tells every crawler that no part of the site may be crawled. This is correct behaviour for a staging environment but catastrophic in production. It is usually introduced by accident: a staging robots.txt is deployed to production unchanged, or a CMS ships with SEO-unfriendly defaults.
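For reference, this is the staging file that causes the outage when it reaches production:

PROBLEMATIC robots.txt
# Block all crawlers (staging only; never deploy to production)
User-agent: *
Disallow: /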

The Fix

CORRECTED robots.txt
# Allow all crawlers to index the full site
User-agent: *
Allow: /

# Explicitly allow AI crawlers (optional but recommended)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Reference your sitemap
Sitemap: https://yourdomain.com/sitemap.xml

Replace Disallow: / with Allow: /, or simply remove the Disallow line entirely: an empty Disallow: is equivalent to Allow: /. Note that the change takes effect only when Googlebot next fetches the file, typically within 24–48 hours. Use the robots.txt report in Google Search Console to confirm which version of the file Google sees and to request a recrawl.
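You can confirm the corrected rules behave as intended before deploying by parsing them locally. A minimal sketch using Python's standard urllib.robotparser; the yourdomain.com URL is the placeholder from the file above:

```python
from urllib.robotparser import RobotFileParser

# The corrected rules from above, parsed locally instead of fetched over HTTP.
corrected = """\
User-agent: *
Allow: /

User-agent: GPTBot
Allow: /
"""

parser = RobotFileParser()
parser.parse(corrected.splitlines())

# A generic crawler and GPTBot may both fetch the homepage.
print(parser.can_fetch("Googlebot", "https://yourdomain.com/"))  # True
print(parser.can_fetch("GPTBot", "https://yourdomain.com/"))     # True

# The broken staging file, by contrast, blocks everything.
broken = RobotFileParser()
broken.parse(["User-agent: *", "Disallow: /"])
print(broken.can_fetch("Googlebot", "https://yourdomain.com/"))  # False
```

The same check works against a live site by calling set_url() and read() on the parser instead of parse().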

Validate your robots.txt live — fetch any URL and get a corrected file in one click.

Open robots.txt Validator →

Frequently Asked Questions

How long does it take for Google to re-index after fixing robots.txt?
Google typically re-crawls robots.txt within 24–48 hours of a change. For faster recrawling, use Google Search Console's URL Inspection tool on the homepage and request indexing. Previously indexed pages should reappear in search results within 1–2 weeks.
Why does my site disappear from Google after a deployment?
The most common cause is a staging robots.txt (with Disallow: /) being deployed to production. CMS platforms like WordPress also sometimes reset robots.txt settings after updates. Check your robots.txt immediately after any deployment.
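That post-deployment check can be automated. A hedged sketch, again using Python's standard urllib.robotparser; in a real pipeline you would fetch your production /robots.txt and fail the deploy if it blocks the site (the function name and probe URL are illustrative):

```python
from urllib.robotparser import RobotFileParser

def site_is_blocked(robots_txt: str, probe_url: str = "https://yourdomain.com/") -> bool:
    """True if these robots.txt rules stop a generic crawler fetching probe_url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return not parser.can_fetch("AnyBot", probe_url)

# A staging file deployed by mistake trips the check...
assert site_is_blocked("User-agent: *\nDisallow: /")
# ...while the corrected file passes.
assert not site_is_blocked("User-agent: *\nAllow: /")
```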
Does Disallow: / affect all search engines?
Yes. Disallow: / under User-agent: * applies to all crawlers that respect the robots exclusion protocol — Google, Bing, DuckDuckGo, Yahoo, and most others. AI crawlers like GPTBot also respect it.

Related Guides