Fix: robots.txt Accidentally Blocking All Crawlers
A robots.txt file containing Disallow: / under User-agent: * blocks every search engine crawler from fetching any page on the site. This is one of the most common causes of a site disappearing from Google search results overnight.
The Problem
The Disallow: / directive under User-agent: * tells every crawler that no part of the site may be crawled. This is correct behaviour for a staging environment but catastrophic on a production site. It is usually introduced accidentally: a developer deploys a staging robots.txt to production without updating it, or a CMS ships with SEO-unfriendly defaults.
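For reference, this is the entire misconfiguration — a typical staging robots.txt that must never reach production:

```
# Staging configuration — blocks ALL crawlers from the ENTIRE site
User-agent: *
Disallow: /
```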
The Fix
# Allow all crawlers to index the full site
User-agent: *
Allow: /

# Explicitly allow AI crawlers (optional but recommended)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Reference your sitemap
Sitemap: https://yourdomain.com/sitemap.xml
Replace Disallow: / with Allow: /, or simply remove the path so the line reads Disallow: with no value — an empty Disallow: is equivalent to Allow: /. Note: changes to robots.txt take effect only when Googlebot next fetches the file, typically within 24–48 hours. You can request a recrawl of the file from the robots.txt report in Google Search Console.
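Before deploying, you can verify a candidate robots.txt programmatically. A minimal sketch using Python's standard-library urllib.robotparser (the example URL and file bodies here are illustrative, not fetched from a real site):

```python
from urllib.robotparser import RobotFileParser

def is_site_blocked(robots_txt: str,
                    url: str = "https://example.com/",
                    agent: str = "*") -> bool:
    """Return True if this robots.txt body denies `agent` access to `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return not parser.can_fetch(agent, url)

broken = "User-agent: *\nDisallow: /\n"   # blocks everything
fixed = "User-agent: *\nDisallow:\n"      # empty Disallow: allows everything

print(is_site_blocked(broken))  # True — all crawlers are shut out
print(is_site_blocked(fixed))   # False — site is crawlable again
```

Running a check like this in CI before each deploy catches the staging-file-in-production mistake before Googlebot ever sees it.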
Validate your robots.txt live — fetch any URL and get a corrected file in one click.
Open robots.txt Validator →