2026-03-31 · robots.txt · SEO · Google · Crawling

Why Google Ignores Crawl-Delay in robots.txt (And What to Use Instead)

You added Crawl-delay: 10 to your robots.txt. Googlebot is still hammering your server. This is not a bug — Google has never supported Crawl-delay and has no plans to. Here's what actually works.

The Crawl-delay directive has been around for decades, but it has never been part of an official robots.txt standard: it appears neither in the original 1994 protocol nor in RFC 9309, which formalized robots.txt in 2022. It is a de facto extension. Bing respects it. DuckDuckGo respects it. Yandex respected it for years before dropping support in 2018. Google has never supported it, and has been public about this since at least 2008.

The reason is philosophical as much as technical. Google's position is that Crawl-delay is a blunt instrument — it applies the same delay to every resource on your site regardless of how expensive each one is to serve. A 10-second delay on a lightweight HTML page is wasteful. The same delay on a page that triggers a heavy database query might not be enough. Google prefers to manage crawl rate based on actual server response times rather than a static hint.

What Google uses instead

Googlebot manages its crawl rate through two mechanisms:

Automatic throttling based on response times. Googlebot watches how long your server takes to respond. If responses are slow, it backs off. If they're fast, it crawls more aggressively. This is dynamic and continuous — it adjusts throughout a crawl session, not just at the start.

Server-side back-pressure. If your server starts answering with HTTP 500, 503, or 429, Googlebot treats that as a signal to slow down and reduces its crawl rate. This is now the only supported way to explicitly tell Google to crawl more slowly: the Search Console crawl rate limiter used to be that control, but Google deprecated it in January 2024, arguing that automatic throttling had made it redundant.

# If you need Googlebot to back off:
# - Temporarily return 503 or 429 from the overloaded paths
# - For persistent problems, file the "Report a problem with how
#   Googlebot crawls your site" form in the Search Console Help Center

Note: Treat error responses as a short-term brake, not a permanent setting. If Googlebot keeps seeing 5xx or 429 responses for more than a day or two, it may start dropping the affected URLs from the index.
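Google's documented brake, returning temporary 500/503/429 responses, can be wired up at the web server. A minimal nginx sketch, assuming nginx sits in front of the app; the user-agent map, the example.com name, and the /heavy/ path are illustrative assumptions, not drop-in config:

```nginx
# Temporary brake: answer Googlebot on expensive paths with 503 plus
# Retry-After. Remove this once load recovers; sustained 503s can get
# URLs dropped from the index.
map $http_user_agent $slow_crawler {
    default       0;
    "~*Googlebot" 1;
}

server {
    listen      80;
    server_name example.com;      # placeholder domain

    location /heavy/ {            # illustrative expensive path
        if ($slow_crawler) {
            add_header Retry-After 3600 always;
            return 503;
        }
        # ... normal handling ...
    }
}
```

Scoping the 503 to the expensive path keeps the rest of the site crawlable while the hot spot cools down.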

When does crawl rate actually matter?

If Googlebot is causing server load issues, temporary 503/429 responses are your lever. But before you reach for it, check whether Googlebot is actually the problem.

# Check your access logs for Googlebot activity:
grep -i googlebot /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head

# Check the rate (requests per minute):
grep -i googlebot /var/log/nginx/access.log | awk '{print $4}' | cut -d: -f2,3 | sort | uniq -c

Googlebot is usually not the cause of server load issues. More commonly it's your own cron jobs, a slow database query, or a traffic spike from actual users. Check the logs before blaming the crawler.
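One caveat before trusting those log counts: plenty of scrapers spoof the Googlebot user agent. Google's documented verification is a reverse DNS lookup on the IP followed by a forward lookup, and the hostname must end in googlebot.com or google.com. A shell sketch of the suffix check; the crawl-66-249-66-1 hostname is just an example value:

```shell
# Verify a claimed Googlebot IP in two steps:
#   1. host <ip>        -> reverse DNS, e.g. crawl-66-249-66-1.googlebot.com
#   2. host <hostname>  -> forward lookup must return the original IP
# This helper checks the hostname suffix from step 1.
is_google_host() {
  case "$1" in
    *.googlebot.com|*.google.com) return 0 ;;
    *) return 1 ;;
  esac
}

is_google_host "crawl-66-249-66-1.googlebot.com" && echo "suffix ok"
is_google_host "crawler.example.net" || echo "spoofed"
```

The forward-confirm step matters: reverse DNS alone can be faked by whoever controls the IP's PTR record.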

What crawl-delay is still useful for

Even though Google ignores it, Crawl-delay is worth keeping if you have other crawlers visiting your site. Bing, DuckDuckGo, and many smaller crawlers do respect it. If you're running a low-resource server and want to limit any crawler that's not Googlebot, a reasonable Crawl-delay value helps.

# A robots.txt that limits non-Google crawlers:
User-agent: *
Crawl-delay: 2

# Googlebot ignores the above, but you can address it directly:
User-agent: Googlebot
# No crawl-delay here; Googlebot would ignore it anyway
Allow: /

User-agent: Bingbot
Crawl-delay: 3

A Crawl-delay of 1-3 seconds is a reasonable default for non-Googlebot crawlers. Values over 10 can negatively affect indexing speed for crawlers that do respect it.
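To sanity-check which groups in a robots.txt actually declare a Crawl-delay, a short awk pass over the file is enough. A sketch using a local copy under /tmp with the same contents as the example above:

```shell
# Write a local test copy of the robots.txt example.
cat > /tmp/robots.txt <<'EOF'
User-agent: *
Crawl-delay: 2

User-agent: Googlebot
Allow: /

User-agent: Bingbot
Crawl-delay: 3
EOF

# Print each user-agent together with the Crawl-delay it declares.
awk '/^[Uu]ser-agent:/ {ua=$2}
     /^[Cc]rawl-delay:/ {print ua, "->", $2}' /tmp/robots.txt
```

This prints `* -> 2` and `Bingbot -> 3`; the Googlebot group shows nothing, which is the point: there is no delay there for Google to read anyway.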

Slowing Googlebot down, step by step

If you genuinely need to slow Googlebot down:

  1. Confirm in your access logs that Googlebot is the actual source of the load
  2. Check the Crawl Stats report (Search Console → Settings → Crawl stats) for spikes
  3. If the load is acute, temporarily return 503 or 429 from the affected paths
  4. Remove the error responses once load recovers: sustained errors can get URLs dropped from the index
  5. For persistent problems, file the "Report a problem with how Googlebot crawls your site" form

The effect is gradual in both directions. Googlebot backs off once it starts seeing a significant share of errors, and ramps back up over the following days once they stop. Monitor your server load and adjust.
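Because temporary 503/429 responses are the documented back-off signal, it is worth confirming in your logs that Googlebot is actually receiving them. A shell sketch; the log lines are fabricated samples, and in nginx's default combined format the status code is field 9:

```shell
# Fabricated sample log in nginx "combined" format.
cat > /tmp/access.log <<'EOF'
66.249.66.1 - - [31/Mar/2026:14:05:06 +0000] "GET /a HTTP/1.1" 503 162 "-" "Googlebot/2.1"
66.249.66.1 - - [31/Mar/2026:14:05:16 +0000] "GET /b HTTP/1.1" 503 162 "-" "Googlebot/2.1"
203.0.113.9 - - [31/Mar/2026:14:05:20 +0000] "GET /c HTTP/1.1" 200 512 "-" "Mozilla/5.0"
EOF

# Status-code breakdown for Googlebot requests (field 9 is the status):
grep -i googlebot /tmp/access.log | awk '{print $9}' | sort | uniq -c
```

Here that prints a count of 2 for status 503. On a real server, point the pipeline at your actual access log instead of /tmp/access.log.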

Your robots.txt crawl-delay isn't broken

If your robots.txt has Crawl-delay: 10 and Googlebot is still crawling at full speed — nothing is wrong with your file. Google is reading your robots.txt correctly. It's just choosing to ignore that specific directive, as it always has.

Validate your robots.txt and check crawl-delay settings, AI bot coverage, and sitemap references.

Open robots.txt Validator →
