What does crawl-delay mean in robots.txt?

The crawl-delay directive in a robots.txt file specifies how long a search engine crawler or bot should wait between successive requests to your website. This delay throttles the crawling rate and helps prevent your site from being overwhelmed by rapid-fire requests from crawlers, which could otherwise slow down or even crash your server.

The crawl-delay value is specified in seconds and is typically used with a specific user-agent. Here’s an example of how to set a crawl-delay of 10 seconds for Googlebot:

User-agent: Googlebot
Crawl-delay: 10

In this example, the crawl-delay directive asks Googlebot to wait 10 seconds between requests to your site. Keep in mind that not all search engines interpret or follow the crawl-delay directive: Googlebot, in fact, ignores it and manages its crawl rate automatically (with crawl-rate controls available through Google Search Console), while other crawlers such as Bingbot do honor the value. Engines that ignore the directive rely on their own mechanisms for controlling crawl rate.
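
To make the behavior concrete, here is a minimal Python sketch of how a well-behaved crawler could read and honor the directive using the standard library's urllib.robotparser; the bot name, site, and URLs below are hypothetical placeholders, not part of the original example:

from urllib.robotparser import RobotFileParser
import time

BOT_NAME = "ExampleBot"                            # hypothetical user-agent
ROBOTS_URL = "https://www.example.com/robots.txt"  # placeholder site

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()

# crawl_delay() returns the Crawl-delay value that applies to this
# user-agent, or None if robots.txt does not set one.
delay = parser.crawl_delay(BOT_NAME) or 0

urls = ["https://www.example.com/", "https://www.example.com/about"]
for url in urls:
    if parser.can_fetch(BOT_NAME, url):
        print(f"Fetching {url}, then waiting {delay} seconds")
        time.sleep(delay)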

Be cautious when setting a crawl-delay: too high a value means your website is crawled less frequently, which can slow how quickly new and updated pages are indexed and hurt your site’s visibility in search results. In most cases, you should only use the crawl-delay directive if you’re experiencing server load issues caused by search engine crawlers.
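
As a rough back-of-the-envelope illustration (the numbers below are illustrative, not from the original article), a fixed delay caps how many URLs a single crawler can fetch in a day, because a day has only 86,400 seconds:

# Illustrative only: a fixed crawl-delay bounds daily requests at
# 86,400 seconds per day divided by the delay.
for delay_seconds in (1, 5, 10, 30):
    max_requests_per_day = 86_400 // delay_seconds
    print(f"Crawl-delay: {delay_seconds:>2}s -> at most {max_requests_per_day:,} requests per day")

At a 10-second delay, for example, a crawler tops out at 8,640 requests per day, which can meaningfully slow indexing on large sites.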

Published on: 2023-03-31
Updated on: 2023-03-31

Isaac Adams-Hands

Isaac Adams-Hands is the SEO Director at SEO North, a company that provides Search Engine Optimization services. As an SEO Professional, Isaac has considerable expertise in On-page SEO, Off-page SEO, and Technical SEO, which gives him a leg up against the competition.