Is the crawl-delay rule ignored by Googlebot? I am getting a warning message in Search Console. — asks Amit from Noida, India.
JOHN MUELLER: Today’s question comes from Noida in India.
Amit asks, is the crawl-delay rule ignored by Googlebot?
I’m getting a warning message in Search Console.
Search Console has a great tool to test robots.txt files, which is where we’d show this warning.
What does it mean?
And what do you need to do?
The crawl-delay directive for robots.txt files was introduced by other search engines in the early days.
The idea was that webmasters could specify how many seconds a crawler would wait between requests to help limit the load on a web server.
That’s not a bad idea overall. However, it turns out that servers are really quite dynamic, so sticking to a single fixed interval between requests doesn’t make much sense.
And since the value given there is the number of seconds between requests, it’s not that useful now that most servers can handle so much more traffic per second.
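For reference, the directive looks like this in a robots.txt file. The value of 10 here is just an illustrative number, meaning one request every ten seconds for crawlers that honor it:

```
User-agent: *
Crawl-delay: 10
```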
Instead of the crawl-delay directive, we decided to automatically adjust our crawling based on how your server reacts.
So if we see a server error, or we see that the server is getting slower, we’ll back off on our crawling.
Additionally, we give site owners a way to send us feedback on our crawling directly in Search Console.
So site owners can let us know about their preferred changes in crawling.
With that, if we see this directive in your robots.txt file, we’ll try to let you know that this is something that we don’t support.
Of course, if there are parts of your website that you don’t want to have crawled at all, letting us know about that in the robots.txt file is fine.
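As a rough sketch of how such rules are read by a parser, here is what Python’s standard-library robots.txt parser does with a hypothetical file that combines a supported Disallow rule with a Crawl-delay line. The paths and values are just placeholders; Googlebot would apply the Disallow rule but skip the Crawl-delay:

```python
import urllib.robotparser

# Hypothetical robots.txt combining a Disallow rule (which Googlebot
# supports) with a Crawl-delay line (which it ignores).
robots_txt = """\
User-agent: *
Crawl-delay: 10
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Crawlers that honor the directive would wait this many seconds
# between requests; Googlebot simply skips the line.
print(rp.crawl_delay("*"))                      # 10

# The Disallow rule keeps /private/ out of crawling for any crawler
# that respects robots.txt, including Googlebot.
print(rp.can_fetch("*", "/private/page.html"))  # False
print(rp.can_fetch("*", "/public/page.html"))   # True
```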
Check out the documentation in the links below.
We love sharing these videos with you.
So please like and comment if you’re enjoying them.
And don’t forget that you can subscribe to the channel for the latest video updates.