Dotbot

Frequently Asked Questions

  • Dotbot collects data for our Link Index, which is available through Link Explorer and our Links API. Rogerbot crawls your site for Moz Pro Site Crawl; if you have a Moz Pro Campaign set up, we'll crawl your site weekly.
  • Newly discovered links can be populated into our index in about 3 days after they're found. It may take longer for us to discover backlinks to your site depending on factors like the crawlability of the referring pages, as well as the quality of the links and the referring pages. It's a good idea to check whether we've indexed the referring page on which a link is found; if we haven't indexed the referring page yet, you won't see your link in our index. You can use Link Tracking Lists to help monitor the discovery of your backlinks. Read more about how we index the web.

What's Covered?

In this guide you'll learn more about our crawler, Dotbot, which crawls the web for the Moz Link Index that powers the Moz API, Link Explorer, and the Links section of your Campaign. For more information regarding our site audit crawler, Rogerbot, please see our Rogerbot guide.

Moz's Link Index Crawler

Dotbot is Moz's web crawler; it gathers web data for the Moz Link Index. The data collected by Dotbot is available in the Links section of your Moz Pro Campaign, Link Explorer, and the Moz Links API. Dotbot is different from Rogerbot, our site audit crawler for Moz Pro Campaigns.

Why Does Moz Crawl The Web?

Some of our tools, like Link Explorer, require us to crawl websites. When this happens, the user-agent Dotbot is used to identify our crawler. Keep in mind that you need a Moz Pro account to access most of the information gathered; members of our free online marketing community have limited access. To see an example of the type of data we collect, enter a URL in the search box for Link Explorer.

How to Block Dotbot From Crawling Your Site

Dotbot always respects the standard Robots Exclusion Protocol (aka robots.txt). If you don't want Dotbot crawling your site, all you need to do is add our user-agent string, along with the rules you'd like it to follow, to your robots.txt file.

Block Dotbot From Certain Areas of Your Site

User-agent: dotbot
Disallow: /admin/
Disallow: /scripts/
Disallow: /images/

Block Dotbot From Any Part of Your Site

User-agent: dotbot
Disallow: /
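
You can test rules like these locally before Dotbot next visits. Below is a minimal sketch using Python's standard urllib.robotparser; the site URLs are hypothetical placeholders.

from urllib.robotparser import RobotFileParser

# The example rules from above, blocking dotbot from a few directories.
rules = [
    "User-agent: dotbot",
    "Disallow: /admin/",
    "Disallow: /scripts/",
    "Disallow: /images/",
]

parser = RobotFileParser()
parser.parse(rules)

# Dotbot is kept out of the disallowed paths...
print(parser.can_fetch("dotbot", "https://example.com/admin/"))     # False
# ...but the rest of the site stays crawlable.
print(parser.can_fetch("dotbot", "https://example.com/blog/post"))  # True
# With "Disallow: /" instead, every path would return False.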

Slow Dotbot Down

User-agent: dotbot
Crawl-delay: 10
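
Crawl-delay is commonly interpreted as the minimum number of seconds to wait between successive requests, so the example above asks Dotbot to fetch at most one page every 10 seconds. As a rough sketch, here is how a crawler could read that value with Python's standard urllib.robotparser (crawl_delay support was added in Python 3.6):

from urllib.robotparser import RobotFileParser

# Parse the same two-line ruleset shown above.
parser = RobotFileParser()
parser.parse(["User-agent: dotbot", "Crawl-delay: 10"])

# crawl_delay() returns the delay for a matching user-agent, or None.
print(parser.crawl_delay("dotbot"))  # 10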

When will Dotbot see changes to my robots.txt file?

Dotbot only looks at your robots.txt file the first time it encounters your site during a new index crawl. That means that if Dotbot saw it was allowed on your site once, any changes to that permission won't be seen until the next index crawl, when we locate links to your site again.

