What is Meanpathbot?
Meanpathbot is Meanpath’s web crawling bot (sometimes also called a “spider”). Crawling is the process by which Meanpathbot discovers new and updated pages to be added to the Meanpath index.
We use a huge set of computers to fetch (or “crawl”) billions of pages on the web. Meanpathbot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.
Meanpathbot’s crawl process begins with a list of webpage URLs, generated from previous crawls and augmented with Sitemap data provided by webmasters. As Meanpathbot visits each of these websites, it detects links (SRC and HREF attributes) on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Meanpath index.
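The link-discovery step described above can be sketched with Python's standard-library HTML parser. This is an illustrative sketch, not Meanpath's actual implementation; the class and function names here are made up for the example:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects HREF and SRC attribute values from a page's tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def extract_links(html):
    """Return every HREF/SRC link found in the given HTML, in order."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

A crawler feeds each fetched page through a step like this, then appends the newly discovered URLs to its list of pages to crawl.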
What is Meanpath?
Meanpath is a new search engine that allows software developers to access detailed snapshots of millions of websites without having to run their own crawlers. Our clients use the information we gather from your site to help solve problems in a range of areas.
How Meanpathbot accesses your site
For most sites, Meanpathbot shouldn’t access your site more than once every few seconds on average. However, due to network delays, it’s possible that the rate will appear to be slightly higher over short periods. In general, Meanpathbot should download only one copy of each page at a time. If you see that Meanpathbot is downloading a page multiple times, it’s probably because the crawler was stopped and restarted.
Meanpathbot was designed to be distributed across several machines to improve performance and scale as the web grows. To cut down on bandwidth usage, we also run many crawlers on machines located in the network near the sites they’re indexing. Your logs may therefore show visits from several machines at Meanpathbot.com, all with the user-agent Meanpathbot. Our goal is to crawl as many pages from your site as we can on each visit without overwhelming your server’s bandwidth.
Blocking Meanpathbot from content on your site
It’s almost impossible to keep a web server secret by not publishing links to it. As soon as someone follows a link from your “secret” server to another web server, your “secret” URL may appear in the referrer tag and can be stored and published by the other web server in its referrer log. Similarly, the web has many outdated and broken links. Whenever someone publishes an incorrect link to your site or fails to update links to reflect changes in your server, Meanpathbot will try to download an incorrect link from your site.
If you want to prevent Meanpathbot from crawling content on your site, use a robots.txt file to block access to files and directories on your server.
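For example, a minimal robots.txt might look like this (the /private/ path is a placeholder for whatever you actually want to block; using Disallow: / instead would block the entire site):

```
User-agent: Meanpathbot
Disallow: /private/
```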
Once you’ve created your robots.txt file, there may be a small delay before Meanpathbot discovers your changes. If Meanpathbot is still crawling content you’ve blocked in robots.txt, check that the robots.txt file is in the correct location. It must be in the top directory of the server (e.g., www.myhost.com/robots.txt); placing the file in a subdirectory won’t have any effect.
If you just want to prevent the “file not found” error messages in your web server log, you can create an empty file named robots.txt. If you want to prevent Meanpathbot from following any links on a page of your site, you can use the nofollow meta tag. To prevent Meanpathbot from following an individual link, add the rel="nofollow" attribute to the link itself.
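As a sketch of those two forms, using the standard robots meta-tag and rel-attribute syntax (example.com is a placeholder URL):

```html
<!-- Page-level: ask crawlers not to follow any link on this page -->
<meta name="robots" content="nofollow">

<!-- Link-level: ask crawlers not to follow this one link -->
<a href="https://example.com/" rel="nofollow">Example link</a>
```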
Problems with spammers and other user-agents
The IP addresses used by Meanpathbot change from time to time. The quickest way to identify accesses by Meanpathbot is the user-agent string (Meanpathbot). However, because user-agent strings can be spoofed, you can verify that a bot accessing your server really is Meanpathbot by using a reverse DNS lookup on the requesting IP address and confirming that the resulting hostname resolves back to the same IP.
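A minimal Python sketch of that verification, assuming crawler hostnames end in meanpathbot.com (adjust the suffix if the machines you see resolve differently; the function names are illustrative):

```python
import socket

# Assumed crawler domain, based on the hostnames described above.
CRAWLER_DOMAIN = "meanpathbot.com"

def is_meanpathbot_host(hostname, domain=CRAWLER_DOMAIN):
    """True if a reverse-DNS hostname belongs to the crawler's domain."""
    hostname = hostname.rstrip(".").lower()
    return hostname == domain or hostname.endswith("." + domain)

def verify_meanpathbot(ip):
    """Reverse-resolve the IP, check the domain, then forward-confirm.

    The forward lookup guards against spoofed PTR records: the hostname
    must resolve back to the same IP address we started with.
    """
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)        # reverse DNS (PTR)
    except socket.herror:
        return False
    if not is_meanpathbot_host(hostname):
        return False
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # forward DNS (A)
    except socket.gaierror:
        return False
    return ip in forward_ips
```

This forward-confirmed reverse DNS check is the same technique major search engines recommend for verifying their own crawlers, since a spammer can fake a user-agent but cannot make your DNS resolver lie about their IP.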
Meanpathbot, like all reputable search engine bots, respects the directives in robots.txt, but some bad actors and spammers do not.