Email address for queries about the bot (if you are too busy to read the rest of the page): [email protected] (we respond very quickly!)

You may have reached this page by following a link left by StudyLibBot in your log files. Below are answers to some of the most frequently asked questions about StudyLibBot.

What is StudyLibBot doing on my site(s)?
In every case, a visit from this bot was triggered by a user of our service who requested that a specific document be downloaded.

What happens to crawled data?
The downloaded document is converted to Flash format and made either public or private by that user, depending on their chosen settings.

Why do you keep crawling this (or another) document multiple times?
A document cannot be accessed without an order from a real person. Multiple downloads can only be triggered by a user who has used the service tool multiple times.

You are crawling links with rel=nofollow
We do not follow any type of link on your site or any other site. The only way a document is downloaded is when a real person submits its link.

Why did you ignore the author rights stated in the footer of my site / the text of the document / the document link description?
We have no way to check the whole internet for copyrights reserved on a particular document, so anyone using our service is required to confirm beforehand that access may be granted. Access to your document was granted by the person who initiated this download. Additionally, we do not allow downloads of any document protected by any kind of authorisation or authentication; the document must be available to anyone via its link. If you were not aware that this document has a publicly available link, or you did not authorise this person to give the link to anybody, please contact us immediately by email: [email protected]

How can I block StudyLibBot?
StudyLibBot adheres to the robots.txt standard. If you want to prevent the bot from crawling your website, add the following text to your robots.txt:
User-agent: StudyLibBot
Disallow: /

Please do not waste your time trying to block the bot by IP address in .htaccess - we do not crawl from any consecutive IP blocks, so your efforts will be in vain. Also, please make sure the bot can actually retrieve robots.txt itself - if it cannot, it will assume (as is industry practice) that it is okay to crawl your site.
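
If you want to confirm that the rules you added actually block the bot, you can parse your live robots.txt and test a sample URL against the StudyLibBot user agent. A minimal Python sketch, assuming your site is at the placeholder domain example.com:

# Sketch: check that the live robots.txt disallows StudyLibBot.
# example.com is a placeholder; substitute your own domain.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://example.com/robots.txt")
parser.read()  # fetches and parses the live robots.txt

blocked = not parser.can_fetch("StudyLibBot", "https://example.com/some/page.html")
print("StudyLibBot is blocked" if blocked else "StudyLibBot is NOT blocked")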

If you have reason to believe that StudyLibBot did NOT obey your robots.txt rules, please let us know via email: [email protected]. Include the URL of your website and log entries showing the bot trying to retrieve pages it was not supposed to.
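
When collecting those log entries, it can help to extract just the requests made by the bot. A minimal Python sketch, assuming a combined-format access log at a hypothetical path (adjust the path to match your server):

# Sketch: print access-log lines that mention StudyLibBot so they can be
# attached to a report. The log path below is only an example.
LOG_PATH = "/var/log/apache2/access.log"

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        # In the combined log format the user agent is the last quoted field.
        if "StudyLibBot" in line:
            print(line.rstrip())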

A number of the reports we receive turn out to be false positives - the following can be a useful checklist when configuring a web server:

- Off-site redirects when requesting robots.txt. StudyLibBot follows redirects, but only on the same domain. Ideally, robots.txt should be available at "/robots.txt" as specified in the standard (see the sketch after this list).
- Multiple domains running on the same server. Modern web servers such as Apache can log accesses for a number of domains to one file, which can cause confusion when trying to work out which site was accessed at which point. You may wish to consider adding domain information to the access log, or splitting access logs on a per-domain basis.
- robots.txt out of sync with the developer copy. We have had complaints that StudyLibBot disobeyed robots.txt, only to find out that the developer was testing against a development server that was not in sync with the live version.
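
For the first item on the list, you can check where a request for /robots.txt actually ends up. A minimal Python sketch, again using the placeholder domain example.com:

# Sketch: detect whether /robots.txt redirects away from the original domain,
# or cannot be retrieved at all. example.com is a placeholder; substitute your own host.
from urllib.error import HTTPError
from urllib.parse import urlparse
from urllib.request import urlopen

START = "https://example.com/robots.txt"

try:
    with urlopen(START) as response:  # redirects are followed automatically
        original_host = urlparse(START).netloc
        final_host = urlparse(response.geturl()).netloc
        if final_host != original_host:
            print("robots.txt redirects off-site: %s -> %s" % (original_host, final_host))
        else:
            print("robots.txt served from %s with status %s" % (final_host, response.status))
except HTTPError as err:
    print("robots.txt is not retrievable: HTTP %s" % err.code)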

What are the current versions of StudyLibBot?

Current operating versions of StudyLibBot are:

v1.8.x series - most common: v1.8.5 (new as of April 2014) and v1.8.4 (to be phased out before the end of May 2014).

If the information above has not answered your question, feel free to contact us: [email protected]