Robots.txt generator tool - Index your website easily

Tool options: choose a default rule for all robots, set an optional crawl-delay, and enter a sitemap URL (leave blank if you don't have one). You can then allow or refuse individual search robots: Google, Google Mobile, Google Image, MSN Search, Yahoo, Yahoo MM, Yahoo Blogs, Ask/Teoma, GigaBlast, DMOZ Checker, Nutch, Alexa/Wayback, Baidu, Naver, and MSN PicSearch. Finally, list any restricted directories; each path is relative to the root and must end with a trailing slash "/". The tool then produces your generated robots.txt file.

What is the robots.txt file?

A bot is a computer program that interacts with websites and applications automatically. robots.txt is a file on a website that tells search engine crawlers which parts of the site a bot should not access. These bots "crawl" web pages and index their content so that it can appear in search engine results. A robots.txt file helps web crawlers organize their activity so that they neither overload the web server nor index pages that should not show up in search results.

Robots.txt is a plain-text file containing crawler-specific instructions. Although it has never been a formal standard, virtually all major search engines follow it.

The "language" of these files is believed to be the Robots txt syntax. Any plain text editor may be used to create a new robots text file. A robots.txt file will typically contain the following five terms:

User-agent - The specific crawler to which you are giving instructions (usually a search engine bot).

Disallow - Instructs the user-agent not to crawl a particular URL. Only one URL is allowed per "Disallow:" line.

Allow - (applies to the Google bot, or Googlebot): Tells Googlebot that it may access a page or sub-folder even though the parent folder is disallowed.

Crawl-delay - Specifies the number of seconds a bot should wait between requests before loading and crawling page content. (Googlebot does not honor this directive; its crawl rate can instead be set in Google Search Console.)

Sitemap - Specifies the location of the XML sitemap associated with this domain.
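
Putting these five directives together, a minimal robots.txt file might look like the following sketch; the domain and paths are placeholders, not recommendations:

    # Rules for every crawler that honors robots.txt
    User-agent: *
    # Keep bots out of the admin area...
    Disallow: /admin/
    # ...but allow this public sub-folder (honored by Googlebot)
    Allow: /admin/public/
    # Ask crawlers to wait 10 seconds between requests (ignored by Googlebot)
    Crawl-delay: 10

    # Location of the XML sitemap
    Sitemap: https://www.example.com/sitemap.xml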

How does the robots.txt file help with SEO?

  • If you don't want search engines to crawl your internal search results pages.
  • If you do not want search engines to index specific sections of your website or the entire domain.
  • If you do not want search engines to index particular files or images on your website.
  • If you wish to inform search engines about the location of your sitemap. (A sample file covering these cases follows this list.)
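
For instance, a file covering these cases might look like the sketch below; the paths, the query parameter, and the image name are illustrative only:

    # Keep internal search results pages out of crawls
    User-agent: *
    Disallow: /search/
    # Query-string search URLs; the * wildcard is honored by Google and Bing
    Disallow: /*?q=

    # Keep one image away from Google's image crawler
    User-agent: Googlebot-Image
    Disallow: /images/private-photo.png

    # Tell search engines where the sitemap lives
    Sitemap: https://www.example.com/sitemap.xml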

How does the robots.txt file work?

While a robots.txt file can give guidelines for bots, it cannot enforce them. Before visiting any other page on a domain, a good bot, such as a search engine crawler or a news feed bot, will fetch the robots.txt file and follow its instructions, skipping the parts of the site it is asked to avoid. A bad bot will either ignore the robots.txt file entirely or parse it precisely to find the prohibited URLs.
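
In practice, before crawling any page on a site, a well-behaved bot first requests the file at the site's root (for example, https://www.example.com/robots.txt, a placeholder domain) and obeys the group that best matches its user-agent:

    # Googlebot obeys only this group, because it matches most specifically
    User-agent: Googlebot
    Disallow: /no-google/

    # Every other compliant bot falls back to the wildcard group
    User-agent: *
    Disallow: /no-bots/

A bad bot simply skips this step: robots.txt is a convention, not an access control, so genuinely sensitive content still needs real authentication.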

Why is a robots.txt file required?

A robots.txt file controls which parts of a site crawlers may access. It is most useful when you want to:

  • Set crawl delays so that servers are not overloaded when crawlers fetch many resources at once (see the example after this list).
  • Keep entire sections of a website private.
  • Stop internal search results pages from appearing on a public SERP.
  • Indicate the location of a site's sitemap(s).
  • Prevent search engines from crawling specific files or categories of files on a website.
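
As a sketch of the crawl-delay and privacy points, per-bot groups let you slow one crawler down without blocking it (Bing's crawler recognizes Crawl-delay; the paths are placeholders):

    # Ask Bing's crawler to wait 10 seconds between requests
    User-agent: Bingbot
    Crawl-delay: 10

    # Keep an internal section out of all compliant crawls
    User-agent: *
    Disallow: /internal/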