
July 21, 2009

Controlling Search Engine Spiders for Improved Rankings


When it comes to getting your website listed at the top of the search engines' keyword rankings, it is essential for you to gain a deeper understanding of the search engine spiders that crawl your website. After all, it is the spiders that gather the content the search engines use to determine the relevance of your website and decide where your site will land on the search engine results page. Therefore, by learning how to control the direction of the spiders, you can give your website a much better chance of rising in the rankings.

Gaining Control with the Help of Robots.txt

You may think that gaining control of search engine spiders is an impossible task, but it is actually easier than you might think when you take advantage of a handy little tool called the robots.txt file. With the robots.txt file, you can give the spiders the direction they need to locate the most important pages on your website while preventing them from wasting time on the more obscure pages such as your About Us and Privacy Policy pages. After all, these pages won’t do much to increase your search engine ranking and won’t help your target market find your website, so why should the spiders waste their time exploring these pages when ranking your site?
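For example, assuming your About Us and Privacy Policy pages live at /about-us/ and /privacy-policy/ (hypothetical paths; adjust them to match your own site's structure), a minimal robots.txt sketch would be:

User-agent: *
Disallow: /about-us/
Disallow: /privacy-policy/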

Another positive aspect of using a robots.txt file is the fact that it prevents the spiders from indexing duplicate pages. This is beneficial because having duplicate content can actually reduce your search engine ranking. So, while you are making changes to your website or working on an area that isn't fully developed yet, you can instruct the spiders to leave those pages alone until you are ready for them to be crawled. The same is true if you have a blog on your website, as a blog post created in WordPress will show up on the main post page, on an archive page, on a category page and on a tag page. With the help of the robots.txt tool, you can instruct the spiders to look only at the main post page.
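If your WordPress blog uses the common /archives/, /category/ and /tag/ paths (an assumption; your permalink structure may differ), a sketch that keeps the spiders focused on the main post pages would be:

User-agent: *
Disallow: /archives/
Disallow: /category/
Disallow: /tag/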

With the help of your robots.txt file, you can tell the search engine spiders which pages they should and should not search through and index. It is important to keep in mind, however, that the robots.txt tool is meant to prevent search engine spiders from crawling certain pages. Therefore, you only need to list the pages you don't want the spiders to crawl.

Implementing the Robots.txt Tool

To successfully use the robots.txt tool, you first need to determine which pages you don’t want the spiders to search. Then, slowly begin making the changes to your site. By using the tool on only one or two pages at a time, you will be better capable of identifying mistakes that you may have made during the process.

To make your changes, you will need to add the robots.txt file to the root directory of your domain or of your subdomains; adding it to a subdirectory will not work. For example, you may add the robots.txt file at a URL such as http://domain.com/robots.txt or http://privacypolicy.domain.com/robots.txt, but placing it in a subdirectory such as http://www.domain.com/privacypolicy/robots.txt will not work. With just one robots.txt file in your root directory, you can manage your entire site. If you have subdomains, however, you will need a robots.txt file for each one you want to manage. You will also need separate robots.txt files for your secure (https) and nonsecure (http) pages.

Creating a Robots.txt File

Creating a robots.txt file is a relatively simple process: you only need to create a plain text file named robots.txt in any text editor, such as TextPad, Notepad or Apple TextEdit. Your robots.txt file only needs to contain two lines in order to be effective. If you wanted to stop the spiders from searching the archives of the blog on your site, for example, you would add the following to your robots.txt file:

User-agent: *
Disallow: /archives/

The “User-agent” line defines which search engine spiders the rule applies to. By placing the asterisk (*) here, you are instructing all search engine spiders to avoid the specified pages. You can, however, target a specific search engine spider by replacing the asterisk with one of the following names:

* Google – Googlebot

* Yahoo – Slurp

* Microsoft – msnbot

* Ask – Teoma
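To target just one of these spiders, name it on the “User-agent” line. For example, to keep only Yahoo's spider out of your blog archives while leaving all other spiders unrestricted, a minimal sketch would be:

User-agent: Slurp
Disallow: /archives/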

The “Disallow” line specifies which part of the site you want the spiders to ignore. So, if you want the spiders to ignore the categories portion of your blog, for example, you would replace “archives” with “category” and so on. If you want to instruct the spiders to ignore multiple sections, you simply add a new “Disallow” line for each area, as shown in the sketch after the next example. Just as you can name areas that you want the spiders to avoid, you can also list areas that you want particular spiders to view. For example, while you may want most spiders to avoid a specific area, you may want the MSN mediabot, the Google image bot or the Google AdSense bot to visit it. In this case, you can use the asterisk to instruct all search engines to avoid the area while instructing a specific spider to allow the same area. If you want Google's AdSense bot to access a folder, for example, you would create the following command:

User-agent: *
Disallow: /folder/

User-agent: Mediapartners-Google
Allow: /folder/
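In the same way, if you want every spider to skip several sections at once, you simply stack the “Disallow” lines under a single “User-agent” line. A minimal sketch, assuming /archives/ and /drafts/ are the sections you want ignored:

User-agent: *
Disallow: /archives/
Disallow: /drafts/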

You can also use your robots.txt files to prevent dynamic URLs from being indexed by the search engine spiders. You can accomplish this with the following template:

User-agent: *
Disallow: /*&

With this command, you are instructing the spiders to skip any URL that contains an & symbol, so only the clean version of the URL gets indexed. For example, if you had the following dynamic URLs:

* /greatcars/details.php?propcode=ANCHORS&SRCH=tr

* /greatcars/details.php?propcode=ANCHORS&vr=1

* /greatcars/details.php?propcode=ANCHORS

Your robots.txt instructions will tell the spiders to list only the third example, because the rule disallows any URL that contains the & symbol. You can use the same strategy to block any URLs containing a question mark by using the following:

User-agent: *
Disallow: /*?

Or, you can block all directories that contain a specific word in the URL. For example, you might create a robots.txt file such as the following:

User-agent: *
Disallow: /*corvette*/

With this command, any page whose URL contains the word “corvette” in a directory name will not be crawled by the spiders (note that robots.txt pattern matching is case sensitive, so use the lowercase form that actually appears in your URLs). It is important to use caution when using these directives, however, as they will cause the spiders to avoid all pages containing the word you specify. As a result, you may accidentally block pages that you do want to be indexed. If you want to block all but one or two pages whose URLs contain a specific word, you can create a robots.txt file that specifically allows the pages you still want to be indexed. In this case, your robots.txt file would look something like this:

User-agent: *
Disallow: /*corvette*/
Allow: /greatcars/corvettesandvipers/details.html

It is also possible for you to instruct the spiders to avoid an entire folder on your website while still allowing it to access specific pages within that folder. To do this, you would write something like:

User-agent: *
Disallow: /category/
Allow: /category/just-this-page.html

It is important to note that the search engine spiders will ignore general directives if you have one that addresses a specific spider. For example, if you create the following robots.txt:

User-agent: *
Disallow: /category/

User-agent: Googlebot
Disallow: /archives/

The Google spider will still index the category page because you listed a directive that was specific to the Googlebot, which overrides the directive that addresses all search engine spiders. So, if you list a specific spider in your robots.txt, you need to individually list all of the things you want that spider to avoid. In this example, you would have to create the following robots.txt file to get Google to avoid the category and archives sections while telling all other spiders to avoid the category section:

User-agent: *
Disallow: /category/

User-agent: Googlebot
Disallow: /archives/
Disallow: /category/

If you want the spiders to avoid indexing certain types of files, you will need to use the dollar sign symbol, which marks the end of a URL pattern. To instruct the spiders to avoid PDF files, for example, you would use the following:

User-agent: *
Disallow: /*.pdf$

You would use the same pattern for other types of files you may want the spiders to avoid, such as .gif$, .jpg$ or .jpeg$.
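For instance, a minimal sketch that keeps all spiders away from the common image formats would read:

User-agent: *
Disallow: /*.gif$
Disallow: /*.jpg$
Disallow: /*.jpeg$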

Addressing Other Search Engine Concerns

In addition to blocking certain pages from being indexed by the search engines, there are a number of other concerns you can address with the robots.txt tool. For example, if the search engine spiders are downloading your pages too quickly and causing difficulties with your server, you can add a crawl-delay directive to your file that tells the spiders how many seconds to wait between downloads. In general, it is best to set this directive low, somewhere between 0.5 and 1, and then increase it later if necessary. This robots.txt file would look something like:

User-agent: *
Crawl-delay: 0.5

Google does not follow the crawl-delay directive, however, and it generally isn't necessary to add this directive to your robots.txt file.

Another handy aspect of the robots.txt file is that it can point the search engines to your XML sitemap. To create this path, simply add a line such as:

Sitemap: http://www.yoursitename.com/sitemap.xml

By using your robots.txt file in this way, you can submit your XML sitemap to search engines without registering with a variety of different Webmaster Tools programs. You can also store your XML sitemap anywhere you like with this tool, which can be helpful if you manage several sites and want to keep all of your XML sitemaps in one place.
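For instance, if you keep the sitemaps for several sites on one separate domain (the URL here is purely hypothetical), the line might read:

Sitemap: http://sitemaps.example.com/yoursitename-sitemap.xml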

Finally, it is important to realize that it is still possible for a search engine to index pages you have blocked in your robots.txt file. There are a number of reasons why this may happen; for example, if someone has created a link to the page, it can still get crawled through that link. To close this opening, you will need to unblock the page in your robots.txt file and place a meta noindex tag on the page; once the spiders have seen the tag, you can add the page back to your robots.txt file.
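The meta noindex tag itself is a single line placed in the <head> section of the page in question, something like:

<meta name="robots" content="noindex">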


Eric Johnson – please visit us for any kind of SEO help

10 Responses to “Controlling Search Engine Spiders for Improved Rankings”

    TuVinhSoft says:

    Hi Eric,

    This is a truly useful article for all who are doing SEO and SEM.

    Thanks for sharing.

    Joy Antony says:

    Great post, very useful. Thanks.

    Randall Drake says:

    Thanks for the helpful info. I am just getting my site up and running, and this has given me a better understanding of all the SEO that I need to do for my site.

    Thanks for sharing this informative post for us bloggers. I will try to create one.

    Buzzed says:

    Very informative post. Helpful and already making use of what I learned.

    THX

    William says:

    Your article is about 2 years out of date.

    Very useful article. I resolved my issue after using this information.

    Jeff says:

    Thank you for the information. Some of it is very interesting, and it's good to see stress put on controlling your robots.txt. I have to disagree with a couple of things which are very important. First, spiders don't decide relevance; they crawl, and the pages are indexed, where the ranking then takes place. Also, such pages as the privacy policy and about us are now vital to have indexed, as Google sees them as aspects which help a site to be deemed trustworthy. Trustworthiness, especially on ecomms, is a strong Google signal. It will be good to get any other such aspects going on the site which will increase user confidence. Cheers
