
October 1, 2009

The Tricky Issue Of Duplicate Content & What Google Says About It

Being a full-time online marketer means you have to keep a close watch on how Google ranks pages on the web. One very serious concern is the whole issue of duplicate content. More importantly, how does having duplicate content on your own site, and on other people’s sites, affect your keyword rankings in Google and the other search engines?

Now, recently it seems that Google is much more open about just how it ranks content. I say “seems” because with Google there are years and years of mistrust over how it treats content and webmasters. Google’s whole “do as I say” attitude leaves a bitter taste in most webmasters’ mouths. So much so that many have had more than enough of Google’s attitude and ignore what Google and its pundits say altogether.

This is probably very emotionally fulfilling, but is it the right route or attitude to take? Probably not!

Mainly because, regardless of whether you love or hate Google, there’s no denying it is the king of online search; you must play by its rules or leave a lot of serious online revenue on the table. For my major keyword content/pages, even a loss of just a few places in the rankings can mean I lose hundreds of dollars in daily commissions, so anything affecting my rankings obviously gets my immediate attention.

So the whole tricky issue of duplicate content has caused me some concern, and I have made an ongoing mental note to find out everything I can about it. I am mainly worried about my content being ranked lower because the search engines think it is duplicate content and penalize it.

My situation is compounded by the fact that I am heavily into article marketing – the same articles are featured on hundreds, sometimes thousands, of sites across the web. Naturally, I am worried these articles will dilute or lower my rankings rather than accomplish their intended purpose of earning higher rankings. I try to vary the anchor text/keyword link in the resource boxes of these articles. I don’t use the same keyword phrase over and over again, as I am nearly 99% positive Google has a “keyword use” quota – repeat the same keyword phrase too often and your highly linked content will be lowered around 50 or 60 places, basically taking it out of the search results. Been there, done that!

I even like submitting unique articles to certain popular sites so only that site has the article, thus eliminating the whole duplicate content issue. This also makes for a great SEO strategy, especially for beginning online marketers: your own site will take some time to reach a PR6 or PR7, but you can place your content and links on high PR7 or PR8 authority sites immediately. This will bring in quality traffic and help your own site get established.

Another way I combat this issue is by using a 301 redirect so that traffic and PageRank flow to the URL I want ranked. You can also use your Google Webmaster Tools account to specify which version of your site you want ranked or featured: with or without the www.
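For readers who want to try this, here is a minimal sketch of such a 301 redirect, assuming an Apache server with mod_rewrite enabled and using a hypothetical example.com domain:

```apache
# Hypothetical .htaccess sketch (assumes Apache with mod_rewrite enabled).
# Permanently (301) redirects the non-www version of the site to the www
# version, so traffic and PageRank consolidate on a single URL.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The R=301 flag is what signals a permanent move to the search engines; a plain redirect (302) would not pass the ranking signal the same way.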

The whole reason for doing any of this has to do with PageRank juice – you want to pass along this ranking juice to the appropriate page or content. This can raise your rankings, especially in Google.

Thankfully, there is the relatively new “canonical tag” you can use to tell the search engines this is the page/content you want featured or ranked. Just add this meta link tag to your content which you want ranked or featured, as in the example given below:
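As a sketch, the canonical link element looks like the following; it goes in the &lt;head&gt; of the duplicate (non-canonical) page, pointing at the preferred URL (the URL here is a hypothetical example):

```html
<!-- Placed in the <head> of the duplicate (non-canonical) page. -->
<!-- The href is a hypothetical example URL standing in for your preferred page. -->
<link rel="canonical" href="http://www.example.com/preferred-page/" />
```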

Anyway, this whole duplicate content issue has many faces and sides, so I like going directly to Google for my information. Experience has shown me that Google doesn’t always give you the full monty, but for the most part you can follow what they say. Over the last year or so, Google seems to have made a major policy change and is telling webmasters a lot more about how it ranks its index.

So if you’re concerned or interested in finding out more about duplicate content and what Google says about it, try these helpful links. The first is a very informative video on the subject entitled “Duplicate Content & Multiple Site Issues,” presented by Greg Grothaus, who works for Google.

Another great link is this page from Google Webmasters Support Answers by Matt Cutts. It has a lot of helpful information, including a video on the Canonical Link Element. It’s located here:

In yet another post, Matt Cutts discusses the related issue of content scraping and advises webmasters not to worry about it. This is a slightly different matter: other webmasters and unmentionables may use software to scrape your site and place your content on their own. This has happened to me countless times, including times when my content was reduced to scrambled nonsense. Cutts says not to worry about this, as Google can usually tell the original source of the material. In fact, having links in this duplicate content may even help your rankings in Google.

“There are some people who really hate scrapers and try to crack down on them and try to get every single one deleted or kicked off their web host,” says Cutts. “I tend to be the sort of person who doesn’t really worry about it, because the vast, vast, vast majority of the time, it’s going to be you that comes up, not the scraper. If the guy is scraping and scrapes the content that has a link to you, he’s linking to you, so worst case, it won’t hurt, but in some weird cases, it might actually help a little bit.”

As a full-time online marketer I am not so easily convinced. My main concern is unscrupulous competitors using these scrapings and duplicate content to undermine my rankings in Google by triggering some keyword spam filter. Whether this actually happens, only Google knows for sure, but it is just another indication that, despite the very detailed and helpful information given above, duplicate content and the issues surrounding it will still present serious concerns for online marketers and webmasters in the future.

The author is a full-time online marketer who has numerous websites. For the latest web marketing tools try: Internet Marketing Tools If you liked the article above, why not try this Free 7 Day Marketing Course here: Marketing Tools Copyright 2009 Titus Hoskins. This article may be freely distributed if this resource box stays attached.

13 Responses to “The Tricky Issue Of Duplicate Content & What Google Says About It”

    Zub-Online Directory says:

    Very informative. Thank you

    I have often wondered what it might do to a blog ranking if you import your blogs into sites such as RedGage, since that would seem to duplicate one’s content on the Internet.

    See Demystifying the “duplicate content penalty” at

    ‘Duplicate content. There’s just something about it. We keep writing about it, and people keep asking about it. In particular, I still hear a lot of webmasters worrying about whether they may have a “duplicate content penalty.”

    ‘Let’s put this to bed once and for all, folks: There’s no such thing as a “duplicate content penalty.” At least, not in the way most people mean when they say that.’

    Someone scraped our website and Google dropped us down 20 pages.
    Google doesn’t always source the original content.

    I think the duplicate content penalty is a myth. I heard that Google looks at the date an article is published and any duplicate of that article published afterwards will be penalized… Not true at all!

    Ricard Menor says:

    Mine is a complicated case. As a Catalan/Spanish SEO consultant running web promotion projects in both languages, the websites I work with in Catalonia (focused on Barcelona and Girona) normally come in both, and you have to know that despite being two different languages they still share some similar or identical words, especially when mixed with English acronyms (like SEO). One of the things I do is tailor one index for Spanish (es-ES) and another for Catalan (ca); sometimes this means different attribute values within tags, but “very similar” to “same spelling” words.

    This issue concerned me for a long time. I must admit it was not penalizing my client websites, but that changed between June and July ’09.
    Now I have my first casualty: a site whose Spanish side ranked poorly in the (Google) SERPs because the Catalan index was designed mainly to meet geographically defined targets (the province of Barcelona). Guess what? The English versions were never treated as duplicates 🙂 Anyone asking about the end of the story? Of course: fresh copywriting and a re-SEO, issue solved…

    Professor says:

    I am glad to see that you have separated the duplicate content issue into its two distinct parts: 1) content on your own website, and 2) content on other websites. However, I see that you have made an error in describing how to use the canonical tag: ” …add this meta link tag to your content which you want ranked or featured …”. This is backwards. If you read Matt Cutts’ article that you reference above, he clearly states: “… users can specify a canonical page to search engines by adding a <link> element with the attribute rel=”canonical” to the <head> section of the NON-CANONICAL version of the page …” [ emphasis mine ].

    Alberton says:

    One of my client’s sites was scraped and it never affected PR at all. The duplicate site was up for about 4 months and after about the first month got PR too.

    I don’t think it’s really possible for Google to find every bit of duplicate content anyhow.

    Quote: “I try to vary the anchor text/keyword link in the resource boxes of these articles. I don’t use the same keyword phrase over and over again, as I am nearly 99% positive Google has a “keyword use” quota – repeat the same keyword phrase too often and your highly linked content will be lowered around 50 or 60 places, basically taking it out of the search results.”

    I would venture to suggest from my experience that the “keyword use” quota would apply in cases of keyword stuffing – where there is no attempt to include relevant long tail keyword phrases that are semantically linked to the primary keyword.

    I think we have to recognize the “intelligence” of Google’s bots and their capacity to discern, through latent semantic indexing (meaning relationships), whether frequency of use of a keyword constitutes “stuffing” or focus. I don’t think it is just a numbers test.

    Then again, I do vary my anchor text, article titles, etc. when I am trying to rank better for a related keyword.

    Snozzle says:

    I was under the impression that Google only took into account the first time the article is published. Duplication after this isn’t relevant.

    Chris Tucker says:

    Well, I have a WordPress blog and a Google (Blogspot) blog.
    I was fiddling around with the WordPress blog and found the Import Tool.
    So, I imported all the Blogspot posts into WordPress.
    The WordPress blog is an older blog than the Blogspot blog, with hundreds of posts written well before I imported the Blogspot posts into it.
    Hopefully, Google won’t hurt me, since the content is relevant?

    I often wonder if ranking is the most important element in SEO. Honestly, I think that first of all a company should look inside its own website, eliminating any redirect or metadata issues.
    Trying to understand Google’s logic is not simple, but it’s worth the effort.
