
January 1, 2012

Establishing Ownership of Your Content – The Rules are Changing — A SPN Exclusive Article


I was sketching out two marketing plans over the holidays for a couple of new clients and decided it was time to incorporate some of the research data and results I’ve collected during the latter part of 2011. Generally I’d spend more time testing things on my own sites first, but I’m confident enough in the results of basic testing that I’ve decided to put the ideas into live production.

There are two basic interrelated concepts I’ve been working on – content length, and establishing ownership of new content in a way that minimizes the chance of your content being flagged as duplicate and improves your page authority and SERP positions.

The web is all about content; it’s basically one large article directory. The task for a search engine is to provide an efficient indexing system so we can connect with the information we’re looking for in the fewest possible steps.

In the old days, when we bought our ‘Encyclopedia Britannica’, we’d flip to the front to find a broad index of content, then flip to the back to try and find a specific piece of content. It was and still is a pain trying to find something specific in a large hardcopy publication.

Obviously, search engines automate that task pretty well on the web by recording billions of documents and serving up the most relevant to our needs in a few milliseconds.

However, Google has taken it all a few steps further. With the advent of its PageRank algorithm a few years back, Google demonstrated its capacity for collecting multiple sources of information and building actionable data profiles. Google has since added to its profile arsenal by recording the specific surfing habits of its users and the websites on which they land. By combining the personal data it records about us with the data taken from a website (via analytics, or simply from standard Google searches), Google can now match us with content deemed even more relevant to our needs.
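
For reference, the original PageRank paper by Brin and Page defines a page’s score recursively: every page shares its own score among the pages it links out to. Where T1…Tn are the pages linking to page A, C(T) is the number of links going out from T, and d is a damping factor (typically around 0.85), the paper gives:

    PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + ... + PR(Tn)/C(Tn) )

The detail worth noticing is that nothing in that formula says anything about who wrote the content; PageRank measures links, not authorship, which is why the ownership problem discussed below needed new signals.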

So Google has become a very intelligent content indexing system, delivering more and more ‘personalized’ results based on our surfing habits, our demographic and the performance of the websites to which we are referred.

Duplicate Content

It is no secret to any webmaster that one of the main technology hurdles for Google is duplicate content. But why should Google care about duplication if it’s large enough and fast enough to index pretty much everything on the web? Well, actually it isn’t (large enough or fast enough). And therein lies the problem.

Google needs to know the source of published content. As the author of a piece of content, I should have precedence over everyone else who publishes it. Google needs to know who owns the content so it can give preference and prominence to the source, and not to someone who has merely replicated it for their own self-interest or gain. It’s one of the most critical yardsticks Google has to judge us by. If the algorithm gets the source wrong, all other measurements will produce a false or negatively weighted outcome. Google can’t reward quality content fairly if it doesn’t know who authored it.
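
One concrete, public step in this direction arrived in mid-2011, when Google announced support for authorship markup: a writer can tie articles to a single author profile with the rel="author" attribute, and tie that profile back to their own pages with rel="me". A minimal sketch of the idea (the profile URL here is a placeholder, not a real account):

    <!-- In the article byline: link the author's name to their profile page -->
    <a href="https://plus.google.com/000000000000000000000" rel="author">Carl Hruza</a>

    <!-- On the author's own site: link back to the same profile -->
    <a href="https://plus.google.com/000000000000000000000" rel="me">My Google profile</a>

Markup like this doesn’t prove authorship, but a verifiable two-way link between a body of content and one identity is exactly the kind of signal the source problem needs.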

Unsurprisingly, this isn’t something we hear Google making a big deal about. Why? Because it doesn’t have, and never will have, a perfect working solution. But it’s clear from some of the algorithm and policy changes during 2011 that Google is working hard to improve its chances of determining the true source of content.

The first in a series of new steps was for Google to make a basic assumption about Article Directories. Article Directories contain a lot of content and fared well under the old system of ranking. We all know by now that some of the key directories, EzineArticles for example, have taken a major hit under Google’s new system of ranking. In a certain sense the hit has been more about sending a message than about ridding the web of duplicate content. In a way, Google has behaved like a newly elected government: when you’re trying to introduce a new way of thinking, it sometimes helps to make a few high-profile personnel changes. So Google has basically announced to the world that duplicate content is on its radar – learn the new rules or face the axe.

When you look closely at the results of Panda, it’s fairly easy to work backwards and reverse-engineer the thought processes involved. Article Directories contain primarily duplicate content, but not entirely, so Google must have factored other information into its decision to devalue them. If you look at the whole scenario, it gives you valuable clues as to where things are headed. There are two clear problems with Article Directories and the type of content they provide a home for:

1 – Duplication. Clearly, people create content, often for their own sites, then use multiple Article Directories to re-publish that same content, whether to gain backlinks, attract direct traffic or appeal to niche re-publishers of content (syndicators). Either way the content gets duplicated, and the Article Directories are the catalyst for making it happen (markup already exists that would let a re-publisher defer to the source; see the sketch below).

When you look at everything else contained in an AD (all non-duplicate content), you see the second problem –

2 – Poor-quality content. When you search an Article Directory for something unique, what you’ll often find is something that doesn’t read too well. In many cases that’s because it has been mechanically spun from previous content. So in terms of value to the searcher, it’s even less useful than the original, which has already been tagged as a dupe.

So clearly the Article Directories, and the way in which they operate, are not going to garner sympathy from Google, which has taken on the task of improving the quality of the web.
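
It’s worth noting that the machinery for a re-publisher to defer to the original already exists. Google, for one, has honored a cross-domain rel="canonical" hint since late 2009, and a robots meta tag can keep a syndicated copy out of the index altogether. A minimal sketch of what a well-behaved re-publisher could put in the head of the copied page (the URL is a placeholder):

    <!-- Tell search engines which URL is the original source of this article -->
    <link rel="canonical" href="http://www.example.com/original-article.html" />

    <!-- Or, more bluntly, keep this copy out of the index entirely -->
    <meta name="robots" content="noindex, follow" />

The fact that Article Directories rarely do either of these goes some way toward explaining why they took the hit first.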

So where does this leave us with regard to content publishing? What are the rules, and how do we play the game?

Google can’t announce the new rules yet, because they haven’t finished writing them. In a way Google is just like an intelligent marketer trying to optimize his own business. Google makes changes, tests the results, realigns its approach based on gathered data, then tests again. To stay at the top of its game, this process is perpetual – it has to be.

How does that affect you, or how will it? First off, you must not hide behind ‘well, it’s worked for me for the last five years so it must be OK’, or stick your head in the sand and do nothing. iFrame cloaking, IP cloaking/switching, XRumer backlinking and the like all worked for a while. These were strategies that worked and have since been marginalized (or are far along that path) by the Google team. So you need to take a look at your approach to publishing content. Even if you don’t use Article Directories or don’t provide a mechanism for people to republish your content, the new rules are still going to affect you. The good news is that if you’re smart enough, some good opportunities will start to appear.

There’s a new system of ranking search results being worked out right now, one that combines Site Authority and PageRank with the newly collected data Google has at its disposal.

So how exactly does it work?

I’ll go into detail on how you can structure your content to achieve what I term a ‘High Google Credit Score’ in part 2 of this article, to be published soon. Or visit my website at webdesigndoorcounty.com/spn.html and request part 2 via email.


As an author and business owner with almost 14 years’ experience in the field of Internet Marketing, Carl Hruza has developed a number of successful web-based enterprises, and now makes his living by training other entrepreneurs to do the same. Learn more about the author at webdesigndoorcounty.com/spn.html

8 Responses to “Establishing Ownership of Your Content – The Rules are Changing — A SPN Exclusive Article”

    portugal property says:

    After reading your article, I posted the following suggestion on the Google Forum, I am not sure whether it’s valid or will even be read.

    Dear Google and Forum Members,

    I recently read that Google and other search engines have difficulty identifying the original source of content, i.e. the writer.

    If this is the case, then the following suggestion may go some way toward overcoming it.

    Similar to the copyright symbol, if the original writer puts a Google, Yahoo, Bing, etc., or universally accepted stamp on their original content, it can easily be found by the search engines.

    If the original writer then puts the Google, Yahoo, Bing, etc., duplicate-content symbol on their syndicated content, it too can be found by the search engines.

    Therefore, I would suggest the following:
    For original content on Google the symbol would be (G) and for syndicated/duplicated (G2) or (Gc)
    For Yahoo (Y) and (Y2) or (Yc)
    Universally accepted stamp or web standard (U) or (Uc), (U2 could probably not be used due to copyright held by the band)

    There may also be a design that includes a date stamp, such as (G02/01/12), which would help by proving the date the original article/information was posted.

    Please forgive my crude symbols; they can be made to look much nicer and set within a circle rather than brackets.

    Thank you for taking the time to read this and for replying, if you choose to. (Gc02/01/12).

    PS. The original comment on the Google forum was stamped as (G02/01/12)

    Criss says:

    In my experience the algorithm Google uses to detect dupes is pretty inaccurate, at least for my content.

    When I search for unique words contained in articles I publish, I often see scraped, partial content ranking ahead of my site.

    I’ve read this is quite common.

    Since this issue moves money, and since this money pays workers’ bills, just the idea of changing the rules to ‘see what happens’ looks like madness to me. What’s required, at least in my opinion, is 1) thorough testing, 2) a very close eye from public authorities, 3) transparency, and 4) a helpdesk system to quickly fix problems.

    That said, I’m very interested in reading part 2 of the article and learning what these rules will be like.

    I think we need a technological solution that allows original content to be easily told from dupes.

    Please note that some transparency would help innocent sites recover from penalties! A simple alert in Google’s Webmaster Tools saying ‘your content looks like a dupe’ would let site owners quickly recognize and fix wrong duplicate-content calls…

    Can’t wait. Always keen to find out what Big G is planning. Nice article.

    HR Crest says:

    Very informative article. It did set me thinking about my own content. Thanks for sharing the important information with us.

    Carl Hruza says:

    Thanks for the feedback!
    I’ve had a really nice response from people for whom the article seemed to strike a chord. I’m putting the finishing touches to Part II, and hopefully SiteProNews will be kind enough to publish it in a few days.
    The good news is that I’m really excited about what’s happening with Google and its new policies on rating content. I think there’s a fantastic opportunity for anyone who genuinely wants to create information worthy of being read by others. Part II will have some nice info for you web publishers and hopefully give you a head start on your competitors :)
    Many thanks to all who commented via email, over 100 great comments received today.
    Best wishes and the best of fortune to everyone in 2012 and beyond, and thanks again to the staff at SPN.

    Cheers

    Carl Hruza

    Tom Aikins says:

    Thanks for the info. I just read part 2 as well and got a lot more ideas on how to combat this problem.

    Sheila Fisher says:

    Thank you for writing this. My website is fairly new and I’ve been working to create links. I have just started thinking about writing and publishing articles. Do you think writing articles for article directories is still a worthwhile way to get traffic and backlinks, or is this strategy becoming obsolete?

    Carl H says:

    Hello Sheila,
    It really is not worth your effort any longer. If your site is in a really uncompetitive niche then you might pick up a little benefit from the link, but it will basically be coming from a PR0 page. There are easier ways and better places to find backlinks, and much better uses for the content you have written. I hope to have my next article, on ‘Article Directory Marketing’, finished pretty soon, so watch out for that!

    Cheers

    Carl

