
December 27, 2010

Search Engine Algorithms: Their Evolution

Heavy Reliance on On-Page Factors – Webmaster-Supplied Information

In the early days of search engine algorithms, rankings relied heavily on webmaster-supplied information. Webmasters submitted their site URLs to the search engines, which sent crawlers to take a snapshot of each website for inclusion in the database on their servers. This snapshot formed the basis for extracting relevant ranking information, primarily:

– Keyword meta tags, which listed the webmaster's targeted keywords in the HTML source of the website.

– Keyword density on the website, which was another major consideration.
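To make the keyword density factor concrete, here is a minimal sketch of how such a metric could be computed: the keyword's occurrences as a share of all words on the page. This is an illustration, not the formula any particular engine used.

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Return the keyword's share of total words, as a percentage."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return 100.0 * hits / len(words)
```

For example, `keyword_density("cheap flights to cheap hotels", "cheap")` returns 40.0, since the keyword makes up two of the five words. A page stuffed with the same keyword would score far higher, which is exactly why this factor proved so easy to abuse.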

The idea of webmasters optimizing for the search engines to improve their rankings first arose around the mid-1990s, when webmasters began to see high search rankings as a funnel for traffic to their websites, with the ultimate aim of making sales. It did not take long before this initial basis for search engine algorithms was abused by unscrupulous webmasters, who made many searches return irrelevant results through various black hat techniques such as keyword stuffing, inserting meta tags irrelevant to the actual page content, cloaking and so on. Highly irrelevant search results would have spelled doom for the search engines, as users would have turned to alternative sources. The search engines therefore had no choice but to devise other, more complex and more effective factors as the basis for their algorithms.

Reliance on On-Page and Off-Page Factors

In response to the demands of the time, Larry Page and Sergey Brin, graduate students at Stanford University and later founders of Google, developed a search engine that relied on a mathematical algorithm to rate the prominence of web pages. They reasoned that the prominence of a web page depended on the quantity and quality of the links from other sites pointing to it.

They considered it more likely that a random web surfer browsing the net would reach a higher-PageRanked web page, via its various backlinks on other sites, than a lower-PageRanked one.

With Google's formation in 1998, this became the basis of the new search engine algorithms. Together with other off-page factors such as hyperlink analysis, and on-page factors such as keyword frequency, meta tags, headings, internal links and site structure, PageRank formed a major component of the now more complex algorithms, helping Google and the other search engines avoid the kind of manipulation that had plagued rankings based on on-page factors alone.

Webmasters, ever alert to ways of gaming the system, soon set up websites in their thousands with a view to exchanging, buying and selling links and transferring PageRank. All sorts of link-spamming techniques evolved.

Increased Number of On-Page and Off-Page Algorithm Components

By 2005, Google had increased the number and complexity of the components in its ranking calculations; the number is said to exceed 200. It also introduced the nofollow attribute and announced a campaign against paid links that transfer PageRank.
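The nofollow attribute is a hint on an anchor tag telling search engines not to pass PageRank through that link. As an illustration of how a crawler might honor it, here is a minimal sketch using only Python's standard-library HTML parser; the class name and sample markup are invented for the example.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect hrefs from <a> tags, skipping rel="nofollow" links,
    which search engines exclude when computing PageRank."""
    def __init__(self):
        super().__init__()
        self.followed = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attr_map = dict(attrs)
        rel_values = (attr_map.get("rel") or "").lower().split()
        if "nofollow" not in rel_values and "href" in attr_map:
            self.followed.append(attr_map["href"])

doc = '<a href="/about">About</a> <a rel="nofollow" href="http://ads.example">Ad</a>'
parser = LinkCollector()
parser.feed(doc)
# parser.followed now holds only the link without rel="nofollow"
```

After feeding the sample markup, only `/about` is kept; the paid link marked nofollow is ignored, which is precisely the behavior the attribute was designed to signal.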

Introduction of Google Instant and Other Measures

By 2009, Google had begun using the web search history of its users to shape search results. Real-time search was introduced in late 2009 in an attempt to make results more timely and relevant. With the growing popularity of social media sites and blogs, the leading search engines changed their algorithms to allow fresh content to rank quickly within the results. This new approach to search places importance on current, fresh and unique content.

Dele Ojewumi is an Internet Marketer, Chartered Accountant and Economist.