New Google Ranking Algorithm
Personally, this is how I would weight the factors (this is 100% my personal opinion, not necessarily backed by others or by concrete data and tests, apart from my own subjective observations):
Trust / domain authority: 35%
Among the factors that determine domain authority, I would mention: the number and quality of inbound links, the topical coverage of the linking sources, IP and Class C distribution (the number of distinct Class C ranges relative to the total number of linking IP addresses), and the number of IP addresses relative to the number of domains sharing them.
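The Class C distribution idea can be sketched as a simple ratio. This is a minimal illustration, not Google's actual computation; the function name `class_c_ratio` and the input format (a list of IPv4 strings) are my own assumptions.

```python
def class_c_ratio(ips):
    """Ratio of distinct Class C (/24) ranges to total linking IPs.

    Near 1.0: links come from many different networks (more natural).
    Near 0.0: links cluster in a few ranges (possibly one link network).
    """
    if not ips:
        return 0.0
    # The Class C range of an IPv4 address is its first three octets.
    class_cs = {ip.rsplit(".", 1)[0] for ip in ips}
    return len(class_cs) / len(ips)

# 4 linking IPs, but 3 of them sit in the same 192.0.2.x range.
print(class_c_ratio(["192.0.2.1", "192.0.2.7", "192.0.2.9", "198.51.100.4"]))  # 0.5
```

A diverse link profile scores close to 1.0, which is why hosting all your "supporting" sites on one server tends to hurt rather than help.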
Site-specific link popularity: 20%
Here, I would say we have to examine both the quantity and the quality of the links.
Anchor text of external links to the page: 15%
This varies depending on the quality of the links and their convergence (the percentage of links whose anchor text is the same as, similar to, a subset of, or a superset of the target phrase, relative to the total number of linking phrases; greater convergence means Google can trust more that the page really is about that particular keyword).
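The convergence percentage described above could be computed roughly like this. This is a sketch under my own simplifying assumptions: "similar" is reduced to word-set subset/superset matching, and `anchor_convergence` is a hypothetical name.

```python
def anchor_convergence(anchors, keyword):
    """Fraction of anchor texts that equal the keyword, or whose words
    are a subset or superset of the keyword's words."""
    if not anchors:
        return 0.0
    kw = set(keyword.lower().split())
    matching = 0
    for anchor in anchors:
        words = set(anchor.lower().split())
        if words and (words <= kw or words >= kw):
            matching += 1
    return matching / len(anchors)

anchors = ["cheap flights", "cheap flights to rome", "flights", "click here"]
print(anchor_convergence(anchors, "cheap flights"))  # 0.75
```

Here three of the four anchors relate to "cheap flights" (exact, superset, subset), so convergence is 75%; "click here" tells Google nothing about the topic.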
Anchor text of internal links to the page: 5%
This could increase if domain trust and topic coverage are high, and/or if there is not enough useful anchor text in the external links pointing to the site, or if those linking sites are of low quality (and therefore cannot be relied on too much).
Topic coverage: 20%
By topic coverage I mean the amount of content / information on the keyword (topic of interest) that a website carries. Basically, a site with 10 pages on "weight loss" and 10,000 pages on various other (mostly unrelated) topics will rank much lower for such a page than a site with 1,000 pages on the theme "weight loss" and 10 pages on various other topics. Basically, it shows how much of a "specialist" the site is on a subject, like comparing a butcher with a hobby interest in quantum mechanics to a physics researcher at CERN working on quantum mechanics.
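The "specialist" idea reduces to a simple share of on-topic pages. A minimal sketch, assuming we can already classify pages as on-topic or not (which is of course the hard part); the function name is my own.

```python
def topic_coverage(on_topic_pages, total_pages):
    """Share of a site's pages devoted to the topic of interest."""
    if total_pages == 0:
        return 0.0
    return on_topic_pages / total_pages

# The two example sites from the text:
generalist = topic_coverage(10, 10 + 10000)   # ~0.001
specialist = topic_coverage(1000, 1000 + 10)  # ~0.99
print(generalist < specialist)  # True
```

Both sites have at least 10 pages on the topic, but the second one is clearly the "CERN researcher" of the comparison.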
Keyword use on the page: 5%, depending on domain trust, topic coverage, and anchor text similarity
I think this is a variable weighting factor. If domain trust (and the other factors) is high, Google "trusts" your site, so it is confident that the keyword on the page is used meaningfully for visitors, and not simply to rank better (spam). For this reason, in practice, this factor acts as an extra boost among results of similar rank (based on the other factors, particularly links, topic coverage, and anchor text).
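A variable weight like this can be expressed by scaling the nominal 5% by trust. This is purely my own illustration of the idea; the linear scaling and the function name are assumptions.

```python
def keyword_factor_weight(trust, base_weight=0.05):
    """Scale the on-page keyword weight by domain trust (0..1).

    Low trust: on-page keyword use counts for little (could be spam).
    High trust: it counts fully, acting as a boost among similar results.
    """
    return base_weight * max(0.0, min(1.0, trust))

print(keyword_factor_weight(0.2))  # small influence
print(keyword_factor_weight(0.9))  # close to the full 5%
```

The same on-page optimization therefore pays off far more on a trusted domain than on a fresh or spammy one.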
Registrar and hosting data: pass / fail check
This is important because it operates in a pass / fail manner. It is used as part of the anti-spam algorithm to determine whether a site is spam. If the site passes the test, the other factors are used to rank it. If it fails the test, it receives a ranking "punishment" (ranked much lower, or not ranked at all for keywords whose results are clearly not spam). Obviously, this is not the only factor used in such a pass / fail way: link graph analysis, linking patterns, content patterns, etc. are also used as spam factors.
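A pass / fail gate in front of the normal ranking factors can be sketched like this. The checks and the scorer here are entirely hypothetical; only the gate structure reflects the text.

```python
def rank_score(page, spam_checks, score_fn, penalty=0.0):
    """Pass/fail gate: only pages that pass every spam check are
    scored by the normal ranking factors; the rest get a fixed
    low 'punishment' score regardless of their other merits."""
    if all(check(page) for check in spam_checks):
        return score_fn(page)
    return penalty

# Hypothetical checks and scorer, for illustration only:
checks = [lambda p: p["age_days"] > 30, lambda p: not p["hidden_text"]]
score = lambda p: p["links"] * 0.5

good = {"age_days": 400, "hidden_text": False, "links": 80}
bad = {"age_days": 2, "hidden_text": True, "links": 9000}
print(rank_score(good, checks, score))  # 40.0
print(rank_score(bad, checks, score))   # 0.0
```

Note how the failing page gets the penalty score even though its raw link count is far higher: that is what pass / fail means here, as opposed to a weighted factor.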
Traffic data + CTR: low, tiebreaker
I find traffic statistics hard to rely on as a ranking factor. A site with lots of traffic (based on Google Analytics) does not necessarily cover a specific topic better than a site with low traffic.
SERP CTR data, on the other hand, I think can be used as an indicator of the quality of a SERP listing (how appealing the result looks and whether the search ends there), so it might have a minor impact on the ranking algorithm. But I think it is more of a tiebreaker approach: if two results are similar on the other factors, Google may look at that data to pick one of them as the better answer for searchers.
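The tiebreaker idea can be made concrete by sorting on a bucketed relevance score first and CTR second. This is my own sketch: the `epsilon` bucketing is just one simple way to express "similar on other factors".

```python
def rank_with_ctr_tiebreak(results, epsilon=0.01):
    """Sort results by relevance score; when two scores fall in the
    same epsilon-wide bucket (a 'tie'), the higher CTR wins."""
    return sorted(results,
                  key=lambda r: (round(r["score"] / epsilon), r["ctr"]),
                  reverse=True)

results = [
    {"url": "a.example", "score": 0.804, "ctr": 0.12},
    {"url": "b.example", "score": 0.801, "ctr": 0.31},  # tied score, better CTR
    {"url": "c.example", "score": 0.600, "ctr": 0.50},
]
print([r["url"] for r in rank_with_ctr_tiebreak(results)])
# ['b.example', 'a.example', 'c.example']
```

c.example has the highest CTR of all, but CTR never overrides a clearly lower relevance score; it only decides between the two near-identical results.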
Social graph metrics: I have no idea
This is not my area. I have never done social media optimization, so I have no opinion on it, except that it is much less important than the other factors mentioned above. However, it is logical that Google takes it into account to some extent, even if only mentions of your domain name, a URL from your site that appears without being a clickable hyperlink, mentions of your brand, etc.
An important point I want to make is that I really think the algorithm is not linear. In other words, the factors are not taken separately; they depend on other factors. In simple terms it goes something like: "if factor F1 meets criterion F1c1, apply function f1(F1); else if it meets criterion F1c2, apply f2(F1); otherwise, if f3(F1, F2, F3) = X, apply f4(F1)". OK, maybe not that clear, but read it a few times and you should understand what I mean.
Also, factors of the same type are generally computed together into an intermediate result (e.g. all the factors that have to do with links: anchor text, IP distribution); but it also makes sense for "independent" factors to be combined, for example giving more weight in the final result (the SERPs) when there is a clear match between several factors (link anchor text, keywords in the page title, keywords in the title tag).
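Putting the pieces together, the weights from the sections above (35 / 20 / 15 / 5 / 20 / 5) with a link-group intermediate score and a cross-factor agreement boost might look like this. The boost size (10%) and the agreement threshold (0.8) are invented for illustration, and all inputs are assumed already normalized to 0..1.

```python
def final_score(f):
    """Combine the factor weights from the text, grouping link-related
    factors into one intermediate score and adding a non-linear boost
    when independent factors clearly agree on the keyword."""
    # Intermediate result for the link-related group:
    links = (0.20 * f["link_popularity"]
             + 0.15 * f["external_anchor"]
             + 0.05 * f["internal_anchor"])
    base = (0.35 * f["trust"]
            + links
            + 0.20 * f["topic_coverage"]
            + 0.05 * f["keyword_use"])
    # Non-linear part: if anchor text, page title and title tag all
    # match the keyword strongly, add a cross-factor agreement boost.
    if min(f["external_anchor"], f["title_match"], f["title_tag_match"]) > 0.8:
        base *= 1.1
    return base

page = {"trust": 0.9, "link_popularity": 0.7, "external_anchor": 0.9,
        "internal_anchor": 0.5, "topic_coverage": 0.8, "keyword_use": 0.6,
        "title_match": 0.95, "title_tag_match": 0.9}
print(round(final_score(page), 3))
```

Lowering any one of the three agreeing signals below the threshold removes the whole boost, which is exactly the kind of factor interdependence described above, rather than a flat weighted sum.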