Debunking the Myths of Link Disavowals, Penalties & More

A few days ago, Google's John Mueller hosted a Google Webmaster Hangout on Air (video) in which he addressed, and pushed back on, several popular "myths" about penalties, link disavowals, and more.

Before we get to what was said in the video and my take on it, here is a quick overview of the types of bad links you might want to remove:

1. Spam links in blog posts, forums, comments, or elsewhere that you paid someone to place.

2. Links from a site owner you know personally, or links you have been paying that owner for.

3. Links from article directories.

John Mueller’s Views on Link Disavowal

You should only disavow pages of a domain, not entire domains. This may seem tricky, but it is necessary, because disavowing a whole domain can deprive you of quality links as well. At the same time, disavowing a single link may not finish the job, because the same link is often duplicated in several places across the site. A search command like the following can help you find every copy:

site:example.com "anchor text"

(The anchor text is the specific text with which your site is linked.)

It can also help to search for your own brand name this way, so you can be sure you have collected all of the links. And if you find that a linking site is nothing but spam, you should look into disavowing the whole domain after all.
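For reference, a disavow file is a plain-text file uploaded through Google's disavow links tool: one URL per line, with a `domain:` prefix to disavow an entire domain, and `#` for comments. A minimal sketch (all domains and paths here are placeholders, not real sites):

```text
# Paid links on individual pages that we could not get removed
http://spam-blog.example.com/paid-links/page1.html
http://spam-blog.example.com/paid-links/page2.html

# An article directory that is nothing but spam, so the whole domain goes
domain:spammy-directory.example.net
```

Disavowing the individual URLs matches John's advice above; the `domain:` line is the fallback for sites with no redeeming links.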

Re-inclusion can be compromised by multiple reconsideration requests. John disagreed with this notion: every request is considered independently, so even multiple requests remain eligible and do not count against you.

You must look at multiple link sources, because Google Webmaster Tools alone is not enough. John was skeptical of this myth as well: he said the links reported in Google Webmaster Tools are sufficient as a guideline. Only if you notice a suspicious pattern in the links pointing to your website should you explore further sources.

Inbound link value can be removed by using robots.txt to block a page. For John this is an entirely baseless claim: if a search engine cannot crawl a page, it cannot see whether that page is noindexed or not. To remove a page's link value, he says, you should instead place a robots "noindex" meta tag in the head of the specific page.
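A minimal sketch of the meta tag John refers to, placed in the page's head (the surrounding markup is placeholder):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Tells crawlers not to index this page. Unlike a robots.txt block,
       the page must remain crawlable so that crawlers can see this tag. -->
  <meta name="robots" content="noindex">
  <title>Example page</title>
</head>
<body>
  <p>Page content here.</p>
</body>
</html>
```

Note that the page must not also be blocked in robots.txt, or the tag will never be crawled and read.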
If you ask me, a 301 redirect is the best way to control the flow of PageRank. It might seem a tedious solution, but it comes in quite handy, especially when you are up against competitors' established links pointing at an intermediary page.
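As an illustration, on an Apache server a 301 can be set up with a single directive in an `.htaccess` file (both paths below are hypothetical):

```apache
# Permanently redirect an old intermediary page to its final destination,
# passing visitors (and link signals) through to the target URL
Redirect 301 /old-intermediary-page.html http://www.example.com/final-page.html
```

The `301` status tells browsers and crawlers the move is permanent, which is what distinguishes it from a temporary 302 redirect.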