Have you ever needed to prevent Google from indexing a particular URL on your website and displaying it in their search engine results pages (SERPs)? If you manage websites long enough, a day will likely come when you need to know how to do this.
The three methods most commonly used to prevent the indexing of a URL by Google are as follows:
Using the rel="nofollow" attribute on all anchor elements used to link to the page, to stop the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to stop the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to stop the page from being indexed.
While the differences between the three techniques appear subtle at first glance, their effectiveness can vary considerably depending on which method you choose.
Using rel="nofollow" to prevent Google indexing
Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site that links to that URL.
Including a rel="nofollow" attribute on a link prevents Google's crawler from following that link, which in turn stops the crawler from discovering, crawling, and indexing the target page. While this method might work as a temporary measure, it is not a viable long-term solution.
The flaw in this approach is that it assumes every inbound link to the URL will carry a rel="nofollow" attribute. The webmaster, however, has no way to stop other websites from linking to the URL with a followed link. So the odds that the URL will eventually get crawled and indexed despite this approach are fairly high.
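For reference, a nofollowed link is simply a normal anchor element with the rel attribute set; the URL below is a hypothetical placeholder:

```
<!-- Googlebot will not follow this link to discover the target page -->
<a href="https://example.com/private-page/" rel="nofollow">Private page</a>
```

This attribute only affects links you control; a followed link from any other site can still lead the crawler to the page.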
Using robots.txt to prevent Google indexing
Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
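As a sketch, a robots.txt rule blocking a single hypothetical path for all crawlers might look like this:

```
User-agent: *
Disallow: /private-page/
```

The Disallow value is a path prefix, so this rule also blocks any URLs nested under /private-page/.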
Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough websites link to the URL, Google can often infer the topic of the page from the anchor text of those inbound links, and will then show the URL in the SERPs for relevant searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.
Using the meta robots tag to prevent Google indexing
If you need to prevent Google from indexing a URL while also keeping that URL out of the SERPs, the most effective approach is to use a meta robots tag with a content="noindex" attribute within the head element of the page. Of course, for Google to actually see this meta robots tag, they first need to be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it is never shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
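A minimal sketch of the head element of such a page (the title is a placeholder):

```
<head>
  <title>Private page</title>
  <!-- Tells crawlers not to index this page or show it in search results -->
  <meta name="robots" content="noindex">
</head>
```

Note that the page itself must remain crawlable, otherwise Google will never fetch it and see the tag.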