Is there a way to prevent Googlebot from indexing certain parts of a page?

Is it possible to fine-tune directives to Google to such an extent that it will ignore part of a page, yet still index the rest?

There are a couple of different issues we've come across which would be helped by this, such as:

  • RSS feed/news ticker-type text on a page displaying content from an external source
  • users entering contact details (phone numbers etc.) that they want visible on the site, but would rather not be google-able

I'm aware that both of the above can be addressed via other techniques (such as writing the content with JavaScript), but am wondering if anyone knows if there's a cleaner option already available from Google?

I've been doing some digging on this and came across mentions of googleon and googleoff tags, but these seem to be exclusive to Google Search Appliances.

Does anyone know if there's a similar set of tags to which Googlebot will adhere?

Edit: Just to clarify, I don't want to go down the dangerous route of cloaking/serving up different content to Google, which is why I'm looking to see if there's a "legit" way of achieving what I'd like to do here.


What you're asking for can't really be done: Google either indexes the entire page or none of it.

You could use a sneaky trick, though: put the part of the page you don't want indexed in an iframe, and use robots.txt to ask Google not to crawl the page that iframe loads.
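A minimal sketch of that trick, assuming a hypothetical URL /noindex/ticker.html for the excluded content:

```html
<!-- Main page: this content is crawled and indexed normally. -->
<p>Indexable page content.</p>

<!-- The feed/ticker text lives at a separate URL and is pulled in via an iframe. -->
<iframe src="/noindex/ticker.html" title="News ticker"></iframe>
```

```text
# robots.txt at the site root — ask crawlers to skip the iframe's source URL
User-agent: *
Disallow: /noindex/
```

Bear in mind that Disallow only prevents crawling; to keep a crawled URL out of the index entirely you can additionally serve that URL with an `X-Robots-Tag: noindex` HTTP header.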

In short: NO - unless you use cloaking, which is discouraged by Google.

Please check the official documentation, under the section "Excluding Unwanted Text from the Index" (note that, as the question mentions, these googleon/googleoff tags are honoured by the Google Search Appliance, not by Googlebot):

<!--googleoff: index-->
here will be skipped
<!--googleon: index-->

I found a useful resource for marking duplicate content so that search engines are asked not to index it:

<p>This is normal (X)HTML content that will be indexed by Google.</p>

<!--googleoff: index-->

<p>This (X)HTML content will NOT be indexed by Google.</p>

<!--googleon: index-->

On your server, detect the search bot by IP address (using PHP or ASP), and serve those IP addresses the version of the page you wish to be indexed. In that search-engine-friendly version of your page, use the canonical link tag to point to the page version that you do not want indexed.

This way the page with the content you don't want indexed remains reachable by its address only, while only the content you wish to be indexed actually ends up in the index. This method should not get you blocked by the search engines.

Yes, you can definitely stop Google from indexing some parts of your website by creating a custom robots.txt and listing the portions you don't want indexed, such as /wp-admin/ or a particular post or page. Before creating this robots.txt file, check your site's existing robots.txt first.
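An illustrative robots.txt along those lines (the paths are hypothetical examples, in the WordPress style mentioned above):

```text
User-agent: *
Disallow: /wp-admin/
Disallow: /some-private-page/
```

Note, though, that robots.txt works at the URL level: it can keep whole pages or directories from being crawled, but it cannot exclude part of a single page.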

All search engines either index or ignore the entire page. The only possible way to implement what you want is to:

(a) have two different versions of the same page

(b) detect the visiting client (e.g. by its User-Agent string or IP address)

(c) If it's a search engine, serve the second version of your page.
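Steps (b) and (c) could be sketched like this in Python (an assumption-laden sketch: it matches a few common crawler tokens in the User-Agent header, and the page filenames are made up; real code should also verify crawlers via reverse DNS, since user agents can be spoofed):

```python
# Hypothetical crawler tokens to match against the User-Agent header.
BOT_TOKENS = ("Googlebot", "bingbot", "DuckDuckBot")

def is_search_bot(user_agent: str) -> bool:
    """Return True if the User-Agent string contains a known crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in BOT_TOKENS)

def choose_page(user_agent: str) -> str:
    """Pick which page version to serve: stripped-down for bots, full for humans."""
    return "page-for-bots.html" if is_search_bot(user_agent) else "page-full.html"
```

Remember that serving bots different content than humans is exactly the cloaking the question warns about, so use this with care.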

This link might prove helpful.

There are meta tags for bots, and there's also robots.txt, with which you can restrict crawler access to certain directories.
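For the meta tag route, the standard robots meta tag looks like this (again, it applies to the whole page, not to a section of it):

```html
<head>
  <!-- Ask all crawlers not to index this page -->
  <meta name="robots" content="noindex">
  <!-- Or target Googlebot specifically, e.g. allow indexing but no cached copy -->
  <meta name="googlebot" content="noarchive">
</head>
```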
