X-Robots-Tag

The X-Robots-Tag is an HTTP response header directive that tells search engine crawlers whether they may index a specific page or resource and whether to follow the links it contains.

It serves a similar purpose to the meta robots tag, but because it is sent as an HTTP header it can also be applied to non-HTML files such as PDFs, images, and other types of documents.
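
For example, a response carrying the directive might look like this (the file type shown is just illustrative):

HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow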

The tag can be added to the HTTP response headers for a specific URL through server configuration or a scripting language. Here are a few examples of how to implement the X-Robots-Tag using different methods:

Apache .htaccess

To add the X-Robots-Tag to an Apache server’s .htaccess file, include the following code:

<FilesMatch "\.(pdf|jpg|jpeg|gif|png)$">
Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>

This example sets the X-Robots-Tag to “noindex, nofollow” for PDF, JPG, JPEG, GIF, and PNG files.
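
The Header directive is provided by Apache's mod_headers module, so that module needs to be enabled. Once the rule is in place, you can confirm the header is being sent with a quick HEAD request (the URL below is only a placeholder):

curl -I https://example.com/document.pdf

The response headers should include an X-Robots-Tag: noindex, nofollow line.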

Nginx

For Nginx, you can add the X-Robots-Tag directive in the server configuration file:

location ~* \.(pdf|jpg|jpeg|gif|png)$ {
    add_header X-Robots-Tag "noindex, nofollow";
}
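
The change takes effect once the Nginx configuration is reloaded; for example (the exact commands depend on how Nginx is installed):

nginx -t
systemctl reload nginx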

PHP

To add the X-Robots-Tag using PHP, call the header() function before the script sends any output:

header('X-Robots-Tag: noindex, nofollow');

This will set the X-Robots-Tag to “noindex, nofollow” for the PHP page.

Some common X-Robots-Tag values include:

  • noindex: Prevents search engines from indexing the page or resource.
  • nofollow: Instructs search engines not to follow any links on the page or resource.
  • noarchive: Prevents search engines from showing a cached version of the page or resource.
  • nosnippet: Tells search engines not to display a text snippet or video preview in search results.

You can combine multiple directives by separating them with a comma, as shown in the examples above.
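
As a minimal PHP sketch (using the same approach as the earlier example), a combined directive looks like this:

header('X-Robots-Tag: noindex, noarchive, nosnippet');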
