30,000+ Users
Thousands of digital professionals have installed Robots Exclusion Checker
The Chrome extension is currently available in English, French and German.
REC has a five-star rating on the Chrome Web Store
I created Robots Exclusion Checker to help digital professionals quickly see whether a particular page on a website is accessible to Search Engines. It’s the first extension to combine all the elements that affect crawling and indexation in one simple-to-use, visually clear format.
It’s free to download from the Chrome Web Store and is a must-have for digital professionals, particularly if you work in eCommerce or on large, complex sites.
“Super useful and better than my previous extension ‘see robots’…more extensive when checking if a page is actually no-indexed”
“Amazing extension! Saves me so much time and I love the new canonical feature. Highly recommended for SEOs and devs to use!”
“I have needed something handy like this for a long time. It’s a real time saver! Highly recommended to all SEOs out there”
“Top-notch plugin…best robots plugin on chrome store”
“I never write reviews…but I use this bad boy every day and it’s excellent.”
“A must-have Chrome extension for your SEO toolkit”
Multiple meta robots tags on a single page are a common issue on many platforms, such as WordPress or Magento. If you have more than one plugin installed that offers robots control, multiple tags can end up inserted into the page, sometimes with conflicting directives. If the extension detects multiple tags, it consolidates all of the values and shows them to you within the extension.
But what if there are conflicting values? The extension simulates the way Google treats multiple values and uses the most restrictive value.
For instance, if the extension detects these tags:
<META NAME="ROBOTS" CONTENT="NOINDEX">
<META NAME="ROBOTS" CONTENT="INDEX">
We will flag the NOINDEX value.
Source: https://webmasters.googleblog.com/2007/03/using-robots-meta-tag.html.
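To make the “most restrictive wins” rule above concrete, here is a minimal Python sketch. The function name and the restrictiveness ordering are illustrative assumptions for this sketch, not the extension’s actual code.

# Directives ordered from most to least restrictive (an assumption for this sketch).
RESTRICTIVENESS = ["noindex", "index"]

def effective_robots_value(meta_contents):
    # Given the CONTENT values of every meta robots tag found on a page,
    # return the value a crawler like Google would honour.
    values = {t.strip().lower() for v in meta_contents for t in v.split(",")}
    for directive in RESTRICTIVENESS:
        if directive in values:
            return directive
    return "index"  # default when no recognised directive is present

# Example: the two conflicting tags above -> NOINDEX is flagged.
print(effective_robots_value(["NOINDEX", "INDEX"]))  # prints "noindex"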
Using a meta NOINDEX tag at a URL level will prevent Search Engines from indexing that URL, but in order to detect the tag, which sits within the HTML, the page first needs to be crawled. If you link to a large number of NOINDEX-tagged pages from within your website, they will all still be crawled on a regular basis. A NOINDEX tag therefore affects the indexation of pages, not Search Engine crawling.
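To illustrate why crawling is a prerequisite, here is a minimal Python sketch (standard library only) that fetches a page and looks for a meta robots NOINDEX tag. The URL is a placeholder and error handling is omitted for brevity.

from html.parser import HTMLParser
from urllib.request import urlopen

class RobotsMetaFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "noindex" in (a.get("content") or "").lower():
                self.noindex = True

# The tag only becomes visible once the page has been fetched (crawled) and parsed.
html = urlopen("https://example.com/some-page").read().decode("utf-8", "replace")
finder = RobotsMetaFinder()
finder.feed(html)
print("noindex" if finder.noindex else "indexable")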
In order for Search Engines to detect a meta NOINDEX tag, the URL must be crawled and its HTML processed. An exclusion in the robots.txt file prevents Search Engines from crawling and processing the URL, so they never get a chance to discover the NOINDEX tag.
The robots.txt file will prevent Google from crawling a specific page or set of pages. However, Google can still be aware that the URL exists, especially if you link to it internally, include it within your sitemap.xml, or it is linked to from an external website.
In this scenario, robots.txt will not prevent Google from indexing the URL itself. This could lead to the URL being returned in the Search Engine Results Pages (SERPs) with a message noting that robots.txt prevented Google from crawling the page.
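For completeness, here is a small Python sketch using the standard library’s urllib.robotparser to test whether a URL is blocked from crawling; the domain and path are placeholders. Note that “disallowed” here only means blocked from crawling, which, as described above, does not necessarily prevent indexing.

from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")  # placeholder domain
rp.read()

# can_fetch() answers the crawling question only; indexing is a separate matter.
if rp.can_fetch("Googlebot", "https://example.com/private-page"):
    print("Crawlable")
else:
    print("Blocked from crawling - indexing is not necessarily prevented")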