Google has introduced a new feature to Webmaster Tools that shows webmasters exactly which resources on their websites are blocked from crawling.
Why has this update come about?
If Googlebot cannot fetch a resource on a page, it is usually because the website's robots.txt file disallows that resource from being crawled. Google then cannot render the page fully, which can have a negative effect on how the page is indexed, ranked, and presented to users.
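For illustration, here is a hypothetical robots.txt fragment (the paths are invented for this example) showing the kind of rules that block crawlers from script and stylesheet directories, along with a possible fix:

```
# Hypothetical example: these rules stop all crawlers, including
# Googlebot, from fetching anything under /js/ and /css/, so Google
# cannot render the page the way a user would see it.
User-agent: *
Disallow: /js/
Disallow: /css/

# One possible fix: remove the lines above, or explicitly allow
# those directories, e.g.
# Allow: /js/
# Allow: /css/
```

Rules like these are often added to save crawl bandwidth, without realising that modern rendering depends on those resources being fetchable.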
The new Blocked Resources Report
Google now produces a Blocked Resources Report for webmasters, listing exactly which resources are blocked. The report also provides detailed guidance on how to diagnose these problems and, ultimately, unblock the resources.
The report shows only resources that are likely to be under the webmaster's control, as those are the only ones the webmaster can address.
On its Webmaster Central Blog, Google advised: "Because it can be time-consuming (usually not for technical reasons!) to update all robots.txt files, we recommend starting with the resources that make the most important visual difference when blocked."