"Crawled - currently not indexed" for fake URLs
We have several URLs flagged with "Crawled - currently not indexed" in the Pages report. These URLs do not exist, and I have no idea how Googlebot is finding them. How can I prevent Googlebot from crawling URLs that do not exist?
All of these URLs are fake and have never existed on the website.
- Use the URL Inspection tool in Google Search Console on a few of the affected URLs; the report shows the referring page or sitemap through which Google discovered each one.
- Crawl the site with a tool like Screaming Frog to find any internal links pointing at these URLs; a malformed relative link or a template bug can generate paths that never existed.
- Audit your external backlinks with Ahrefs or Semrush; spammy sites sometimes link to made-up URLs, and Google will follow those links.
- Remove any outdated URLs from your sitemap and resubmit it in Google Search Console (a small validation script follows this list).
- Make sure nonexistent URLs return a real 404 (Not Found) or 410 (Gone) status rather than a soft 200; Google eventually stops recrawling URLs that consistently return these codes (see the status check below).
- Monitor your server logs to trace how Googlebot reaches these URLs; the referrer field often reveals the source (a log-parsing sketch is included below).
- If the fake URLs share a common pattern, block that pattern in robots.txt (see the robots.txt test at the end of this answer).
- Use the Removals tool in Google Search Console only for URLs that actually appear in search results; it hides them temporarily but does not stop crawling.
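To verify the sitemap before resubmitting it, you can fetch every `<loc>` entry and flag anything that no longer resolves. A minimal sketch, assuming a flat `urlset` sitemap (not a sitemap index); the sitemap URL is a placeholder for your own.

```python
import urllib.error
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder: your sitemap
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def check_sitemap(sitemap_url: str) -> None:
    """Fetch a sitemap and report entries that do not return HTTP 200."""
    with urllib.request.urlopen(sitemap_url) as resp:
        tree = ET.parse(resp)
    for loc in tree.findall(".//sm:loc", NS):
        url = loc.text.strip()
        try:
            with urllib.request.urlopen(url) as page:
                status = page.status
        except urllib.error.HTTPError as err:
            status = err.code
        if status != 200:
            print(f"{status}  {url}  <- remove this entry before resubmitting")

if __name__ == "__main__":
    check_sitemap(SITEMAP_URL)
```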
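To confirm the fake URLs really return 404/410 rather than a "soft 404" (a 200 page that just says "not found"), request a few of them directly. Another minimal sketch; the URLs in the list are hypothetical stand-ins for the ones from your Search Console report.

```python
import urllib.error
import urllib.request

# Hypothetical examples: substitute the fake URLs from your report.
FAKE_URLS = [
    "https://www.example.com/nonexistent-page-1",
    "https://www.example.com/nonexistent-page-2",
]

def status_of(url: str) -> int:
    """Return the HTTP status code a URL responds with."""
    req = urllib.request.Request(url, headers={"User-Agent": "status-check"})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

for url in FAKE_URLS:
    code = status_of(url)
    verdict = "OK" if code in (404, 410) else "FIX: should be 404/410"
    print(f"{code}  {verdict}  {url}")
```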
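For the server-log step, filter your access log for Googlebot hits and look at the request path and referrer together. This sketch assumes the common Apache/Nginx "combined" log format and a hypothetical log path; note that real Googlebot verification would also need a reverse-DNS check, since the user-agent string can be spoofed.

```python
import re

LOG_PATH = "/var/log/nginx/access.log"  # placeholder: your server's log location

# Combined log format: ip - - [time] "METHOD path HTTP/x" status size "referer" "user-agent"
LINE_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[.*?\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        m = LINE_RE.match(line)
        if m and "Googlebot" in m.group("agent"):
            # The referer shows where the bot found the link, when one is sent.
            print(m.group("status"), m.group("path"), "referer:", m.group("referer"))
```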
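Finally, if the fake URLs share a pattern (say, everything under a path that never existed), a `Disallow` rule stops Googlebot from crawling them. You can test a rule locally with Python's built-in `urllib.robotparser` before deploying; the `/fake-section/` pattern and test URLs here are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rule blocking a path pattern that never existed.
ROBOTS_TXT = """\
User-agent: *
Disallow: /fake-section/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for url in [
    "https://www.example.com/fake-section/some-page",  # should be blocked
    "https://www.example.com/real-page",               # should be allowed
]:
    allowed = parser.can_fetch("Googlebot", url)
    print("ALLOWED" if allowed else "BLOCKED", url)
```

One trade-off to keep in mind: blocking a pattern in robots.txt also prevents Google from seeing the 404/410 responses, so already-indexed junk URLs can linger in the index longer. Use it for clearly worthless patterns, and rely on 404/410 for everything else.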