Replies: 2 comments
-
Could be any number of things, as I'm not terribly familiar with Google's Search Console. What's it saying the problem is? I suppose it could be a MIME type thing, or the files don't conform completely to a schema, or they're not excluded from robots.txt, or they're not sparkly enough :) Point being, there could be a few causes, and without more information on the actual problem it's hard to say much.
-
Here's one example I get for the 'crawled - not currently indexed' error: [Search Console screenshot omitted]. When I added a 'Disallow' statement for the path, it changed to [screenshot omitted]. Maybe this is the solution? In the meantime, I updated my robots.txt to exclude XML files (rough sketch of the idea below): https://github.com/bugok/blog/blob/main/layouts/robots.txt @wolfspyre: What do you think?
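For concreteness, the shape of the exclusion I'm trying looks roughly like this. The patterns are illustrative rather than the exact contents of the linked file, and I'm letting the sitemap through since disallowing it would stop Google from reading it:

```
User-agent: *
# Keep the sitemap reachable: Google's robots.txt matching picks the
# most specific (longest) matching rule, so this Allow wins over the
# broader Disallow below for /sitemap.xml.
Allow: /sitemap.xml
# Googlebot supports * and $ wildcards: block every URL ending in .xml,
# which covers Hugo's generated RSS feeds (index.xml at each level).
Disallow: /*.xml$
```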
-
I have a Congo-based site: https://www.noamlerner.com. The code for the site is here: https://github.com/bugok/blog
The site is deployed using Cloudflare Pages.
I'm tracking my site in Google Search Console, and I see many failures related to crawling XML files: [Search Console screenshot omitted]
I'm not familiar enough with Congo to understand whether the XML files should be generated or not. Are they used to render the content of the site? If they are needed, what's the desired solution here to make Google Search Console happy? Ignore those XML files in robots.txt? Some other way?
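I also came across Hugo's standard outputs setting, which looks like it could stop Hugo from generating the RSS XML in the first place. A sketch of what I mean, assuming the usual hugo.toml/config.toml layout (I haven't verified whether Congo itself depends on the RSS output):

```toml
# Restrict the output formats Hugo generates per page kind.
# By default Hugo also emits RSS (index.xml) for home, section,
# taxonomy, and term pages; listing only HTML turns those feeds off.
[outputs]
  home = ["HTML"]
  section = ["HTML"]
  taxonomy = ["HTML"]
  term = ["HTML"]
```

Would that be a reasonable alternative to blocking the files in robots.txt?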
Thanks.