Google Crawl Errors - "Unreachable" Errors
My site has been successfully verified in Google Webmaster Tools, and crawler access to my robots.txt returns 200 (Success). However, when I check "Crawl errors", nearly every page is reported as "unreachable", including the domain's main page itself. The only pages that get crawled without errors are the attachment/file pages (e.g. pdf, xls, jpg, etc.). This is really strange.
My site is built with Ruby on Rails and uses a MySQL database.
Comments (1)
Do the pages take a long time to render? I suspect Google's crawler gives up if a page takes too long to respond. Consider putting Varnish in front of the public pages that are expensive to render and don't contain any user-specific or dynamic content.
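If you go that route, the Rails side mainly needs to mark those pages as cacheable. A minimal sketch, assuming a hypothetical PagesController serving one of these expensive public pages; expires_in is Rails' built-in Cache-Control helper, while the controller and model names here are just placeholders:

```ruby
# Hypothetical controller for an expensive but fully public page.
class PagesController < ApplicationController
  def show
    # Sets "Cache-Control: max-age=600, public", telling an HTTP cache
    # such as Varnish that it may serve this response for 10 minutes
    # without hitting Rails again.
    expires_in 10.minutes, public: true

    # The expensive work now runs at most once per cache TTL.
    @page = Page.find(params[:id])
  end
end
```

One caveat: Varnish's default configuration does not cache responses that set cookies, so you would also need to avoid touching the Rails session on these pages (or strip the cookie in your Varnish config) for the cache to actually kick in.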