Asking a robot to re-fetch robots.txt
I am writing a proxy server that maps youtube.com to another domain (so users can easily access youtube from countries like Germany without search results and videos being censored).
Unfortunately there was a bug in my robots.txt. It's fixed now, but Baiduspider fetched my old robots.txt and has been trying to index the whole website for a couple of days.
Because YouTube is quite a big website, I don't think this process will end soon :-)
I already tried redirecting Baiduspider to another page and sending it a 404, but it has already parsed too many paths.
What can I do about this?
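For reference, a robots.txt that keeps crawlers away from the entire proxied site would typically look like the following; this is only an assumed sketch, since the actual fixed file is not shown in the question:

# Assumed sketch of a fully restrictive robots.txt for the proxied site
User-agent: *
Disallow: /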
Stop processing requests from Baiduspider. With lighttpd, append a rule for that to lighttpd.conf (see the sketch below); sooner or later Baiduspider should re-fetch the robots.txt
(see http://blog.bauani.org/2008/10/baiduspider-spider-english-faq.html)
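A minimal sketch of what such a lighttpd.conf addition could look like, assuming the intent is simply to deny every request whose User-Agent contains "Baiduspider" (the pattern and the use of url.access-deny are assumptions, not taken from the original answer):

# Match any request whose User-Agent header contains "Baiduspider"
$HTTP["useragent"] =~ "Baiduspider" {
    # An empty-string entry in url.access-deny denies access to every URL
    url.access-deny = ( "" )
}

Because the conditional only inspects the User-Agent header, requests identifying themselves as Baiduspider get 403 responses and are no longer proxied, while ordinary users are unaffected.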