13.3. robotparser — Parser for robots.txt - Python 2.7.18 documentation
Note: The robotparser module has been renamed urllib.robotparser in Python 3. The 2to3 tool will automatically adapt imports when converting your sources to Python 3.
This module provides a single class, RobotFileParser, which answers questions about whether or not a particular user agent can fetch a URL on the Web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://www.robotstxt.org/orig.html.
class robotparser.RobotFileParser(url='')

    This class provides methods to read, parse and answer questions about the robots.txt file at url.

    set_url(url)
        Sets the URL referring to a robots.txt file.

    read()
        Reads the robots.txt URL and feeds it to the parser.

    parse(lines)
        Parses the lines argument.

    can_fetch(useragent, url)
        Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.

    mtime()
        Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.

    modified()
        Sets the time the robots.txt file was last fetched to the current time.
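A quick sketch of feeding rules to parse() directly, with no network access. It uses the Python 3 module name urllib.robotparser (per the Note above); the rules and URLs are made up for illustration.

```python
# Hypothetical robots.txt rules, passed to parse() as a list of lines
# instead of being fetched over the network with read().
import urllib.robotparser

lines = [
    "User-agent: *",
    "Disallow: /private/",
]
rp = urllib.robotparser.RobotFileParser()
rp.parse(lines)

# URLs under /private/ are disallowed; everything else is allowed.
print(rp.can_fetch("*", "http://example.com/private/page.html"))  # False
print(rp.can_fetch("*", "http://example.com/public/page.html"))   # True
```

This pattern is handy in tests, where fetching a live robots.txt would make the test depend on network state.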
The following example demonstrates basic use of the RobotFileParser class.
>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True
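The example above does not show mtime() and modified(). The sketch below demonstrates them without any network access, again using the Python 3 module name urllib.robotparser; a real spider would typically call read() and then re-fetch once mtime() is old enough.

```python
import time
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
print(rp.mtime())   # 0: no fetch time has been recorded yet

rp.modified()       # record "fetched now"
age = time.time() - rp.mtime()
print(age < 60)     # True: recorded less than a minute ago
```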