Scrapy crawler in Python not following links?

Posted on 2024-10-20 17:32:22

I wrote a crawler in Python using the Scrapy framework. Here is the code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
#from scrapy.item import Item
from a11ypi.items import AYpiItem

class AYpiSpider(CrawlSpider):
        name = "AYpi"
        allowed_domains = ["a11y.in"]
        start_urls = ["http://a11y.in/a11ypi/idea/firesafety.html"]

        rules =(
                Rule(SgmlLinkExtractor(allow = ()) ,callback = 'parse_item')
                )

        def parse_item(self,response):
                #filename = response.url.split("/")[-1]
                #open(filename,'wb').write(response.body)
                #testing codes ^ (the above)

                hxs = HtmlXPathSelector(response)
                item = AYpiItem()
                item["foruri"] = hxs.select("//@foruri").extract()
                item["thisurl"] = response.url
                item["thisid"] = hxs.select("//@foruri/../@id").extract()
                item["rec"] = hxs.select("//@foruri/../@rec").extract()
                return item

But instead of following the links, it throws this error:

Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/cmdline.py", line 131, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/cmdline.py", line 97, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/cmdline.py", line 138, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/commands/crawl.py", line 45, in run
    q.append_spider_name(name, **opts.spargs)
--- <exception caught here> ---
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/queue.py", line 89, in append_spider_name
    spider = self._spiders.create(name, **spider_kwargs)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/spidermanager.py", line 36, in create
    return self._spiders[spider_name](**spider_kwargs)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/contrib/spiders/crawl.py", line 38, in __init__
    self._compile_rules()
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/contrib/spiders/crawl.py", line 82, in _compile_rules
    self._rules = [copy.copy(r) for r in self.rules]
exceptions.TypeError: 'Rule' object is not iterable

Can someone please explain what's going on? This is what the documentation describes, and since I left the allow field empty, follow should default to True. So why the error? And what optimisations can I make to speed up my crawler?

1 Answer

十秒萌定你 · answered 2024-10-27 17:32:22

From what I can see, your `rules` is not an iterable. It looks like you were trying to make `rules` a tuple, but without a trailing comma the parentheses are just grouping, so `rules` ends up as a bare `Rule` object. You should read up on tuples in the Python documentation.

To fix your problem, change this line:

    rules =(
            Rule(SgmlLinkExtractor(allow = ()) ,callback = 'parse_item')
            )

To:

    rules =(Rule(SgmlLinkExtractor(allow = ()) ,callback = 'parse_item'),)

Notice the comma at the end?
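The pitfall can be reproduced without Scrapy at all. The sketch below uses a stand-in `Rule` class (an assumption, purely for illustration) in place of Scrapy's `Rule`, and mimics what `CrawlSpider._compile_rules` does when it iterates over `self.rules`:

```python
import copy

# Stand-in for scrapy's Rule class (assumption, for illustration only).
class Rule:
    pass

# Parentheses alone do NOT make a tuple -- this is still just a Rule object:
rules = (Rule())
print(type(rules).__name__)   # Rule

# CrawlSpider._compile_rules does roughly this, hence the traceback:
try:
    [copy.copy(r) for r in rules]
except TypeError as err:
    print(err)                # 'Rule' object is not iterable

# The trailing comma is what actually builds a one-element tuple:
rules = (Rule(),)
print(type(rules).__name__)   # tuple
print(len([copy.copy(r) for r in rules]))   # 1
```

A list literal, `rules = [Rule(...)]`, would work just as well and avoids the trailing-comma subtlety entirely.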
