Following links, Scrapy web crawler framework



After reading the Scrapy docs several times, I'm still not catching the difference between using CrawlSpider rules and implementing my own link-extraction mechanism in a callback method.

I'm about to write a new web crawler using the latter approach, but only because I had a bad experience with rules in a past project. I'd really like to know exactly what I'm doing and why.

Anyone familiar with this tool?

Thanks for your help!

2 Answers

凑诗 2024-11-25 18:42:28


CrawlSpider inherits from BaseSpider. It just adds rules for extracting and following links.
If these rules are not flexible enough for you, use BaseSpider:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class USpider(BaseSpider):
    """my spider. """

    start_urls = ['http://www.amazon.com/s/?url=search-alias%3Dapparel&sort=relevance-fs-browse-rank']
    allowed_domains = ['amazon.com']

    def parse(self, response):
        '''Parse main category search page and extract subcategory search links.'''
        self.log('Downloaded category search page.', log.DEBUG)
        if response.meta['depth'] > 5:
            self.log('Categories depth limit reached (recursive links?). Stopping further following.', log.WARNING)
            return  # stop recursing into deeper category levels

        hxs = HtmlXPathSelector(response)
        subcategories = hxs.select("//div[@id='refinements']/*[starts-with(.,'Department')]/following-sibling::ul[1]/li/a[span[@class='refinementLink']]/@href").extract()
        for subcategory in subcategories:
            # Resolve the relative href against the current page URL.
            subcategorySearchLink = urlparse.urljoin(response.url, subcategory)
            yield Request(subcategorySearchLink, callback=self.parseSubcategory)

    def parseSubcategory(self, response):
        '''Parse subcategory search page and extract item links.'''
        hxs = HtmlXPathSelector(response)

        for itemLink in hxs.select('//a[@class="title"]/@href').extract():
            itemLink = urlparse.urljoin(response.url, itemLink)
            self.log('Requesting item page: ' + itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

        try:
            # Follow pagination until there is no "next page" link left.
            nextPageLink = hxs.select("//a[@id='pagnNextLink']/@href").extract()[0]
            nextPageLink = urlparse.urljoin(response.url, nextPageLink)
            self.log('\nGoing to next search page: ' + nextPageLink + '\n', log.DEBUG)
            yield Request(nextPageLink, callback=self.parseSubcategory)
        except IndexError:
            self.log('Whole category parsed: ' + response.url, log.DEBUG)

    def parseItem(self, response):
        '''Parse item page and extract product info.'''
        hxs = HtmlXPathSelector(response)
        # UItem and extractText are the author's own item class and helper.
        item = UItem()

        item['brand'] = self.extractText("//div[@class='buying']/span[1]/a[1]", hxs)
        item['title'] = self.extractText("//span[@id='btAsinTitle']", hxs)
        ...

And if even BaseSpider's start_urls is not flexible enough for you, override the start_requests method.
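
For instance, a minimal sketch of overriding start_requests, using the same old-style Scrapy API as the snippet above; the spider name, search terms, and URL template here are made up for illustration:

from scrapy.http import Request
from scrapy.spider import BaseSpider


class UStartRequestsSpider(BaseSpider):
    """Hypothetical sketch: build the initial requests in code instead of listing them in start_urls."""

    name = 'u_start_requests'
    allowed_domains = ['amazon.com']

    def start_requests(self):
        # Generate one search request per term dynamically.
        for term in ['apparel', 'shoes']:
            url = 'http://www.amazon.com/s/?url=search-alias%%3D%s' % term
            yield Request(url, callback=self.parse)

    def parse(self, response):
        self.log('Downloaded search page: ' + response.url)

This way the initial requests can come from anywhere (a database, a config file, command-line arguments) rather than a hard-coded start_urls list.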

沐歌 2024-11-25 18:42:28


If you want selective crawling, like fetching "Next" links for pagination and so on, it's better to write your own crawler. But for general crawling you should use CrawlSpider and filter out the links you don't need to follow using Rules and the process_links function.

Take a look at the CrawlSpider code in \scrapy\contrib\spiders\crawl.py; it isn't too complicated.
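
For comparison, a minimal sketch of the rules-based approach with a process_links filter, using the same old-style Scrapy API as the answer above; the spider name, URL patterns, and the filter_links helper are hypothetical:

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule


class UCrawlSpider(CrawlSpider):
    """Hypothetical sketch: let CrawlSpider rules do the link extraction and following."""

    name = 'u_crawl'
    allowed_domains = ['amazon.com']
    start_urls = ['http://www.amazon.com/s/?url=search-alias%3Dapparel']

    rules = (
        # Follow search/pagination pages, filtering the extracted links first.
        Rule(SgmlLinkExtractor(allow=(r'/s/', )),
             process_links='filter_links', follow=True),
        # Send product pages to a callback (CrawlSpider reserves parse() for itself).
        Rule(SgmlLinkExtractor(allow=(r'/dp/', )),
             callback='parse_item'),
    )

    def filter_links(self, links):
        # Drop extracted links we don't want to follow, e.g. sponsored results.
        return [link for link in links if 'sponsored' not in link.url]

    def parse_item(self, response):
        # Extract product fields here (omitted).
        self.log('Item page: ' + response.url)

With this layout the rules handle following links, and the spider only decides which links to keep (process_links) and how to parse the pages it actually cares about (parse_item).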
