Python CrawlSpider

Posted on 2024-11-19 15:05:44

I've been learning how to use scrapy, though I had minimal experience with python to begin with. I started learning how to scrape using the BaseSpider. Now I'm trying to crawl websites, but I've encountered a problem that has really confused me. Here is the example code from the official site at http://doc.scrapy.org/topics/spiders.html.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        Rule(SgmlLinkExtractor(allow=('category\.php', ), deny=('subsection\.php', ))),

        # Extract links matching 'item.php' and parse them with the spider's method parse_item
        Rule(SgmlLinkExtractor(allow=('item\.php', )), callback='parse_item'),
    )

    def parse_item(self, response):
        print "WHY WONT YOU WORK!!!!!!!!"
        self.log('Hi, this is an item page! %s' % response.url)

        hxs = HtmlXPathSelector(response)
        item = TestItem()
        item['id'] = hxs.select('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
        item['name'] = hxs.select('//td[@id="item_name"]/text()').extract()
        item['description'] = hxs.select('//td[@id="item_description"]/text()').extract()
        return item

The only change I made is the statement:

print "WHY WONT YOU WORK!!!!!!!!"

But since I'm not seeing this print statement at runtime, I fear that this function isn't being reached. This is the code I took directly from the official scrapy site. What am I doing wrong or misunderstanding?

Comments (2)

奶茶白久 2024-11-26 15:05:44

start_urls = ['http://www.example.com']

example.com doesn't have any links for categories or items. This is just an example of what a scraped site URL might be.

This is a non-working example in the documentation.
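
For the callback to fire, the link-extractor rules have to match links that actually appear on the pages being crawled. Here is a minimal sketch along the same lines as the docs example; shop.example.org and its category.php/item.php pages are hypothetical stand-ins for a site that really exposes such links:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class ShopSpider(CrawlSpider):
    name = 'shop.example.org'                    # hypothetical site
    allowed_domains = ['shop.example.org']
    start_urls = ['http://shop.example.org/']    # must actually link to category.php pages

    rules = (
        # No callback: just follow category pages to discover more links.
        Rule(SgmlLinkExtractor(allow=('category\.php', ))),
        # Item pages are handed to parse_item.
        Rule(SgmlLinkExtractor(allow=('item\.php', )), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Reached an item page: %s' % response.url)

If none of the links reachable from start_urls match the allow patterns, parse_item is never scheduled, which is exactly what happens when the spider is pointed at example.com.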

一杆小烟枪 2024-11-26 15:05:44

You might try making a spider that you know works and see whether print statements do anything where you have placed them. I seem to remember trying the same thing a long time ago, and they didn't show up even though the code was executed.
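
Scrapy runs spiders inside the Twisted reactor and writes its own log, so output from a bare print can easily get lost in the crawl output even when the line does execute. A more visible check is the spider's log method that the docs example already uses. A minimal sketch, assuming the same pre-1.0 Scrapy API as the question:

from scrapy import log
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    rules = (
        Rule(SgmlLinkExtractor(allow=('item\.php', )), callback='parse_item'),
    )

    def parse_item(self, response):
        # An INFO-level log line stands out from Scrapy's DEBUG chatter,
        # so it is easy to grep for when checking whether the callback ran.
        self.log('parse_item reached: %s' % response.url, level=log.INFO)

Running scrapy crawl example.com and searching the output for "parse_item reached" then tells you whether the callback was ever invoked; against example.com it never will be, because no link there matches item.php.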
