Running Scrapy from a script - it hangs

Posted 2024-11-17 14:37:57

I'm trying to run Scrapy from a script as discussed here. It suggested using this snippet, but when I do, it hangs indefinitely. The snippet was written back in version 0.10; is it still compatible with the current stable release?
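(For reference, not from the original post: later Scrapy releases, 1.x and newer, changed the script-running API. The following is only a minimal sketch of that newer approach, assuming a version that ships scrapy.crawler.CrawlerProcess accepting a settings dict; the spider name and URL are placeholders.)

import scrapy
from scrapy.crawler import CrawlerProcess

class ExampleSpider(scrapy.Spider):
    name = 'example'                        # placeholder name
    start_urls = ['http://site_to_scrape']  # placeholder URL

    def parse(self, response):
        yield {'url': response.url}         # plain dicts work as items

process = CrawlerProcess(settings={'LOG_ENABLED': True})
process.crawl(ExampleSpider)  # schedule the spider
process.start()               # blocks until the crawl finishes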

1 Answer

攒一口袋星星 2024-11-24 14:37:57
from scrapy import signals, log
from scrapy.spider import BaseSpider           # base spider class in old (0.x) Scrapy
from scrapy.xlib.pydispatch import dispatcher
from scrapy.crawler import CrawlerProcess
from scrapy.conf import settings
from scrapy.http import Request

def handleSpiderIdle(spider):
    '''Handle spider idle event.''' # http://doc.scrapy.org/topics/signals.html#spider-idle
    print '\nSpider idle: %s. Restarting it... ' % spider.name
    for url in spider.start_urls: # reschedule start urls
        spider.crawler.engine.crawl(Request(url, dont_filter=True), spider)

mySettings = {'LOG_ENABLED': True, 'ITEM_PIPELINES': 'mybot.pipeline.validate.ValidateMyItem'} # global settings http://doc.scrapy.org/topics/settings.html

settings.overrides.update(mySettings)

crawlerProcess = CrawlerProcess(settings)
crawlerProcess.install()
crawlerProcess.configure()

class MySpider(BaseSpider):
    name = 'myspider'                       # every spider needs a unique name
    start_urls = ['http://site_to_scrape']
    def parse(self, response):
        # extract data and build the item here
        yield item

spider = MySpider() # create a spider ourselves
crawlerProcess.queue.append_spider(spider) # add it to the spiders pool

dispatcher.connect(handleSpiderIdle, signals.spider_idle) # use this if you need to handle the idle event (restart spider?)

log.start() # depends on LOG_ENABLED
print "Starting crawler."
crawlerProcess.start()
print "Crawler stopped."

UPDATE:

If you also need per-spider settings, see this example:

for spiderConfig in spiderConfigs:
    spiderConfig = spiderConfig.copy()                  # a dict like the global settings dict above
    spiderName = spiderConfig.pop('name')               # the spider's name comes from the config, so one spider class can run as several differently named instances
    spiderModuleName = spiderConfig.pop('spiderClass')  # the module containing the spider is also given in the config
    spiderModule = __import__(spiderModuleName, {}, {}, [''])  # import that module
    SpiderClass = spiderModule.Spider                   # the spider class is named 'Spider'
    spider = SpiderClass(name=spiderName, **spiderConfig)  # create the spider with its particular settings
    crawlerProcess.queue.append_spider(spider)          # add the spider to the spider pool

Example of settings in the file for spiders:

name = punderhere_com    
allowed_domains = plunderhere.com
spiderClass = scraper.spiders.plunderhere_com
start_urls = http://www.plunderhere.com/categories.php?
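(Not from the original answer: assuming each per-spider config is a flat key = value text file like the example above, the spiderConfigs list used in the loop could be built with a small parser. parse_spider_config and the filename below are hypothetical.)

def parse_spider_config(path):
    '''Read a flat "key = value" file into a dict (hypothetical helper).'''
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or '=' not in line:
                continue
            key, value = line.split('=', 1)
            config[key.strip()] = value.strip()
    return config

# e.g. one config file per spider (hypothetical filename)
spiderConfigs = [parse_spider_config('plunderhere_com.cfg')]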