Recursive callback over time

Published 2025-02-01 22:53:19


I want to scrape a website, collecting information from given webpages every 5 minutes. I implemented this by sleeping between recursive callbacks, like so:

    def _parse(self, response):
        status_loader = ItemLoader(Status())

        # perform parsing

        yield status_loader.load_item()

        # pause, then re-request the same page
        time.sleep(5)
        yield scrapy.Request(response.url, callback=self._parse,
                             dont_filter=True, meta=response.meta)

However, adding time.sleep(5) to the scraper seems to mess with the inner workings of Scrapy. For some reason Scrapy does send out the request, but the yielded items are not (or only rarely) written to the given output file.

I was thinking it has to do with Scrapy's request prioritization, which might prioritize sending a new request over yielding the scraped items. Could this be the case? I tried editing the settings to switch from a depth-first queue to a breadth-first queue, but this did not solve the problem.
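
(For reference, the depth-first to breadth-first switch mentioned here is presumably the documented recipe from the Scrapy FAQ, which boils down to these settings in settings.py:)

    # settings.py: crawl in breadth-first order (Scrapy FAQ recipe).
    # A positive DEPTH_PRIORITY plus FIFO queues replaces the default
    # LIFO, depth-first scheduling.
    DEPTH_PRIORITY = 1
    SCHEDULER_DISK_QUEUE = "scrapy.squeues.PickleFifoDiskQueue"
    SCHEDULER_MEMORY_QUEUE = "scrapy.squeues.FifoMemoryQueue"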

How would I go about scraping a website at a given interval, let's say 5 minutes?


Comments (1)

七分※倦醒 2025-02-08 22:53:19


It won't work because Scrapy is asynchronous by default: everything runs on a single-threaded event loop (the Twisted reactor), so time.sleep() blocks the whole process, including the machinery that exports your scraped items.

Try setting up a cron-style job like this instead:

    import logging
    import subprocess
    import sys
    import time

    import schedule

    # Configure logging so the logging.info() calls below are actually emitted.
    logging.basicConfig(level=logging.INFO)


    def subprocess_cmd(command):
        # Run the shell command and log whatever it wrote to stdout.
        process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
        proc_stdout = process.communicate()[0].strip()
        logging.info(proc_stdout)


    def cron_run_win():
        logging.info('start scraping... ####')
        subprocess_cmd('scrapy crawl <spider_name>')


    def cron_run_linux():
        logging.info('start scraping... ####')
        subprocess_cmd('scrapy crawl <spider_name>')


    def cron_run():
        # sys.platform is 'win32' on Windows and 'linux' on Linux; startswith()
        # avoids accidentally matching the 'win' in 'darwin' (macOS).
        if sys.platform.startswith('win'):
            cron_run_win()
            schedule.every(5).minutes.do(cron_run_win)

        elif sys.platform.startswith('linux'):
            cron_run_linux()
            schedule.every(5).minutes.do(cron_run_linux)

        # Keep the process alive and fire due jobs once per second.
        while True:
            schedule.run_pending()
            time.sleep(1)


    cron_run()

Depending on the OS you are using, this will run your desired spider every 5 minutes.
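
Alternatively, if you would rather keep the 5-minute wait inside the spider, the asynchrony problem can be avoided with a non-blocking sleep. Below is a minimal sketch, not a drop-in implementation: it assumes Scrapy >= 2.0 (coroutine callbacks) with the asyncio reactor enabled via TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor" in settings.py, and the spider name, start URL, and Status import path are hypothetical stand-ins for the ones in the question.

    import asyncio

    import scrapy
    from scrapy.loader import ItemLoader

    from myproject.items import Status  # hypothetical import path for the question's item


    class StatusSpider(scrapy.Spider):
        name = "status"                              # placeholder spider name
        start_urls = ["https://example.com/status"]  # placeholder URL

        # With a coroutine callback, awaiting asyncio.sleep() suspends only
        # this callback; the event loop keeps running, so the item yielded
        # below is exported immediately instead of being held up.
        async def parse(self, response):
            status_loader = ItemLoader(Status(), response=response)
            # ... perform parsing ...
            yield status_loader.load_item()

            await asyncio.sleep(300)  # wait 5 minutes without blocking the reactor
            yield scrapy.Request(response.url, callback=self.parse,
                                 dont_filter=True, meta=response.meta)

The key difference from the code in the question is that asyncio.sleep() hands control back to the event loop, whereas time.sleep() freezes the whole process, including the exporter that writes the items.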
