Celery tasks that communicate with Twitter

Posted on 2024-10-22 04:02:14

What is the right approach when writing Celery tasks that communicate with a service that has rate limits and is sometimes unavailable (not responding) for a long period of time?

Do I have to use task retries? What if the service is unavailable for too long? Is there a way to store these tasks for execution after a long period of time?

What if this is a subtask in a long task?

Comments (1)

硪扪都還晓 2024-10-29 04:02:14

First, I suggest setting a socket timeout so the task does not wait indefinitely for a response.
You can then catch the socket timeout exception and, in that particular case, retry with a long delay, for example 15 minutes.
In general, though, I use an incremental retry with a percentage increment, which increases the delay on every retry of the task; this is useful when a task depends on an external service that can be unavailable for a long time.
You can set a high number of retries on the task (for example 50) and then set the base retry delay with the attribute

# 20 seconds
self.default_retry_delay = 20

After that, you can implement a method like this for your task:

def incrementalRetry(self, exc, perc=20, args=None):
    """Retry the task, increasing the delay by `perc` percent each time (20% by default)."""
    if args:
        self.request.args = args

    # Start from the delay used on the previous attempt, falling back
    # to the task's default retry delay on the first retry.
    delay = self.request.kwargs.get('retry_delay', self.default_retry_delay)

    # Grow the delay by `perc` percent for this attempt.
    retry_delay = delay + round((delay * perc) / 100, 2)

    # Propagate the new delay so the next retry keeps increasing it.
    kwargs = dict(self.request.kwargs, retry_delay=retry_delay)

    self.retry(args=self.request.args, kwargs=kwargs,
               exc=exc, countdown=retry_delay, max_retries=self.max_retries)
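
As a point of reference, here is a minimal sketch of how these pieces could fit together on a recent Celery version, using the requests library for the HTTP call. The broker URL, the IncrementalRetryTask base class, the fetch_timeline task and the placeholder endpoint are illustrative assumptions, not part of the original answer.

from celery import Celery, Task
import requests

app = Celery('twitter_tasks', broker='redis://localhost:6379/0')  # assumed broker

class IncrementalRetryTask(Task):
    """Base class carrying the incrementalRetry helper shown above (assumed layout)."""
    default_retry_delay = 20   # seconds
    max_retries = 50

    def incrementalRetry(self, exc, perc=20):
        delay = self.request.kwargs.get('retry_delay', self.default_retry_delay)
        retry_delay = delay + round((delay * perc) / 100, 2)
        kwargs = dict(self.request.kwargs, retry_delay=retry_delay)
        self.retry(args=self.request.args, kwargs=kwargs,
                   exc=exc, countdown=retry_delay, max_retries=self.max_retries)

TWITTER_URL = 'https://api.twitter.com/some-endpoint'  # placeholder endpoint

@app.task(bind=True, base=IncrementalRetryTask)
def fetch_timeline(self, user_id, retry_delay=None):
    try:
        # Short socket timeout so an unresponsive service fails fast instead of hanging.
        resp = requests.get(TWITTER_URL, params={'user_id': user_id}, timeout=10)
        resp.raise_for_status()
        return resp.json()
    except requests.Timeout as exc:
        # Service not responding at all: back off for a long, fixed period.
        raise self.retry(exc=exc, countdown=15 * 60)
    except requests.RequestException as exc:
        # Other transient failures (including rate-limit responses): grow the delay by 20% each attempt.
        self.incrementalRetry(exc)

Note that the task accepts a retry_delay keyword argument so that the growing delay passed back by incrementalRetry does not break its signature.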

What if this is a subtask in a long task?

If you don't need the result, you can launch it asynchronously with task.delay(...) (or task.apply_async(args=[...])).
Task groups are also a nice feature: they let you launch several different tasks and, once all of them have finished, do something else in your workflow.
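
A rough sketch of that workflow, assuming the hypothetical fetch_timeline task from above and an assumed aggregate_results callback task: Celery's chord primitive runs a group of tasks and then a callback once all of them have finished.

from celery import group

# Launch several subtasks in parallel...
header = group(fetch_timeline.s(uid) for uid in [1, 2, 3])

# ...and run a callback once every one of them has finished.
result = chord(header)(aggregate_results.s())  # chord is imported from celery as well

# Fire-and-forget, when the result is not needed:
fetch_timeline.delay(42)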
