Error conditions and retries in gearman?
Can someone guide me on how gearman does retries when exceptions are thrown or when errors occur?

I use the python gearman client in a Django app and my workers are started as a Django command. I read from this blog post that retrying from error conditions is not straightforward and that it requires a sys.exit from the worker side.

Has this been fixed so that it retries, perhaps with sendFail or sendException? Also, does gearman support retries with an exponential backoff algorithm, for example, if an SMTP failure happens, does it retry after 2, 4, 8, 16 seconds, and so on?
1 Answer
To my understanding, Gearman employs a very "it's not my business" approach: it does not interfere with the jobs being performed unless a worker crashes. Any success/failure messages are supposed to be handled by the client, not by the Gearman server itself.
In foreground jobs, this implies that sendFail()/sendException() and the other send*() calls are directed to the client, and it is up to the client to decide whether to retry the job or not. This makes sense, as sometimes you might not need to retry.
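A minimal sketch of that client-side decision, assuming the python-gearman 2.x API (GearmanClient, submit_job, and the JOB_COMPLETE state constant); the task name and retry limit here are illustrative:

```python
import gearman

client = gearman.GearmanClient(['localhost:4730'])

def run_with_retries(task, payload, max_retries=3):
    """Resubmit a foreground job until it completes or the retries run out."""
    for attempt in range(max_retries + 1):
        request = client.submit_job(task, payload, wait_until_complete=True)
        if request.state == gearman.JOB_COMPLETE:
            return request.result
        # The worker reported failure (sendFail/sendException); Gearman itself
        # will not retry, so the client chooses whether to submit the job again.
    raise RuntimeError('%s still failing after %d attempts' % (task, max_retries + 1))
```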
In background jobs, all the send*() functions lose their meaning, as there is no client listening for the callbacks. As a result, the messages sent will simply be ignored by Gearman. The only condition under which the job will be retried is when the worker crashes (which can be emulated with an exit(XX) call, where XX is a non-zero value). This, of course, is not something you want to do, because workers are usually supposed to be long-running processes, not ones that have to be restarted after every unsuccessful job.
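For completeness, this is roughly what the crash-based retry looks like on the worker side, assuming the python-gearman 2.x worker API; deliver() is a hypothetical SMTP helper. Killing the process before the job is acknowledged makes gearmand hand the job to another worker, whereas raising an ordinary exception only reports a failure:

```python
import sys
import gearman

worker = gearman.GearmanWorker(['localhost:4730'])

def send_email(gearman_worker, gearman_job):
    try:
        deliver(gearman_job.data)   # hypothetical SMTP call
    except Exception:
        sys.exit(1)                 # crash on purpose: the job stays queued and is re-assigned
    return 'ok'

worker.register_task('send_email', send_email)
worker.work()
```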
Personally, I have solved this problem by extending the default GearmanJob class, intercepting the calls to the send*() functions and implementing the retry mechanism myself. Essentially, I pass all the retry-related data (maximum number of retries, number of times already retried) along with the workload and then handle everything myself. It is a bit cumbersome, but I understand why Gearman works this way: it just lets you handle all the application logic yourself.
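Not the answerer's actual code, but one way to approximate the same idea, assuming the python-gearman 2.x API: carry the retry bookkeeping inside a JSON workload and let the worker resubmit a fresh background job when the real work fails (do_send() is a hypothetical SMTP helper):

```python
import json
import gearman

MAX_RETRIES = 5
client = gearman.GearmanClient(['localhost:4730'])
worker = gearman.GearmanWorker(['localhost:4730'])

def send_email(gearman_worker, gearman_job):
    payload = json.loads(gearman_job.data)       # e.g. {"message": ..., "retries": 0}
    try:
        do_send(payload['message'])              # hypothetical SMTP call
    except Exception:
        if payload['retries'] < MAX_RETRIES:
            payload['retries'] += 1
            # Resubmit a new background job ourselves instead of expecting Gearman to retry.
            client.submit_job('send_email', json.dumps(payload),
                              background=True, wait_until_complete=False)
        raise                                    # still report the failure for this attempt
    return 'sent'

worker.register_task('send_email', send_email)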
Finally, regarding the ability to retry jobs with an exponential timeout (or any timeout, for that matter): Gearman has a feature for adding delayed jobs (look for SUBMIT_JOB_EPOCH in the protocol documentation), yet I am not sure about its status. The PHP extension and, I think, the Python module do not support it, and the docs say it may be removed in the future. But I understand it works at the moment; you just need to submit raw socket requests to Gearman to make it happen (and the exponential part should be implemented on your side, too).

However, this blog post argues that the SUBMIT_JOB_EPOCH implementation does not scale well. The author uses node.js and setTimeout() to make it work, and I've seen others use the unix utility at to do the same. Either way, Gearman will not do it for you. It will focus on reliability, but will let you focus on all the logic.
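In case it helps, here is a rough sketch of submitting such a delayed job over a raw socket, assuming SUBMIT_JOB_EPOCH is packet type 36 with NUL-separated arguments (function name, unique ID, epoch time, workload) as described in the protocol documentation; the exponential part (2, 4, 8, 16 seconds, ...) is computed on the caller's side. Verify the packet type and argument order against your gearmand version before relying on this:

```python
import socket
import struct
import time
import uuid

SUBMIT_JOB_EPOCH = 36   # assumed packet type; check the protocol doc for your gearmand

def submit_delayed(function, workload, attempt, host='localhost', port=4730):
    delay = 2 ** attempt                                  # attempt 1, 2, 3 ... -> 2, 4, 8 ... seconds
    run_at = int(time.time()) + delay
    args = b'\x00'.join([function.encode(), uuid.uuid4().hex.encode(),
                         str(run_at).encode(), workload.encode()])
    packet = b'\x00REQ' + struct.pack('>II', SUBMIT_JOB_EPOCH, len(args)) + args
    conn = socket.create_connection((host, port))
    try:
        conn.sendall(packet)
        return conn.recv(4096)                            # expect a JOB_CREATED response
    finally:
        conn.close()
```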