Problems with Celery and a Redis backend

Published 2025-01-03 21:37:04 · 6 views


I have a system set up that uses Celery with a Redis backend to run a bunch of asynchronous tasks, such as sending emails, pulling social data, crawling, etc. Everything is working great, but I am having trouble figuring out how to monitor the system (i.e. the number of queued-up messages). I started looking through the Celery source, but I figured I would post my questions here.
First off, here are my configurations:

BROKER_BACKEND              = "redis"
BROKER_HOST                 = "localhost"
BROKER_PORT                 = 6379
BROKER_VHOST                = "1"
REDIS_CONNECT_RETRY         = True
REDIS_HOST                  = "localhost"
REDIS_PORT                  = 6379
REDIS_DB                    = "0"
CELERY_SEND_EVENTS          = True
CELERYD_LOG_LEVEL           = 'INFO'
CELERY_RESULT_BACKEND       = "redis"
CELERY_TASK_RESULT_EXPIRES  = 25
CELERYD_CONCURRENCY         = 8
CELERYD_MAX_TASKS_PER_CHILD = 10
CELERY_ALWAYS_EAGER         = True

The first thing I am trying to do is monitor how many messages are in my queue. I assume that, behind the scenes, the Redis backend is just pushing/popping from a list, although I cannot seem to find that in the code. So I mocked up a simulation where I start about 100 tasks and then try to find them in Redis.
My celeryd is running like this:

python manage.py celeryd -c 4 --loglevel=DEBUG -n XXXXX --logfile=logs/celery.log

So I should only have 4 concurrent workers at once. There are two things I do not understand:
Problem 1:
After I have queued up 100 tasks and look for them in Redis, I only see the following:

$ redis-cli 
redis 127.0.0.1:6379> keys * 
1) "_kombu.binding.celery" 
redis 127.0.0.1:6379> select 1 
OK 
redis 127.0.0.1:6379[1]> keys * 
1) "_kombu.binding.celery" 
2) "_kombu.binding.celeryd.pidbox" 
redis 127.0.0.1:6379[1]>

I cannot seem to find the tasks, so I cannot get a count of how many are queued (technically, there should be 96, since I only run 4 concurrent tasks).
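For what it's worth, the guess about push/pop is right: as far as I can tell, kombu's Redis transport keeps each queue as a Redis list named after the queue ("celery" by default), with producers pushing serialized messages onto one end and workers popping them off the other, so the list length is the queue depth. A stdlib-only sketch of that pattern, with a `deque` standing in for the Redis list (no real broker involved, names are illustrative):

```python
from collections import deque
import json

# A deque standing in for the Redis list that backs the "celery" queue.
queue = deque()

def lpush(q, message):
    # What the producer side does: serialize and push onto the left.
    q.appendleft(json.dumps(message))

def brpop(q):
    # What a worker does: pop from the right (blocking, in real life).
    return json.loads(q.pop())

for i in range(100):
    lpush(queue, {"task": "send_email", "id": i})

print(len(queue))        # the LLEN equivalent: messages still queued
msg = brpop(queue)       # oldest message comes off first (FIFO)
print(msg["id"], len(queue))
```

In a live setup the equivalent check would be an `LLEN` on the queue's list in the broker database, but as written above my 100 tasks never reach Redis at all.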

Problem 2

$ ps aux | grep celeryd | cut -c 13-120 
 41258   0.2  0.2  2526232   9440 s004  S+    2:27PM   0:07.35 python 
manage.py celeryd -c 4 --loglevel=DEBU 
 41261   0.0  0.1  2458320   2468 s004  S+    2:27PM   0:00.09 python 
manage.py celeryd -c 4 --loglevel=DEBU 
 38457   0.0  0.8  2559848  34672 s004  T    12:34PM   0:18.59 python 
manage.py celeryd -c 4 --loglevel=INFO 
 38449   0.0  0.9  2517244  36752 s004  T    12:34PM   0:35.72 python 
manage.py celeryd -c 4 --loglevel=INFO 
 38443   0.0  0.2  2524136   6456 s004  T    12:34PM   0:10.15 python 
manage.py celeryd -c 4 --loglevel=INFO 
 84542   0.0  0.0  2460112      4 s000  T    27Jan12   0:00.74 python 
manage.py celeryd -c 4 --loglevel=INFO 
 84536   0.0  0.0  2506728      4 s000  T    27Jan12   0:00.51 python 
manage.py celeryd -c 4 --loglevel=INFO 
 41485   0.0  0.0  2435120    564 s000  S+    2:54PM   0:00.00 grep 
celeryd 
 41264   0.0  0.1  2458320   2480 s004  S+    2:27PM   0:00.09 python 
manage.py celeryd -c 4 --loglevel=DEBU 
 41263   0.0  0.1  2458320   2480 s004  S+    2:27PM   0:00.09 python 
manage.py celeryd -c 4 --loglevel=DEBU 
 41262   0.0  0.1  2458320   2480 s004  S+    2:27PM   0:00.09 python 
manage.py celeryd -c 4 --loglevel=DEBU 

If anyone could explain this for me, it would be great.


Comments (2)

为你拒绝所有暧昧 2025-01-10 21:37:04


Your configuration has CELERY_ALWAYS_EAGER = True. This means that the tasks run locally and hence you won't see them in Redis. From the docs: http://celery.readthedocs.org/en/latest/configuration.html#celery-always-eager

CELERY_ALWAYS_EAGER

If this is True, all tasks will be executed
locally by blocking until the task returns. apply_async() and
Task.delay() will return an EagerResult instance, which emulates the
API and behavior of AsyncResult, except the result is already
evaluated.

That is, tasks will be executed locally instead of being sent to the
queue.
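The docs excerpt above can be mimicked in a few lines of plain Python: a toy `delay()` that, in eager mode, runs the task in-process and hands back an already-evaluated result object. This is a stand-in sketch, not real Celery code, even though the class name mirrors Celery's `EagerResult`:

```python
# Toy illustration of eager mode: nothing is serialized, nothing touches Redis.
ALWAYS_EAGER = True

class EagerResult:
    """Mimics the AsyncResult API, but the value is already computed."""
    def __init__(self, value):
        self._value = value
    def ready(self):
        return True            # always done: the call already happened
    def get(self):
        return self._value

def delay(func, *args):
    if ALWAYS_EAGER:
        # Task runs synchronously in the calling process, right now.
        return EagerResult(func(*args))
    raise NotImplementedError("would serialize and push to the broker here")

res = delay(lambda x: x * 2, 21)
print(res.ready(), res.get())
```

This is why the redis-cli session in the question shows only kombu's binding keys and no task messages: with eager mode on, the broker is never exercised.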

稀香 2025-01-10 21:37:04


I have never used Celery, but if you want to figure out what it is doing, one way to go about it is to connect to the Redis instance using redis-cli and then run the monitor command. This will dump all commands being executed against the Redis database, so you will be able to see exactly what is going on.
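MONITOR output is chatty, since it shows every command from every client. A tiny stdlib filter like the one below helps pick out just the queue traffic; in real use you would feed it MONITOR lines on stdin, but here it runs over canned sample lines that are made up to resemble the output format:

```python
import re

# Hypothetical lines roughly in redis MONITOR's output format (illustrative only).
sample = [
    '1327695417.165 "LPUSH" "celery" "{...task message...}"',
    '1327695417.170 "GET" "some-unrelated-key"',
    '1327695417.181 "BRPOP" "celery" "1"',
]

# Commands that touch list-backed queues.
QUEUE_OPS = re.compile(r'"(LPUSH|RPUSH|BRPOP|BLPOP|LLEN)"')

matches = [line for line in sample if QUEUE_OPS.search(line)]
for line in matches:
    print(line)
```

With a filter like this running against live MONITOR output, the push/pop activity on the "celery" list (or the absence of it, in eager mode) becomes easy to spot.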
