Systemd / Celery - Failed at step CHDIR spawning /bin/sh: No such file or directory
This is basically the same service file that the Celery docs tell you to use as a basic beginner's example.
With the configuration below, journalctl -ex shows the error "Failed at step CHDIR spawning /bin/sh: No such file or directory".
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=apache
Group=apache
#Environment=PATH=/opt/python39/lib:/home/ec2-user/DjangoProjects/myproj
#Environment=PATH=/home/ec2-user/DjangoProjects/myproj
EnvironmentFile=/etc/conf.d/celery
#WorkingDirectory=/opt/python39
WorkingDirectory=/home/ec2-usuer/DjangoProjects/myproj
ExecStart=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi start $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}"'
ExecReload=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi restart $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
Restart=always
[Install]
WantedBy=multi-user.target
/etc/conf.d/celery
# Name of nodes to start
# here we have a single node
#CELERYD_NODES="w1"
# or we could have three nodes:
CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
#CELERY_BIN="/home/ec2-user/.local/bin/celery"
CELERY_BIN="/opt/python39/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
CELERYD_CHDIR="/home/ec2-user/DjangoProjects/myproj"
# App instance to use
# comment out this line if you don't use an app
#CELERY_APP="myproj"
CELERY_APP="myproj.celery_tasks"
#CELERY_APP="myproj.celery_tasks:myapp"
# ^^ ??? confusion ??? ^^
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
# you may wish to add these options for Celery Beat
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"
export DJANGO_SETTINGS_MODULE="myproj.settings"
If I leave out the WorkingDirectory in the service file, it throws this error: "ModuleNotFoundError: No module named 'myproj'".
I've spent the last 2 days looking at different configurations and what not, and I haven't been able to get past one of these 2 errors. What am I missing?
I was able to solve it!
I found this link to a tutorial... which said that WorkingDirectory and CELERYD_CHDIR are the same thing.
I also read something on SO that suggested using a virtual environment... so I did that, too :).
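The answer doesn't show the exact virtual environment commands, but creating one and pointing CELERY_BIN at it would look roughly like this (the venv path /home/ec2-user/venvs/myproj is only a placeholder, not taken from the post):

python3.9 -m venv /home/ec2-user/venvs/myproj
/home/ec2-user/venvs/myproj/bin/pip install celery
# then, in /etc/conf.d/celery, point CELERY_BIN at the venv:
CELERY_BIN="/home/ec2-user/venvs/myproj/bin/celery"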
The updated files:
/etc/systemd/system/celery.service
/etc/conf.d/celery
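The updated file contents aren't included above; based on the description (WorkingDirectory matching CELERYD_CHDIR, and Celery installed in a virtual environment), the relevant lines presumably ended up along these lines, where the venv path is again only a placeholder:

# /etc/systemd/system/celery.service (excerpt)
WorkingDirectory=/home/ec2-user/DjangoProjects/myproj

# /etc/conf.d/celery (excerpt)
CELERYD_CHDIR="/home/ec2-user/DjangoProjects/myproj"
CELERY_BIN="/home/ec2-user/venvs/myproj/bin/celery"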
After I created the /var/run/celery/ and /var/log/celery/ folders, I ran chmod and gave the user and group that runs the service (apache) access to those folders.
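The exact commands aren't shown, but that step typically amounts to something like this (apache:apache follows the User/Group set in the service file; the mode is a guess):

sudo mkdir -p /var/run/celery /var/log/celery
sudo chown apache:apache /var/run/celery /var/log/celery
sudo chmod 755 /var/run/celery /var/log/celery

Note that on systems where /run is a tmpfs, /var/run/celery will disappear after a reboot, so RuntimeDirectory=celery in the unit file (or a tmpfiles.d entry) is a common alternative to creating it by hand.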