Celery task not running in docker-compose

Posted 2025-02-06 10:42:27

I have a docker-compose setup with three components: app, celery, and redis. These are implemented with Django REST Framework.

I have seen this question several times on Stack Overflow and have tried all the solutions listed. However, the Celery task is not running.

Celery behaves the same way as the app: it starts the Django project, but it never runs the task.

docker-compose.yml

version: "3.8"
services:
  app:
    build: .
    volumes:
      - .:/django
    ports:
      - 8000:8000
    image: app:django
    container_name: myapp
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - redis
  redis:
    image: redis:alpine
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - ./redis/data:/data
    restart: always
    environment:
      - REDIS_PASSWORD=
    healthcheck:
      test: redis-cli ping
      interval: 1s
      timeout: 3s
      retries: 30

  celery:
    image: celery:3.1
    container_name: celery
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    command: celery -A myapp worker -l INFO -c 8
    volumes:
      - .:/django
    depends_on:
      - redis
      - app
    links:
      - redis

Dockerfile

FROM python:3.9

RUN useradd --create-home --shell /bin/bash django
USER django

ENV DockerHOME=/home/django

RUN mkdir -p $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PIP_DISABLE_PIP_VERSION_CHECK 1

USER root
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable

USER django
WORKDIR /home/django
COPY requirements.txt ./

# set path
ENV PATH=/home/django/.local/bin:$PATH

# Upgrade pip and install requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .

EXPOSE 8000

# entrypoint
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]

docker-entrypoint.sh

# run migration first
python manage.py migrate

# create test dev user and test superuser
echo 'import create_test_users' | python manage.py shell

# start the server
python manage.py runserver 0.0.0.0:8000

celery.py

from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = Celery('myapp', broker='redis://redis:6379')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

settings.py

CELERY_BROKER_URL     = os.getenv('REDIS_URL') # "redis://redis:6379"
CELERY_RESULT_BACKEND = os.getenv('REDIS_URL') # "redis://redis:6379"
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Africa/Nairobi'


红尘作伴 2025-02-13 10:42:27


Your docker-entrypoint.sh script unconditionally runs the Django server. Since you declare it as the image's ENTRYPOINT, the Compose command: is passed to it as arguments, but your script ignores them.

The best way to fix this is to pass the specific command ("run the Django server", "run a Celery worker") as the Dockerfile CMD or the Compose command:. The entrypoint script then ends with the shell command exec "$@" to run whatever command it was given.

#!/bin/sh
python manage.py migrate
echo 'import create_test_users' | python manage.py shell

# run the container CMD
exec "$@"

In your Dockerfile you need to declare a default CMD.

ENTRYPOINT ["./docker-entrypoint.sh"]
CMD python manage.py runserver 0.0.0.0:8000

Now, in your Compose setup, if you don't specify a command:, the default CMD is used; if you do, your command runs instead. In both cases the entrypoint script runs first, and when it reaches the final exec "$@" line it hands control to the provided command.
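You can see the exec "$@" pattern in isolation, outside of Docker entirely. This is a minimal sketch (the script path and echoed strings are made up for the demo):

```shell
# Write a tiny entrypoint-style script that does setup, then hands off.
cat > /tmp/entrypoint-demo.sh <<'EOF'
#!/bin/sh
echo "setup steps run first"
exec "$@"   # replace this shell process with whatever command was passed in
EOF
chmod +x /tmp/entrypoint-demo.sh

# The arguments play the role of the container CMD / Compose command:
/tmp/entrypoint-demo.sh echo "this is the CMD"
```

Because exec replaces the shell rather than spawning a child, the passed-in command becomes PID 1 in a container and receives signals (e.g. docker stop's SIGTERM) directly.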

That means you can delete the command: override from your app container. (You do need to leave it for the Celery container.) You can simplify this setup further by removing the image: and container_name: settings (Compose will pick reasonable defaults for both of these) and the volumes: mount that hides the image content.
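Putting those simplifications together, the trimmed-down compose file might look something like this (a sketch only; service and module names are copied from your question, and the celery service keeps its explicit command: while the app falls back to the image's default CMD):

```yaml
version: "3.8"
services:
  app:
    build: .
    ports:
      - 8000:8000
    depends_on:
      - redis
  redis:
    image: redis:alpine
    volumes:
      - ./redis/data:/data
    restart: always
  celery:
    build: .
    restart: unless-stopped
    command: celery -A myapp worker -l INFO -c 8
    depends_on:
      - redis
      - app
```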
