Celery task cannot reach Redis from a Flask docker container

Posted 2025-01-14 03:49:02 · 3,645 characters · 1 view · 0 comments


I am trying to run a Celery task in a Flask Docker container, and I get the following error when the task is executed:

web_1     |     sock.connect(socket_address)
web_1     | OSError: [Errno 99] Cannot assign requested address
web_1     | 
web_1     | During handling of the above exception, another exception occurred:

web_1     |   File "/opt/venv/lib/python3.8/site-packages/redis/connection.py", line 571, in connect
web_1     |     raise ConnectionError(self._error_message(e))
web_1     | redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.

Without the Celery task, the application works fine.

docker-compose.yml

version: '3'
services:
  web:
    build: ./
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    environment:
      - FLASK_APP=app/main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    depends_on:
      - redis

  redis:
    container_name: redis
    image: redis:6.2.6
    ports:
      - "6379:6379"
    expose: 
      - "6379"

  worker:
    build:
      context: ./
    hostname: worker
    command: sh -c "cd /app/routes && celery -A celery_tasks.celery worker --loglevel=info"
    volumes:
      - ./app:/app
    links:
      - redis
    depends_on:
      - redis

main.py

from flask import Flask
from instance import config, exts
from decouple import config as con

def create_app(config_class=config.Config):
    app = Flask(__name__)
    app.config.from_object(config.Config)
    app.secret_key = con('flask_secret_key')
    
    exts.mail.init_app(app)

    from routes.test_route import test_api
    app.register_blueprint(test_api)
    return app

app = create_app()

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=80)

I am using Flask blueprints to split the API routes.

test_route.py

from flask import Flask, render_template, Blueprint
from instance.exts import celery

test_api = Blueprint('test_api', __name__)


@test_api.route('/test/<string:name>')
def testfnn(name):
    task = celery.send_task('CeleryTask.reverse',args=[name])
    return task.id

The Celery tasks are written in a separate file.

celery_tasks.py

from celery import Celery
from celery.utils.log import get_task_logger
from decouple import config
import time

celery = Celery('tasks',
                broker=config('CELERY_BROKER_URL'),
                backend=config('CELERY_RESULT_BACKEND'))

class CeleryTask:
    @celery.task(name='CeleryTask.reverse')
    def reverse(string):
        time.sleep(25)
        return string[::-1]
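
For reference, the task body above is plain Python; the reversal logic can be checked on its own, without a broker or worker (a minimal sketch, with the Celery decorator removed and the sleep shortened so it runs standalone):

```python
import time

def reverse(string):
    # Same logic as CeleryTask.reverse above, minus the Celery decorator;
    # the sleep only simulates a slow task.
    time.sleep(0.01)
    return string[::-1]

print(reverse("hello"))  # → olleh
```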

.env

CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'

Dockerfile

FROM tiangolo/uwsgi-nginx:python3.8
RUN apt-get update
WORKDIR /app
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN python -m pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
COPY ./app /app
CMD ["python", "app/main.py"]

requirements.txt

Flask==2.0.3
celery==5.2.3
python-decouple==3.5
Flask-Mail==0.9.1
redis==4.0.2
SQLAlchemy==1.4.32 

Folder Structure

(screenshot of the folder structure not reproduced)

Thanks in Advance

Comments (1)

动次打次papapa 2025-01-21 03:49:02


At the end of your docker-compose.yml you can add:

networks:
  your_net_name:
    name: your_net_name

And in each container:

networks:
      - your_net_name

These two steps will put all the containers on the same network. Docker creates one by default, but since I've had problems with networks being auto-renamed, I think this approach gives you more control.

Finally, I'd also change your env variables to use the container's address:

CELERY_BROKER_URL=redis://redis_addr/0
CELERY_RESULT_BACKEND=redis://redis_addr/0

So you'd also add this section to your redis container:

hostname: redis_addr

This way the env var will get whatever address docker has assigned to the container.
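
Putting the answer's pieces together, the compose file would look roughly like this (a sketch only; `your_net_name` and `redis_addr` are placeholder names from the answer, and the elided keys stay as in the question):

```yaml
version: '3'
services:
  web:
    build: ./
    networks:
      - your_net_name
    depends_on:
      - redis
    # ports, volumes, environment as in the question

  redis:
    image: redis:6.2.6
    hostname: redis_addr   # the name the CELERY_* URLs resolve to
    networks:
      - your_net_name

  worker:
    build:
      context: ./
    networks:
      - your_net_name
    depends_on:
      - redis
    # command and volumes as in the question

networks:
  your_net_name:
    name: your_net_name
```

With this layout, `redis://redis_addr/0` in `.env` resolves to the Redis container instead of `localhost`, which inside a container refers to the container itself.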
