Dockerize FastAPI + Ray - Ray stops after the container runs

Posted on 2025-02-09 15:58:35


I am trying to dockerize this FastAPI app combined with Ray. The issue is that the Ray instance stops right after the container starts, once the image has finished building. Kindly check the Dockerfile and the container output below.


Dockerfile:

FROM python:3.10.5

WORKDIR /app/

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PIPENV_VENV_IN_PROJECT=1

COPY . .

RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip3 install pipenv
RUN pip3 install "ray[serve]"

ADD Pipfile.* /app/
RUN pipenv install --dev

WORKDIR /app/
COPY . .

EXPOSE 8888
EXPOSE 6379
EXPOSE 8265

ENV NLP_HOST=0.0.0.0
ENV NLP_PORT=8888

ENTRYPOINT [ "ray", "start", "--head" ]
CMD [ ".venv/bin/python", "serve_with_fastapi.py" ]
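
A note on the last two instructions: when ENTRYPOINT and CMD are both written in exec form, Docker appends the CMD items as arguments to the ENTRYPOINT, so (assuming neither is overridden in a compose file, which the question does not show) the effective startup command is roughly:

ray start --head .venv/bin/python serve_with_fastapi.py

Whether or not those extra arguments are accepted, ray start --head only launches the Ray daemons in the background and then returns, so the container's main process finishes almost immediately. That matches the clean exit with code 0 in the log below.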

Successfully built 196cd1ca73ce
Successfully tagged nlp:latest
Creating nlp ... done
Attaching to nlp
nlp | 2022-06-22 21:26:59,888 INFO usage_lib.py:320 -- Usage stats collection is enabled by default without user confirmation because this stdin is detected to be non-interactively. To disable this, add --disable-usage-stats to the command that starts the cluster, or run the following command: ray disable-usage-stats before starting the cluster. See https://docs.ray.io/en/master/cluster/usage-stats.html for more details.
nlp | 2022-06-22 21:26:59,888 INFO scripts.py:715 -- Local node IP: 172.26.0.2
nlp | 2022-06-22 21:27:01,942 INFO services.py:1470 -- View the Ray dashboard at http://127.0.0.1:8265
nlp | 2022-06-22 21:27:01,945 WARNING services.py:2002 -- WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 67108864 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=4.91gb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 30% of available RAM.
nlp | 2022-06-22 21:27:02,261 SUCC scripts.py:757 -- --------------------
nlp | 2022-06-22 21:27:02,261 SUCC scripts.py:758 -- Ray runtime started.
nlp | 2022-06-22 21:27:02,261 SUCC scripts.py:759 -- --------------------
nlp | 2022-06-22 21:27:02,262 INFO scripts.py:761 -- Next steps
nlp | 2022-06-22 21:27:02,262 INFO scripts.py:762 -- To connect to this Ray runtime from another node, run
nlp | 2022-06-22 21:27:02,262 INFO scripts.py:765 -- ray start --address='172.26.0.2:6379'
nlp | 2022-06-22 21:27:02,262 INFO scripts.py:770 -- Alternatively, use the following Python code:
nlp | 2022-06-22 21:27:02,262 INFO scripts.py:772 -- import ray
nlp | 2022-06-22 21:27:02,262 INFO scripts.py:776 -- ray.init(address='auto')
nlp | 2022-06-22 21:27:02,262 INFO scripts.py:788 -- To connect to this Ray runtime from outside of the cluster, for example to
nlp | 2022-06-22 21:27:02,262 INFO scripts.py:792 -- connect to a remote cluster from your laptop directly, use the following
nlp | 2022-06-22 21:27:02,262 INFO scripts.py:796 -- Python code:
nlp | 2022-06-22 21:27:02,262 INFO scripts.py:798 -- import ray
nlp | 2022-06-22 21:27:02,263 INFO scripts.py:799 -- ray.init(address='ray://<head_node_ip_address>:10001')
nlp | 2022-06-22 21:27:02,263 INFO scripts.py:808 -- If connection fails, check your firewall settings and network configuration.
nlp | 2022-06-22 21:27:02,263 INFO scripts.py:816 -- To terminate the Ray runtime, run
nlp | 2022-06-22 21:27:02,263 INFO scripts.py:817 -- ray stop
nlp exited with code 0
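
If the intent is for this container to keep running with both the Ray head and the FastAPI/Serve app, one common pattern is to replace the ENTRYPOINT/CMD pair with a small wrapper script that starts Ray and then runs the Python app in the foreground. Below is a minimal sketch; the entrypoint.sh name, the Dockerfile lines that copy it, and the assumption that serve_with_fastapi.py blocks while serving are all illustrative, not part of the original setup.

entrypoint.sh (sketch):

#!/bin/sh
set -e
# Start the Ray head node; this command daemonizes and returns immediately.
ray start --head
# Run the app in the foreground so PID 1 stays alive.
# Uses the same interpreter path as the original CMD.
exec .venv/bin/python serve_with_fastapi.py

Dockerfile changes (sketch):

COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]

Alternatively, if only the Ray head needs to run in this container, ray start --head --block keeps the process in the foreground instead of daemonizing.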
