Can't find Python packages installed in custom Docker Image

Posted 2025-01-25 11:46:24

I am creating a Docker container that runs Python 3.6.15. The pip install step in my Dockerfile runs during the build process, but when I start the container after the build completes and try to execute functions inside it, the 'installed' packages do not exist.

For more context, here is my Dockerfile. For clarity, I am building a Docker container that is being uploaded to AWS ECR to be used in a Lambda function but I don't think that's entirely relevant to this question (good for context though):

# Define function directory
ARG FUNCTION_DIR="/function"

FROM python:3.6 as build-image

# Install aws-lambda-cpp build dependencies
RUN apt-get clean && apt-get update && \
  apt-get install -y \
  g++ \
  make \
  cmake \
  unzip \
  libcurl4-openssl-dev \
  ffmpeg

# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}

# Copy function code
COPY . ${FUNCTION_DIR}

# Install the runtime interface client
RUN /usr/local/bin/python -m pip install \
        --target ${FUNCTION_DIR} \
        awslambdaric

# Install the function's requirements
COPY requirements.txt /requirements.txt
RUN /usr/local/bin/python -m pip install -r requirements.txt

# Multi-stage build: grab a fresh copy of the base image
FROM python:3.6

# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}

# Copy in the build image dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}

COPY entry-point.sh /entry_script.sh
ADD aws-lambda-rie /usr/local/bin/aws-lambda-rie
ENTRYPOINT [ "/entry_script.sh" ]

CMD [ "app.handler" ]

When I run my docker run command in Terminal, I can see that it is collecting and installing the packages from the requirements.txt file that is in my project's root. I then try to run it and get an Import Module error. To troubleshoot, I ran a few docker exec commands, such as:

docker exec <container-id> bash -c "ls"  # This returns the folder structure which looks great

docker exec <container-id> bash -c "pip freeze". # This only returns 'pip', 'wheel' and some other basic Python modules

The only way I could solve it is that after I build and run it, I run this command:

docker exec <container-id> bash -c "/usr/local/bin/python -m pip install -r requirements.txt"

which manually installs the modules; they then show up in the freeze command and I can execute the code. This is not ideal, as I would like pip install to run correctly during the build process so there are fewer steps in the future as I make changes to the code.

Any pointers as to where I am going wrong would be great, thank you!


Comments (1)

空心↖ 2025-02-01 11:46:24

According to the Docker docs on multi-stage builds:

With multi-stage builds, you use multiple FROM statements in your
Dockerfile. Each FROM instruction can use a different base, and each
of them begins a new stage of the build. You can selectively copy
artifacts from one stage to another, leaving behind everything you
don’t want in the final image.

So the second FROM python:3.6 in the Dockerfile starts a fresh stage from the base image, discarding the modules that were installed in the build stage.

The subsequent COPY --from=build-image brings over what was in /function (the awslambdaric package, which was installed with --target), but not the requirements.txt packages, which the other pip install put into the build stage's system site-packages.
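One way to act on that, sketched below against the Dockerfile in the question (untested, and assuming the handler's imports resolve from the function directory the same way awslambdaric does), is to install the requirements into ${FUNCTION_DIR} in the build stage as well, so the existing COPY --from=build-image line carries them into the final image:

# Install the function's requirements into the function directory,
# mirroring the awslambdaric install, so they survive the copy into
# the final stage.
COPY requirements.txt /requirements.txt
RUN /usr/local/bin/python -m pip install \
        --target ${FUNCTION_DIR} \
        -r /requirements.txt

Alternatively, you could move the pip install -r requirements.txt step below the second FROM python:3.6, so the packages are installed in the stage that actually runs.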
