Can't find Python packages installed in custom Docker image
I am creating a Docker container that runs Python 3.6.15 and the pip install function in my Dockerfile runs during the build process but when I try to execute functions within it after the build completes and I run it the 'installed' packages do not exist.
For more context, here is my Dockerfile. For clarity, I am building a Docker container that is being uploaded to AWS ECR to be used in a Lambda function but I don't think that's entirely relevant to this question (good for context though):
# Define function directory
ARG FUNCTION_DIR="/function"
FROM python:3.6 as build-image
# Install aws-lambda-cpp build dependencies
RUN apt-get clean && apt-get update && \
apt-get install -y \
g++ \
make \
cmake \
unzip \
libcurl4-openssl-dev \
ffmpeg
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy function code
COPY . ${FUNCTION_DIR}
# Install the runtime interface client
RUN /usr/local/bin/python -m pip install \
--target ${FUNCTION_DIR} \
awslambdaric
# Install the runtime interface client
COPY requirements.txt /requirements.txt
RUN /usr/local/bin/python -m pip install -r requirements.txt
# Multi-stage build: grab a fresh copy of the base image
FROM python:3.6
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the build image dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
COPY entry-point.sh /entry_script.sh
ADD aws-lambda-rie /usr/local/bin/aws-lambda-rie
ENTRYPOINT [ "/entry_script.sh" ]
CMD [ "app.handler" ]
When I run my docker run command in Terminal, I can see that it is collecting and installing the packages from the requirements.txt file in my project's root. I then try to run it and get an Import Module error. To troubleshoot, I ran some docker exec commands such as:
docker exec <container-id> bash -c "ls" # This returns the folder structure which looks great
docker exec <container-id> bash -c "pip freeze" # This only returns 'pip', 'wheel' and some other basic Python modules
The only way I could solve it was to run this command after building and starting the container:
docker exec <container-id> bash -c "/usr/local/bin/python -m pip install -r requirements.txt"
This manually installs the modules; they then show up in the freeze output and I can execute the code. This is not ideal, as I would like pip install to run correctly during the build process so there are fewer steps in the future as I make changes to the code.
Any pointers as to where I am going wrong would be great, thank you!
1 Answer
According to the Docker docs on multi-stage builds, every FROM instruction starts a new build stage. So the second FROM python:3.6 in the Dockerfile resets the image build, discarding everything installed in the first stage except what is explicitly copied. The subsequent COPY --from=build-image preserves what was in /function (the awslambdaric module, installed with --target), but not the requirements.txt modules, which went into the first stage's system site-packages and were never copied into the final image.
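A minimal sketch of a fix, assuming the dependencies should live alongside the function code (as Lambda expects): install requirements.txt with the same --target flag in the build stage, so the modules land in ${FUNCTION_DIR} and survive the COPY --from=build-image:

```dockerfile
# In the build stage, install requirements into FUNCTION_DIR as well,
# so the multi-stage COPY carries them into the final image.
COPY requirements.txt /requirements.txt
RUN /usr/local/bin/python -m pip install \
    --target ${FUNCTION_DIR} \
    -r /requirements.txt
```

An alternative is to repeat the pip install -r requirements.txt in the final stage instead, at the cost of redoing the install on every build of that stage.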