Difference between using pre-built ML Docker images and running pip install in the Dockerfile
I see there are many Docker images available for popular ML frameworks such as PyTorch and TensorFlow.
What is the difference between using these pre-built images and installing the libraries with pip install or conda install in the Dockerfile?
I usually build my custom Docker images from an nvidia/cuda base image, which supports the GPU, and later run a bash command to install the packages from my requirements.txt file, which contains the aforementioned libraries. Example:
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
...
# Activate virtual environment and install requirements
RUN /bin/bash -c "cd src \
&& source activate my_venv \
&& pip install -r requirements.txt"
I feel that using pip install gives me more freedom and lets me choose a base image that enables GPU usage with my preferred OS. I guess the difference might have to do with performance.
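For comparison, the pre-built-image route replaces the nvidia/cuda base with a framework image that already bundles CUDA, cuDNN, and the library itself, so only the project-specific dependencies remain to install. A minimal sketch (the image tag and file names are illustrative, not prescriptive — check Docker Hub for currently supported tags):

```dockerfile
# Start from an official PyTorch runtime image; CUDA, cuDNN,
# and torch are already installed and tested together.
FROM pytorch/pytorch:1.5-cuda10.1-cudnn7-runtime

WORKDIR /src

# Install only the remaining project dependencies.
# requirements.txt here would NOT list torch itself.
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
CMD ["python", "main.py"]
```

The trade-off is exactly the one described above: the tag pins you to the OS, Python version, and CUDA combination the maintainers chose, in exchange for not having to get that combination right yourself.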
There is no fundamental difference. Using a pre-built image saves you from misconfiguring the Docker environment or missing dependencies, and ensures safer execution (the whole point of Docker, I'd say). Your approach is fine if you want full control over the image being built.