Using cache for a CMake build process in docker build
I need an application in a Docker image that requires specific versions of some libraries that have to be built from source.
So I am building them during the Docker build process.
The problem is that this takes a long time (about 30 minutes).
I am wondering whether it's possible to save this work to a cache layer and skip it the next time the image is built.
Here is the critical part of the Dockerfile:
ADD https://sqlite.org/2022/sqlite-autoconf-3380200.tar.gz sqlite-autoconf-3380200.tar.gz
RUN tar -xvzf sqlite-autoconf-3380200.tar.gz
WORKDIR sqlite-autoconf-3380200
RUN ./configure
RUN make
RUN make install
WORKDIR /tmp
ADD https://download.osgeo.org/proj/proj-9.0.0.tar.gz proj-9.0.0.tar.gz
RUN tar -xvzf proj-9.0.0.tar.gz
WORKDIR proj-9.0.0
RUN mkdir build
WORKDIR build
RUN cmake ..
RUN cmake --build .
RUN cmake --build . --target install
RUN projsync --system-directory --list-files
1 Answer
The important detail about Docker layer caching is that, if any of the previous steps have changed, then all of the following steps will be rebuilt. So for your setup, if you change anything in one of the earlier dependencies, it will cause all of the later steps to be rebuilt again.
This is a case where Docker multi-stage builds can help. The idea is that you'd build each library in its own image, and therefore each library build can be independently cached. You can then copy all of the build results into a final image.
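For orientation, here is a rough skeleton of the Dockerfile this produces. The stage names (base, build-deps, sqlite, proj, app) and the Ubuntu base image are my own choices rather than anything the approach requires; each stage is filled in below.

# Rough skeleton; every FROM line starts a new, independently cached stage
FROM ubuntu:22.04 AS base
FROM base AS build-deps
FROM build-deps AS sqlite
FROM build-deps AS proj
FROM build-deps AS app
FROM base
# ...COPY --from=app /usr/local /usr/local...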
The specific approach I'll describe here assumes (a) all components install into /usr/local, (b) /usr/local is initially empty, and (c) there aren't conflicts between the different library installations. You should be able to adapt it to other filesystem layouts. Everything below is in the same Dockerfile.
I'd make a very first stage selecting a base Linux-distribution image. If you know you'll always need to install something – TLS CA certificates, mandatory package updates – you can put it here. Having this helps ensure that everything is being built against a consistent base.
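A minimal sketch of that first stage, assuming an Ubuntu base image (use whatever distribution you actually need):

# Stage 1: a consistent base that every later stage builds on
FROM ubuntu:22.04 AS base
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
      ca-certificates \
 && rm -rf /var/lib/apt/lists/*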
Since you have multiple things you need to build, a next stage will install any build-time dependencies. The C toolchain and its dependencies are large, so having this separate saves time and space since the toolchain can be shared across the later stages.
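A sketch of the build-dependency stage; the exact package list is an assumption and depends on what your libraries need to compile (PROJ, for example, typically wants libtiff and libcurl development headers):

# Stage 2: build-time dependencies, shared by all of the build stages below
FROM base AS build-deps
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
      build-essential cmake libcurl4-openssl-dev libtiff-dev \
 && rm -rf /var/lib/apt/lists/*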
Now for each individual library, you have a separate build stage that downloads the source, builds it, and installs it into /usr/local. To actually build your application, you'll need the C toolchain, plus you'll also need these various libraries.
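Reusing the commands from your Dockerfile, the two library stages might look like the sketch below. I'm assuming the PROJ build needs the SQLite you just built, which is why the proj stage copies /usr/local in from the sqlite stage; your projsync step would presumably go at the end of the proj stage so its downloads also land under /usr/local.

# Stage 3: build SQLite into /usr/local
FROM build-deps AS sqlite
WORKDIR /tmp
ADD https://sqlite.org/2022/sqlite-autoconf-3380200.tar.gz sqlite-autoconf-3380200.tar.gz
RUN tar -xvzf sqlite-autoconf-3380200.tar.gz
WORKDIR /tmp/sqlite-autoconf-3380200
RUN ./configure
RUN make
RUN make install

# Stage 4: build PROJ into /usr/local, against the SQLite built above
FROM build-deps AS proj
COPY --from=sqlite /usr/local /usr/local
WORKDIR /tmp
ADD https://download.osgeo.org/proj/proj-9.0.0.tar.gz proj-9.0.0.tar.gz
RUN tar -xvzf proj-9.0.0.tar.gz
# WORKDIR creates the build directory if it does not exist yet
WORKDIR /tmp/proj-9.0.0/build
RUN cmake ..
RUN cmake --build .
RUN cmake --build . --target install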
Once you've done all of this, in the app image, the /usr/local tree will have all of the installed libraries (COPYed from the previous image) plus your application. So for the final stage, start from the original OS image (without the C toolchain) and COPY the /usr/local tree in (without the original sources).
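Concretely, the last two stages could look roughly like this; the application build commands and the final CMD are placeholders for whatever your project actually does:

# Stage 5: build the application with the toolchain plus the libraries
FROM build-deps AS app
COPY --from=sqlite /usr/local /usr/local
COPY --from=proj /usr/local /usr/local
WORKDIR /src
COPY . .
# placeholder: replace with your application's real build and install steps
RUN cmake . && cmake --build . && cmake --build . --target install

# Final stage: runtime image, no C toolchain and no sources
FROM base
COPY --from=app /usr/local /usr/local
# refresh the dynamic linker cache so the libraries in /usr/local/lib are found
RUN ldconfig
# placeholder entry point
CMD ["your-app"]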
Let's say you update to a newer patch version of proj. In the sqlite path, the base and build-deps layers haven't changed and the ADD and RUN commands are the same, so this stage runs entirely from cache. proj is rebuilt. That will cause the COPY --from=proj step to invalidate the cache in the app stage, and you'll rebuild your application against the newer library.
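Nothing special is needed to take advantage of this; you just re-run the same build command after editing the Dockerfile (this assumes BuildKit, the default builder in current Docker releases, and an image tag of your choosing):

docker build -t myapp .
# bump proj-9.0.0 to the newer patch release in the Dockerfile, then rebuild:
docker build -t myapp .
# base, build-deps and the sqlite stage come from cache;
# only the proj, app and final stages are rebuilt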