Bazel remote cache on a local disk

Posted on 2025-02-06 08:57:55


I'm trying to better understand and leverage remote caching.

I'm running bazel build in a Docker container, while specifying a remote cache directory on the local disk:

docker run --rm -it -v $PWD:/work -w /work bazel:latest \
  bazel build --disk_cache=.bazel-disk-cache //...

At the end of this run, the remote cache is populated in the .bazel-disk-cache directory on the local disk.

Then I ran the same command again, this time in a new container instance with an empty local cache but with the remote cache present. However, it took almost the same amount of time to complete as the first run.

Is that expected? I was hoping to reduce the build time when using the remote cache. What am I missing?
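As a quick sanity check between runs (a sketch, not from the original post; the directory name is taken from the command above), you can inspect what the first build left behind. Bazel's disk cache stores an action cache (ac/) and a content-addressable store (cas/):

```shell
# Sketch only: verify the disk cache survived the first container run.
# .bazel-disk-cache is the directory passed to --disk_cache above.
CACHE_DIR=${CACHE_DIR:-.bazel-disk-cache}
# Bazel's disk cache keeps an action cache (ac/) and a content store (cas/).
ls "$CACHE_DIR" 2>/dev/null || echo "cache dir not populated yet"
du -sh "$CACHE_DIR" 2>/dev/null || true
```

If ac/ and cas/ are present and non-empty, the cache itself was written; the remaining question is whether the second build actually hits it.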


Comments (1)

小糖芽 2025-02-13 08:57:55


I did the following and did experience a reduced build time.

Start the Bazel remote cache (buchgr/bazel-remote) with Docker, with the metrics endpoint enabled; note that --max_size is specified in GiB:

docker run -u 1000:1000 -v $(pwd)/bzl-cache:/data -p 9090:8080 -p 9092:9092 buchgr/bazel-remote-cache --max_size 5 --enable_endpoint_metrics

Check the Bazel Remote Cache container IP address

docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <CONTAINER_ID>
# assuming it is 172.17.0.2

Check the current status of the cache

curl 172.17.0.2:8080/status

This returns something like the following (the numbers will differ depending on your setup and on any data already in the Bazel remote cache):

{
    "CurrSize": 7651328,
    "UncompressedSize": 18616320,
    "ReservedSize": 0,
    "MaxSize": 5368709120,
    "NumFiles": 14,
    "ServerTime": 1690295548,
    "GitCommit": "dc4aeace0af5b893c96bd994a816dfbaba9b18c2",
    "NumGoroutines": 9
}
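For scripting, the same status JSON can be post-processed instead of read by eye. A minimal sketch (assuming python3 is available on the host; the sample values are copied from the response shown above):

```shell
# Fetch the cache status and summarize it; the sample response shown above is
# inlined here so the snippet runs without a live cache. In a real setup:
#   status=$(curl -s 172.17.0.2:8080/status)
status='{"CurrSize": 7651328, "UncompressedSize": 18616320, "MaxSize": 5368709120, "NumFiles": 14}'
echo "$status" | python3 -c '
import json, sys
s = json.load(sys.stdin)
print("files:", s["NumFiles"])
print("used MiB: %.1f of %.0f" % (s["CurrSize"] / 2**20, s["MaxSize"] / 2**20))
print("compression ratio: %.2fx" % (s["UncompressedSize"] / s["CurrSize"]))
'
```

Watching CurrSize and NumFiles before and after a build is an easy way to confirm the cache is actually being written to.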

Next, spin up another container and build something with Bazel; I compiled Redis (a local target):

bazel build --remote_cache=http://172.17.0.2:8080 @redis//:build

The Bazel remote cache will report several PUT requests, and data will be placed in the cache. In my case it took roughly 2 minutes to complete. After that I cleaned the local outputs and ran the build again:

bazel clean && bazel build --remote_cache=http://172.17.0.2:8080 @redis//:build

This time it took only 5 seconds, producing GET requests on the Bazel remote cache as all the data was downloaded from there. To produce more load I ran a bunch of parallel calls against the cache, with almost the same result: roughly 5 seconds, occasionally 7 seconds due to the load on the cache from handling all the connections.
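The parallel-load experiment can be sketched like this. BUILD_CMD is a placeholder of mine, not from the original answer; in the real run it would be the bazel invocation shown earlier, ideally with a separate --output_base per copy so the clients don't contend for one workspace lock:

```shell
# Sketch: fire several builds at the cache in parallel and time them.
# BUILD_CMD is a stand-in; replace it with the real invocation, e.g.
#   bazel build --remote_cache=http://172.17.0.2:8080 @redis//:build
BUILD_CMD=${BUILD_CMD:-"echo simulated build"}
start=$(date +%s)
for i in 1 2 3 4; do
  sh -c "$BUILD_CMD" > "/tmp/build_$i.log" 2>&1 &
done
wait
end=$(date +%s)
echo "4 parallel runs took $((end - start))s"
```

With the default placeholder this finishes immediately; swapping in the real bazel command reproduces the concurrent-client load described above.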
