Improving ar archive performance

Posted 2024-08-15 22:45:06

I have a project with a huge amount of auto-generated code, which we build into a static library before linking into the final executable. We use gcc/gnat 5.04a. There are so many files that we have to break the job into batches and invoke ar multiple times to construct the library (to avoid the command-line length limit), e.g.:

 [echo] Archiving codegen                   
 [echo] Deleting old codegen archive                     
   [ar] Using ar found in /usr/bin          
   [ar] Batch 1 of 7 completed in 37.871 s  
   [ar] Batch 2 of 7 completed in 55.796 s  
   [ar] Batch 3 of 7 completed in 89.709 s  
   [ar] Batch 4 of 7 completed in 256.894 s 
   [ar] Batch 5 of 7 completed in 196.704 s 
   [ar] Batch 6 of 7 completed in 248.334 s 
   [ar] Batch 7 of 7 completed in 243.759 s 
   [ar] Archiving took: 1129.067 s          
   [ar] Using ranlib found in /usr/bin      
   [ar] Indexing took: 247.223 s            
 [echo] Done with codegen                   

We are looking for potential speed improvements. It appears that, as the archive grows, each batch takes longer and longer, presumably because ar has more existing members to search (for updates) before adding objects. This appears to be why deleting the archive first is quicker than updating the old archive in place. In our quest for more speed, we use the flags "qcS" with the ar command. According to the man page, "q" (which should be quick-append) is really a synonym for "r" (insert with replacement), "c" creates the archive (nothing special there), and "S" skips generating the symbol index (which we cover by running "ranlib" once at the end).

Is there any convenient way, using built-in tools, to make this operation faster? If "quick append" mode actually worked, that would probably be what we want, but alas.
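The batched scheme described above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the asker's actual build script: the `obj/` directory, the dummy object files, and the batch size of 1000 are all assumptions for the demo; `xargs -n` does the batching so each ar command line stays under the OS limit.

```shell
#!/bin/sh
# Sketch: build libcodegen.a in batches with "qcS", then index once at the end.
# Dummy files stand in for the generated objects; ar treats members as opaque bytes.
set -e
mkdir -p obj
for i in 1 2 3; do printf 'dummy' > "obj/gen$i.o"; done

# Start from scratch: appending to an ever-growing archive gets slower.
rm -f libcodegen.a

# xargs splits the file list into batches of at most 1000 names per ar call;
# "q" appends, "c" creates the archive silently, "S" skips the symbol index.
find obj -name '*.o' -print | xargs -n 1000 ar qcS libcodegen.a

# Build the symbol index exactly once, after all batches are in.
ranlib libcodegen.a

# List the archive members to confirm the result.
ar t libcodegen.a
```

With GNU binutils, "q" is documented as equivalent to "r", so the per-batch cost still grows with archive size; deferring the index with "S" + a single ranlib pass is the part of this scheme that reliably saves time.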


Comments (1)

云归处 2024-08-22 22:45:07

We found that a huge part of the timing issue was the location of the files being archived. The numbers above are for object and archive files located on a NAS device. Doing the same operation on a local hard disk (temporary storage) reduces the time to roughly 20-40 seconds. Copying all the files, archiving locally, and copying the result back takes longer than archiving directly on the NAS, but we're looking at moving our entire build process to local temporary storage, which should improve performance substantially.
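The copy-local / archive / copy-back variant measured in this answer can be sketched like so. Everything here is illustrative: a temp directory stands in for the NAS mount (the real path would be something like an NFS share), and a dummy object file stands in for the generated code.

```shell
#!/bin/sh
# Sketch: archive on fast local scratch storage, then push the result back.
set -e
NAS=$(mktemp -d)        # stand-in for the slow NAS share (placeholder path)
SCRATCH=$(mktemp -d)    # fast local scratch storage

# Fake one generated object on the "NAS" so the demo is self-contained.
mkdir -p "$NAS/obj"
printf 'dummy' > "$NAS/obj/gen1.o"

# 1. Pull the objects onto the local disk.
cp "$NAS"/obj/*.o "$SCRATCH"/

# 2. Archive and index locally, where seeks are cheap.
( cd "$SCRATCH" && ar qcS libcodegen.a *.o && ranlib libcodegen.a )

# 3. Push only the finished archive back to the NAS.
cp "$SCRATCH/libcodegen.a" "$NAS"/

# List the members of the archive now sitting on the "NAS".
ar t "$NAS/libcodegen.a"
rm -rf "$SCRATCH"
```

As the answer notes, the two extra copies can outweigh the faster local archiving for a one-off run; keeping the whole build tree on local storage avoids paying the copy cost on every build.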
