Would NTFS allocation blocks of 16KB or 32KB make compile time faster in comparison to the default 4KB?
3 Answers
I can't imagine that would make much of a difference - disk block size is pretty far removed from compile speed. With the amount of caching a modern OS does, it seems unlikely to be significant.
The real answer, of course, can be found by measuring it. Getting similar conditions between different machines with different disk block sizes might be tricky, though.
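As the answer says, the only reliable way to settle this is to measure it. A minimal timing harness, sketched in Python, might look like the following; the build commands and solution paths in the comments are placeholders, and taking the median of several runs is one simple way to damp cache-warming effects:

```python
import statistics
import subprocess
import time

def time_build(cmd, runs=3):
    """Run a build command several times and return the median
    wall-clock duration in seconds. The median damps outliers
    caused by OS file caching on the first run."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        durations.append(time.perf_counter() - start)
    return statistics.median(durations)

# Hypothetical comparison: the same clean build of one code base,
# checked out on two volumes formatted with different cluster sizes.
# median_4k = time_build(["msbuild", r"D:\src\app.sln", "/t:Rebuild"])
# median_32k = time_build(["msbuild", r"E:\src\app.sln", "/t:Rebuild"])
```

For the comparison to mean anything, the two volumes should sit on the same physical disk and the builds should start from the same clean state, otherwise hardware and caching differences will swamp any cluster-size effect.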
My guess would be that disk fragmentation would be the biggest factor in determining compile speeds (that is, for a code base of decent size).
Dashogun is correct, at least in my experience. Larger projects / solutions create a lot of small, temporary files on the way to producing the final binary(ies). I find that if I defragment my disk once a week or so (even if the defragmenter does not recommend it) I do not see the performance degradation that I experience if I fail to do that.
As a corroborating data point, a couple of people I work with have had the same experience.