How do I make Windows compile C++ as fast as Linux?


I know this is not so much a programming question but it is relevant.

I work on a fairly large cross platform project. On Windows I use VC++ 2008. On Linux I use gcc. There are around 40k files in the project. Windows is 10x to 40x slower than Linux at compiling and linking the same project. How can I fix that?

A single-change incremental build takes 20 seconds on Linux and > 3 minutes on Windows. Why? I can even install the 'gold' linker in Linux and get that time down to 7 seconds.

Similarly git is 10x to 40x faster on Linux than Windows.

In the git case it's possible git is not using Windows in the optimal way, but what about VC++? You'd think Microsoft would want to make their own developers as productive as possible, and faster compilation would go a long way toward that. Maybe they are trying to encourage developers to move to C#?

As a simple test, find a folder with lots of subfolders and do a simple

dir /s > c:\list.txt

on Windows. Do it twice and time the second run so it runs from the cache. Copy the files to Linux and do the equivalent 2 runs and time the second run.

ls -R > /tmp/list.txt
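
cmd has no built-in equivalent of the Unix time command, so one way to time the second (cached) Windows run is to wrap it in PowerShell's Measure-Command; a rough sketch, using the same C:\list.txt path as above:

powershell -Command "(Measure-Command { cmd /c 'dir /s > C:\list.txt' }).TotalSeconds"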

I have 2 workstations with the exact same specs: HP Z600s with 12 GB of RAM, 8 cores at 3.0 GHz. On a folder with ~400k files, Windows takes 40 seconds; Linux takes < 1 second.

Is there a registry setting I can set to speed up Windows? What gives?


A few slightly relevant links, relevant to compile times, not necessarily I/O.


雪化雨蝶 2024-12-04 22:34:00


Unless a hardcore Windows systems hacker comes along, you're not going to get more than partisan comments (which I won't do) and speculation (which is what I'm going to try).

  1. File system - You should try the same operations (including the dir) on the same filesystem. I came across this which benchmarks a few filesystems for various parameters.

  2. Caching. I once tried to run a compilation on Linux on a RAM disk and found that it was slower than running it on disk thanks to the way the kernel takes care of caching. This is a solid selling point for Linux and might be the reason why the performance is so different.

  3. Bad dependency specifications on Windows. Maybe the chromium dependency specifications for Windows are not as correct as for Linux. This might result in unnecessary compilations when you make a small change. You might be able to validate this using the same compiler toolchain on Windows.

沩ん囻菔务 2024-12-04 22:34:00

A few ideas:

  1. Disable 8.3 names. This can be a big factor on drives with a large number of files and a relatively small number of folders: fsutil behavior set disable8dot3 1
  2. Use more folders. In my experience, NTFS starts to slow down with more than about 1000 files per folder.
  3. Enable parallel builds with MSBuild; just add the "/m" switch, and it will automatically start one copy of MSBuild per CPU core.
  4. Put your files on an SSD -- helps hugely for random I/O.
  5. If your average file size is much greater than 4KB, consider rebuilding the filesystem with a larger cluster size that corresponds roughly to your average file size.
  6. Make sure the files have been defragmented. Fragmented files cause lots of disk seeks, which can cost you a factor of 40+ in throughput. Use the "contig" utility from sysinternals, or the built-in Windows defragmenter.
  7. If your average file size is small, and the partition you're on is relatively full, it's possible that you are running with a fragmented MFT, which is bad for performance. Also, files smaller than 1K are stored directly in the MFT. The "contig" utility mentioned above can help, or you may need to increase the MFT size. The following command will double it, to 25% of the volume: fsutil behavior set mftzone 2 Change the last number to 3 or 4 to increase the size by additional 12.5% increments. After running the command, reboot and then create the filesystem.
  8. Disable last access time: fsutil behavior set disablelastaccess 1
  9. Disable the indexing service
  10. Disable your anti-virus and anti-spyware software, or at least set the relevant folders to be ignored.
  11. Put your files on a different physical drive from the OS and the paging file. Using a separate physical drive allows Windows to use parallel I/Os to both drives.
  12. Have a look at your compiler flags. The Windows C++ compiler has a ton of options; make sure you're only using the ones you really need.
  13. Try increasing the amount of memory the OS uses for paged-pool buffers (make sure you have enough RAM first): fsutil behavior set memoryusage 2
  14. Check the Windows error log to make sure you aren't experiencing occasional disk errors.
  15. Have a look at Physical Disk related performance counters to see how busy your disks are. High queue lengths or long times per transfer are bad signs.
  16. The first 30% of disk partitions is much faster than the rest of the disk in terms of raw transfer time. Narrower partitions also help minimize seek times.
  17. Are you using RAID? If so, you may need to optimize your choice of RAID type (RAID-5 is bad for write-heavy operations like compiling)
  18. Disable any services that you don't need
  19. Defragment folders: copy all files to another drive (just the files), delete the original files, copy all folders to another drive (just the empty folders), then delete the original folders, defragment the original drive, copy the folder structure back first, then copy the files. When Windows builds large folders one file at a time, the folders end up being fragmented and slow. ("contig" should help here, too)
  20. If you are I/O bound and have CPU cycles to spare, try turning disk compression ON. It can provide some significant speedups for highly compressible files (like source code), with some cost in CPU.
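
Pulling a few of the command-line suggestions above together (items 1, 3, 8 and 13), a minimal batch sketch might look like the following. Run it from an elevated command prompt; some of these settings only take effect after a reboot, and "MySolution.sln" is just a placeholder name.

rem Item 1: disable 8.3 short-name generation
fsutil behavior set disable8dot3 1

rem Item 8: stop updating last-access timestamps
fsutil behavior set disablelastaccess 1

rem Item 13: let the OS use more memory for paged-pool buffers
fsutil behavior set memoryusage 2

rem Item 3: parallel build, one MSBuild worker per core
msbuild MySolution.sln /m
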
说好的呢 2024-12-04 22:34:00


NTFS saves the file access time every time. You can try disabling it:
"fsutil behavior set disablelastaccess 1"
(restart)

弱骨蛰伏 2024-12-04 22:34:00


The issue with Visual C++ is, as far as I can tell, that it is not a priority for the compiler team to optimize this scenario.
Their solution is that you use their precompiled header feature. This is what Windows-specific projects have done. It is not portable, but it works.
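
As a rough command-line sketch of that precompiled header feature (the file names pch.h, pch.cpp and foo.cpp are hypothetical; in a VC++ project you would set the equivalent /Yc and /Yu options in the project's C/C++ settings instead):

rem pch.cpp contains only: #include "pch.h"
rem Build the precompiled header once
cl /c /EHsc /Ycpch.h /Fppch.pch pch.cpp

rem Compile the remaining translation units against it
cl /c /EHsc /Yupch.h /Fppch.pch foo.cpp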

Furthermore, on Windows you typically have virus scanners, as well as system restore and search tools, that can ruin your build times completely if they monitor your build folder for you. Windows 7 Resource Monitor can help you spot it.
I have a reply here with some further tips for optimizing vc++ build times if you're really interested.

柠檬 2024-12-04 22:34:00


The difficulty in doing that is due to the fact that C++ tends to spread itself and the compilation process over many small, individual, files. That's something Linux is good at and Windows is not. If you want to make a really fast C++ compiler for Windows, try to keep everything in RAM and touch the filesystem as little as possible.

That's also how you'll make a faster Linux C++ compile chain, but it is less important in Linux because the file system is already doing a lot of that tuning for you.

The reason for this is due to Unix culture:
Historically file system performance has been a much higher priority in the Unix world than in Windows. Not to say that it hasn't been a priority in Windows, just that in Unix it has been a higher priority.

  1. Access to source code.

    You can't change what you can't control. Lack of access to Windows NTFS source code means that most efforts to improve performance have been through hardware improvements. That is, if performance is slow, you work around the problem by improving the hardware: the bus, the storage medium, and so on. You can only do so much if you have to work around the problem, not fix it.

    Access to Unix source code (even before open source) was more widespread. Therefore, if you wanted to improve performance you would address it in software first (cheaper and easier) and hardware second.

    As a result, there are many people in the world that got their PhDs by studying the Unix file system and finding novel ways to improve performance.

  2. Unix tends towards many small files; Windows tends towards a few (or a single) big file.

    Unix applications tend to deal with many small files. Think of a software development environment: many small source files, each with their own purpose. The final stage (linking) does create one big file, but that is a small percentage.

    As a result, Unix has highly optimized system calls for opening and closing files, scanning directories, and so on. The history of Unix research papers spans decades of file system optimizations that put a lot of thought into improving directory access (lookups and full-directory scans), initial file opening, and so on.

    Windows applications tend to open one big file, hold it open for a long time, close it when done. Think of MS-Word. msword.exe (or whatever) opens the file once and appends for hours, updates internal blocks, and so on. The value of optimizing the opening of the file would be wasted time.

    The history of Windows benchmarking and optimization has been on how fast one can read or write long files. That's what gets optimized.

    Sadly software development has trended towards the first situation. Heck, the best word processing system for Unix (TeX/LaTeX) encourages you to put each chapter in a different file and #include them all together.

  3. Unix is focused on high performance; Windows is focused on user experience

    Unix started in the server room: no user interface. The only thing users see is speed. Therefore, speed is a priority.

    Windows started on the desktop: Users only care about what they see, and they see the UI. Therefore, more energy is spent on improving the UI than performance.

  4. The Windows ecosystem depends on planned obsolescence. Why optimize software when new hardware is just a year or two away?

    I don't believe in conspiracy theories but if I did, I would point out that in the Windows culture there are fewer incentives to improve performance. The Windows business model depends on people buying new machines like clockwork. (That's why the stock price of thousands of companies is affected if MS ships an operating system late or if Intel misses a chip release date.) This means that there is an incentive to solve performance problems by telling people to buy new hardware, not by improving the real problem: slow operating systems. Unix comes from academia, where the budget is tight and you can get your PhD by inventing a new way to make file systems faster; rarely does someone in academia get points for solving a problem by issuing a purchase order. In Windows there is no conspiracy to keep software slow, but the entire ecosystem depends on planned obsolescence.

    Also, as Unix is open source (even when it wasn't, everyone had access to the source), any bored PhD student can read the code and become famous by making it better. That doesn't happen in Windows (MS does have a program that gives academics access to Windows source code, but it is rarely taken advantage of). Look at this selection of Unix-related performance papers: http://www.eecs.harvard.edu/margo/papers/ or look up the history of papers by Ousterhout, Henry Spencer, or others. Heck, one of the biggest (and most enjoyable to watch) debates in Unix history was the back and forth between Ousterhout and Seltzer: http://www.eecs.harvard.edu/margo/papers/usenix95-lfs/supplement/rebuttal.html
    You don't see that kind of thing happening in the Windows world. You might see vendors one-upping each other, but that seems to be much rarer lately, since the innovation seems to all be at the standards body level.

That's how I see it.

Update: If you look at the new compiler chains that are coming out of Microsoft, you'll be very optimistic, because much of what they are doing makes it easier to keep the entire toolchain in RAM and repeat less work. Very impressive stuff.

国际总奸 2024-12-04 22:34:00


I personally found that running a Windows virtual machine on Linux managed to remove a great deal of the I/O slowness in Windows, likely because the Linux VM was doing lots of caching that Windows itself was not.

Doing that I was able to speed up compile times of a large (250Kloc) C++ project I was working on from something like 15 minutes to about 6 minutes.

绅士风度i 2024-12-04 22:34:00


Incremental linking

If the VC 2008 solution is set up as multiple projects with .lib outputs, you need to set "Use Library Dependency Inputs"; this makes the linker link directly against the .obj files rather than the .lib. (And actually makes it incrementally link.)

Directory traversal performance

It's a bit unfair to compare directory crawling on the original machine with crawling a newly created directory with the same files on another machine. If you want an equivalent test, you should probably make another copy of the directory on the source machine. (It may still be slow, but that could be due to any number of things: disk fragmentation, short file names, background services, etc.) Although I think the perf issues for dir /s have more to do with writing the output than measuring actual file traversal performance. Even dir /s /b > nul is slow on my machine with a huge directory.

妞丶爷亲个 2024-12-04 22:34:00


I'm pretty sure it's related to the filesystem. I work on a cross-platform project for Linux and Windows where all the code is common except for where platform-dependent code is absolutely necessary. We use Mercurial, not git, so the "Linuxness" of git doesn't apply. Pulling in changes from the central repository takes forever on Windows compared to Linux, but I do have to say that our Windows 7 machines do a lot better than the Windows XP ones. Compiling the code after that is even worse on VS 2008. It's not just hg; CMake runs a lot slower on Windows as well, and both of these tools use the file system more than anything else.

The problem is so bad that most of our developers that work in a Windows environment don't even bother doing incremental builds anymore - they find that doing a unity build instead is faster.

Incidentally, if you want to dramatically decrease compilation times in Windows, I'd suggest the aforementioned unity build. It's a pain to implement correctly in the build system (I did it for our team in CMake), but once done, it automagically speeds things up for our continuous integration servers. Depending on how many binaries your build system is spitting out, you can get 1 to 2 orders of magnitude improvement. Your mileage may vary. In our case I think it sped up the Linux builds threefold and the Windows ones by about a factor of 10, but we have a lot of shared libraries and executables (which decreases the advantages of a unity build).
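
As a crude illustration of the unity build idea (just a sketch: the src\*.cpp pattern and the unity.cpp name are placeholders, and a real setup, such as the CMake one described above, also has to deal with symbol and macro clashes between the merged files):

rem In a batch file: generate one translation unit that includes every source file
(for %%f in (src\*.cpp) do @echo #include "%%f") > unity.cpp

rem Compile the single unity file instead of each .cpp separately
cl /c /EHsc unity.cpp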

夜雨飘雪 2024-12-04 22:34:00


How do you build your large cross platform project?
If you are using common makefiles for Linux and Windows, you could easily degrade Windows performance by a factor of 10 if the makefiles are not designed to be fast on Windows.

I just fixed some makefiles of a cross-platform project using common (GNU) makefiles for Linux and Windows. Make was starting a sh.exe process for each line of a recipe, causing the performance difference between Windows and Linux!

According to the GNU make documentation

.ONESHELL:

should solve the issue, but this feature is (currently) not supported for Windows make. So rewriting the recipes to be on single logical lines (e.g. by adding ;\ or \ at the end of the current editor lines) worked very well!

小帐篷 2024-12-04 22:34:00


IMHO this is all about disk I/O performance. The order of magnitude suggests a lot of the operations go to disk under Windows whereas they're handled in memory under Linux, i.e. Linux is caching better. Your best option under Windows will be to move your files onto a fast disk, server or filesystem. Consider buying a solid-state drive or moving your files to a ramdisk or fast NFS server.

I ran the directory traversal tests and the results are very close to the compilation times reported, suggesting this has nothing to do with CPU processing times or compiler/linker algorithms at all.

Measured times as suggested above traversing the chromium directory tree:

  • Windows 7 Home Premium (8 GB RAM) on NTFS: 32 seconds
  • Ubuntu 11.04 Linux (2 GB RAM) on NTFS: 10 seconds
  • Ubuntu 11.04 Linux (2 GB RAM) on ext4: 0.6 seconds

For the tests I pulled the chromium sources (both under win/linux)

git clone http://github.com/chromium/chromium.git 
cd chromium
git checkout remotes/origin/trunk 

To measure the time I ran

ls -lR > ../list.txt ; time ls -lR > ../list.txt # bash
dir -Recurse > ../list.txt ; (measure-command { dir -Recurse > ../list.txt }).TotalSeconds  #Powershell

I did turn off access timestamps and my virus scanner, and increased the cache manager settings under Windows (>2 GB RAM) - all without any noticeable improvements. The fact of the matter is, out of the box Linux performed 50x better than Windows with a quarter of the RAM.

For anybody who wants to contend that the numbers are wrong - for whatever reason - please give it a try and post your findings.

_蜘蛛 2024-12-04 22:34:00


Try using jom instead of nmake

Get it here:
https://github.com/qt-labs/jom

The fact is that nmake uses only one of your cores; jom is a clone of nmake that makes use of multicore processors.

GNU make does that out of the box thanks to the -j option, which might be a reason for its speed advantage over Microsoft's nmake.

jom works by executing different make commands in parallel on different processors/cores.
Try it yourself and feel the difference!
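
A minimal usage sketch, assuming a Visual Studio command prompt in a directory that already has an nmake-style Makefile (jom is invoked exactly where you would otherwise run nmake):

rem Instead of: nmake
jom

rem Targets are passed the same way as with nmake, e.g.
jom clean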

可是我不能没有你 2024-12-04 22:34:00


I want to add just one observation about using GNU make and other tools from the MinGW toolset on Windows: they seem to resolve hostnames even when the tools cannot communicate via IP at all. I would guess this is caused by some initialisation routine of the MinGW runtime. Running a local DNS proxy helped me to improve the compilation speed with these tools.

Before that, I had a big headache because the build speed dropped by a factor of 10 or so whenever I opened a VPN connection in parallel. In that case all these DNS lookups went through the VPN.

This observation might also apply to other build tools, not only MinGW-based ones, and it may have changed in the latest MinGW version in the meantime.

情绪操控生活 2024-12-04 22:34:00


I recently found another way to speed up compilation by about 10% on Windows when using GNU make: replacing the MinGW bash.exe with the version from win-bash.

(win-bash is not very comfortable for interactive editing, though.)
