Compiling with g++ using multiple cores

Published 2024-07-11 10:30:02

Quick question: what is the compiler flag to allow g++ to spawn multiple instances of itself in order to compile large projects quicker (for example 4 source files at a time for a multi-core CPU)?


9 Answers

十雾 2024-07-18 10:30:02

You can do this with make - with GNU make it is the -j flag (this will also help on a uniprocessor machine).

For example if you want 4 parallel jobs from make:

make -j 4

You can also run gcc in a pipe with

gcc -pipe

This will pipeline the compile stages, which will also help keep the cores busy.

If you have additional machines available too, you might check out distcc, which will farm compiles out to those as well.

盗心人 2024-07-18 10:30:02

There is no such flag, and having one runs against the Unix philosophy of having each tool perform just one function and perform it well. Spawning compiler processes is conceptually the job of the build system. What you are probably looking for is the -j (jobs) flag to GNU make, a la

make -j4

Or you can use pmake or similar parallel make systems.

贪了杯 2024-07-18 10:30:02

If using make, pass -j. From man make:

  -j [jobs], --jobs[=jobs]
       Specifies the number of jobs (commands) to run simultaneously.  
       If there is more than one -j option, the last one is effective.
       If the -j option is given without an argument, make will not limit the
       number of jobs that can run simultaneously.

And most notably, if you want to script or identify the number of cores you have available (depending on your environment, and if you run in many environments, this can change a lot) you may use the ubiquitous Python function cpu_count():

https://docs.python.org/3/library/multiprocessing.html#multiprocessing.cpu_count

Like this:

make -j $(python3 -c 'import multiprocessing as mp; print(int(mp.cpu_count() * 1.5))')

If you're asking why 1.5 I'll quote user artless-noise in a comment above:

The 1.5 number is because of the noted I/O bound problem. It is a rule of thumb. About 1/3 of the jobs will be waiting for I/O, so the remaining jobs will be using the available cores. A number greater than the cores is better and you could even go as high as 2x.
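If Python is not available, the same 1.5x heuristic can be computed with plain shell arithmetic (a sketch; assumes nproc from GNU coreutils):

```shell
# 1.5x the core count, truncated to an integer, clamped to at least 1
cores=$(nproc)
jobs=$(( cores * 3 / 2 ))
[ "$jobs" -ge 1 ] || jobs=1
echo "$jobs"
# then: make -j "$jobs"
```

Shell arithmetic is integer-only, so the multiplication is written as `* 3 / 2` rather than `* 1.5`.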

梦冥 2024-07-18 10:30:02

make will do this for you. Investigate the -j and -l switches in the man page. I don't think g++ is parallelizable.

安稳善良 2024-07-18 10:30:02

People have mentioned make but bjam also supports a similar concept. Using bjam -jx instructs bjam to build up to x concurrent commands.

We use the same build scripts on Windows and Linux and using this option halves our build times on both platforms. Nice.

心房的律动 2024-07-18 10:30:02

distcc can also be used to distribute compiles not only on the current machine, but also on other machines in a farm that have distcc installed.

腹黑女流氓 2024-07-18 10:30:02

You can use make -j$(nproc) . This command is used to build a project using the make build system with multiple jobs running in parallel.

For example, if your system has 4 CPU cores, running make -j$(nproc) would instruct make to run 4 jobs concurrently, one on each CPU core, speeding up the build process.

You can also see how many cores you have by running this command:

echo $(nproc)

慈悲佛祖 2024-07-18 10:30:02

I'm not sure about g++, but if you're using GNU Make then "make -j N" (where N is the number of threads make can create) will allow make to run multiple g++ jobs at the same time (so long as the files do not depend on each other).

灼痛 2024-07-18 10:30:02

GNU parallel

I was making a synthetic compilation benchmark and couldn't be bothered to write a Makefile, so I used:

sudo apt-get install parallel
ls | grep -E '\.c$' | parallel -t --will-cite "gcc -c -o '{.}.o' '{}'"

Explanation:

  • {.} takes the input argument and removes its extension
  • -t prints out the commands being run to give us an idea of progress
  • --will-cite removes the request to cite the software if you publish results using it...

parallel is so convenient that I could even do a timestamp check myself:

ls | grep -E '\.c$' | parallel -t --will-cite "\
  if ! [ -f '{.}.o' ] || [ '{}' -nt '{.}.o' ]; then
    gcc -c -o '{.}.o' '{}'
  fi
"

xargs -P can also run jobs in parallel, but it is a bit less convenient to do the extension manipulation or run multiple commands with it: Calling multiple commands through xargs

Parallel linking was asked at: Can gcc use multiple cores when linking?

TODO: I think I read somewhere that compilation can be reduced to matrix multiplication, so maybe it is also possible to speed up single file compilation for large files. But I can't find a reference now.

Tested in Ubuntu 18.10.
