Using gold instead of ld - any experiences?

Posted 2024-09-14 08:46:57


Has anyone tried to use gold instead of ld?

gold promises to be much faster than ld, so it may help speed up test cycles for large C++ applications, but can it be used as a drop-in replacement for ld?

Can gcc/g++ directly call gold?

Are there any known bugs or problems?

Although gold has been part of GNU binutils for a while, I have found almost no "success stories" or even "howtos" on the web.

(Update: added links to gold and blog entry explaining it)


Comments (8)

攒眉千度 2024-09-21 08:46:57


At the moment I am compiling bigger projects with it on Ubuntu 10.04. Here you can install and integrate it easily with the binutils-gold package (if you remove that package, you get your old ld back). GCC will then automatically use gold.

Some experiences:

  • gold doesn't search in /usr/local/lib
  • gold doesn't assume libs like pthread or rt; I had to add them by hand
  • it is faster and needs less memory (the latter is important for big C++ projects with a lot of Boost etc.)

What does not work: it cannot link kernel code, and therefore no kernel modules. Ubuntu builds those automatically via DKMS when it updates proprietary drivers like fglrx, and this fails with ld-gold (you have to remove gold, restart DKMS, and reinstall ld-gold).
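On Ubuntu the switch described above can be sketched as follows (`app.cpp` is a placeholder source file, not from the original answer):

```shell
# Install gold and make it the system linker (removing the package restores ld).
sudo apt-get install binutils-gold

# Verify which linker is now active; with binutils-gold installed this should
# report "GNU gold" rather than "GNU ld".
ld --version | head -n 1

# Since gold does not assume pthread/rt, pass them explicitly when needed.
g++ app.cpp -o app -lpthread -lrt
```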

無心 2024-09-21 08:46:57


As it took me a little while to find out how to selectively use gold (i.e. not system-wide using a symlink), I'll post the solution here. It's based on http://code.google.com/p/chromium/wiki/LinuxFasterBuilds#Linking_using_gold .

  1. Make a directory where you can put a gold glue script. I am using ~/bin/gold/.
  2. Put the following glue script there and name it ~/bin/gold/ld:

    #!/bin/bash
    gold "$@"
    

    Obviously, make it executable, chmod a+x ~/bin/gold/ld.

  3. Change your calls to gcc to gcc -B$HOME/bin/gold which makes gcc look in the given directory for helper programs like ld and thus uses the glue script instead of the system-default ld.
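The three steps above can be condensed into a few commands (a sketch; `hello.c` is a placeholder source file):

```shell
mkdir -p ~/bin/gold

# Glue script that forwards all arguments to gold.
printf '#!/bin/bash\nexec gold "$@"\n' > ~/bin/gold/ld
chmod a+x ~/bin/gold/ld

# Point gcc at the directory containing the ld stand-in.
gcc -B"$HOME/bin/gold" hello.c -o hello
```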

溺渁∝ 2024-09-21 08:46:57


Can gcc/g++ directly call gold?

Just to complement the other answers: there is a gcc option, -fuse-ld=gold (see the gcc documentation). Note, though, that AFAIK gcc can be configured at build time in a way that makes this option have no effect.
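Assuming gold is installed, one way to check whether -fuse-ld=gold actually takes effect is to ask the linker for its identity during the link (note that with --version the linker typically prints its banner and stops, so no binary is produced; `hello.c` is a placeholder):

```shell
echo 'int main(void){return 0;}' > hello.c

# The first line of output should begin with "GNU gold" if gold was used,
# or "GNU ld" if the option had no effect.
gcc -fuse-ld=gold -Wl,--version hello.c
```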

挽袖吟 2024-09-21 08:46:57


Minimal synthetic benchmark: LD vs gold vs LLVM LLD

Outcome:

  • gold was about 3x to 4x faster than LD for all values I tried, when using -Wl,--threads -Wl,--thread-count=$(nproc) to enable multithreading
  • LLD was about 2x faster than gold!

Tested on:

  • Ubuntu 20.04, GCC 9.3.0, binutils 2.34, LLD 10 (sudo apt install lld)
  • Lenovo ThinkPad P51 laptop, Intel Core i7-7820HQ CPU (4 cores / 8 threads), 2x Samsung M471A2K43BB1-CRC RAM (2x 16GiB), Samsung MZVLB512HAJQ-000L7 SSD (3,000 MB/s).

Simplified description of the benchmark parameters:

  • 1: number of object files providing symbols
  • 2: number of symbols per symbol provider object file
  • 3: number of object files using all provided symbols

Results for different benchmark parameters:

10000 10 10
nogold:  wall=4.35s user=3.45s system=0.88s 876820kB
gold:    wall=1.35s user=1.72s system=0.46s 739760kB
lld:     wall=0.73s user=1.20s system=0.24s 625208kB

1000 100 10
nogold:  wall=5.08s user=4.17s system=0.89s 924040kB
gold:    wall=1.57s user=2.18s system=0.54s 922712kB
lld:     wall=0.75s user=1.28s system=0.27s 664804kB

100 1000 10
nogold:  wall=5.53s user=4.53s system=0.95s 962440kB
gold:    wall=1.65s user=2.39s system=0.61s 987148kB
lld:     wall=0.75s user=1.30s system=0.25s 704820kB

10000 10 100
nogold:  wall=11.45s user=10.14s system=1.28s 1735224kB
gold:    wall=4.88s user=8.21s system=0.95s 2180432kB
lld:     wall=2.41s user=5.58s system=0.74s 2308672kB

1000 100 100
nogold:  wall=13.58s user=12.01s system=1.54s 1767832kB
gold:    wall=5.17s user=8.55s system=1.05s 2333432kB
lld:     wall=2.79s user=6.01s system=0.85s 2347664kB

100 1000 100
nogold:  wall=13.31s user=11.64s system=1.62s 1799664kB
gold:    wall=5.22s user=8.62s system=1.03s 2393516kB
lld:     wall=3.11s user=6.26s system=0.66s 2386392kB
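As a quick sanity check (added here, not part of the original benchmark), the speedup factors implied by the wall-clock numbers above can be recomputed:

```python
# Wall-clock times (seconds) copied from the benchmark table above.
times = {
    "10000 10 10":  {"ld": 4.35,  "gold": 1.35, "lld": 0.73},
    "1000 100 10":  {"ld": 5.08,  "gold": 1.57, "lld": 0.75},
    "100 1000 10":  {"ld": 5.53,  "gold": 1.65, "lld": 0.75},
    "10000 10 100": {"ld": 11.45, "gold": 4.88, "lld": 2.41},
    "1000 100 100": {"ld": 13.58, "gold": 5.17, "lld": 2.79},
    "100 1000 100": {"ld": 13.31, "gold": 5.22, "lld": 3.11},
}

def speedups(t):
    """Return (gold vs ld, lld vs gold) speedup factors."""
    return t["ld"] / t["gold"], t["gold"] / t["lld"]

for params, t in times.items():
    g, l = speedups(t)
    print(f"{params:>13}: gold {g:.1f}x faster than ld, lld {l:.1f}x faster than gold")
```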

This is the script that generates all the objects for the link tests:

generate-objects

#!/usr/bin/env bash
set -eu

# CLI args.

# Each of those files contains n_ints_per_file ints.
n_int_files="${1:-10}"
n_ints_per_file="${2:-10}"

# Each function adds all ints from all files.
# This leads to n_int_files x n_ints_per_file x n_funcs relocations.
n_funcs="${3:-10}"

# Do a debug build, since it is for debug builds that link time matters the most,
# as the user will be recompiling often.
cflags='-ggdb3 -O0 -std=c99 -Wall -Wextra -pedantic'

# Cleanup previous generated files objects.
./clean

# Generate i_*.c, ints.h and int_sum.h
rm -f ints.h
echo 'return' > int_sum.h
int_file_i=0
while [ "$int_file_i" -lt "$n_int_files" ]; do
  int_i=0
  int_file="${int_file_i}.c"
  rm -f "$int_file"
  while [ "$int_i" -lt "$n_ints_per_file" ]; do
    echo "${int_file_i} ${int_i}"
    int_sym="i_${int_file_i}_${int_i}"
    echo "unsigned int ${int_sym} = ${int_file_i};" >> "$int_file"
    echo "extern unsigned int ${int_sym};" >> ints.h
    echo "${int_sym} +" >> int_sum.h
    int_i=$((int_i + 1))
  done
  int_file_i=$((int_file_i + 1))
done
echo '1;' >> int_sum.h

# Generate funcs.h and main.c.
rm -f funcs.h
cat <<EOF >main.c
#include "funcs.h"

int main(void) {
return
EOF
i=0
while [ "$i" -lt "$n_funcs" ]; do
  func_sym="f_${i}"
  echo "${func_sym}() +" >> main.c
  echo "int ${func_sym}(void);" >> funcs.h
  cat <<EOF >"${func_sym}.c"
#include "ints.h"

int ${func_sym}(void) {
#include "int_sum.h"
}
EOF
  i=$((i + 1))
done
cat <<EOF >>main.c
1;
}
EOF

# Generate *.o
ls | grep -E '\.c$' | parallel --halt now,fail=1 -t --will-cite "gcc $cflags -c -o '{.}.o' '{}'"

GitHub upstream.

Note that the object file generation can be quite slow, since each C file can be quite large.

Given an input of type:

./generate-objects [n_int_files [n_ints_per_file [n_funcs]]]

it generates:

main.c

#include "funcs.h"

int main(void) {
    return f_0() + f_1() + ... + f_<n_funcs>();
}

f_0.c, f_1.c, ..., f_<n_funcs>.c

extern unsigned int i_0_0;
extern unsigned int i_0_1;
...
extern unsigned int i_1_0;
extern unsigned int i_1_1;
...
extern unsigned int i_<n_int_files>_<n_ints_per_file>;

int f_0(void) {
    return
    i_0_0 +
    i_0_1 +
    ...
    i_1_0 +
    i_1_1 +
    ...
    i_<n_int_files>_<n_ints_per_file>
}

0.c, 1.c, ..., <n_int_files>.c

unsigned int i_0_0 = 0;
unsigned int i_0_1 = 0;
...
unsigned int i_0_<n_ints_per_file> = 0;

which leads to:

n_int_files x n_ints_per_file x n_funcs

relocations on the link.

Then I compared:

gcc -ggdb3 -O0 -std=c99 -Wall -Wextra -pedantic               -o main *.o
gcc -ggdb3 -O0 -std=c99 -Wall -Wextra -pedantic -fuse-ld=gold -Wl,--threads -Wl,--thread-count=`nproc` -o main *.o
gcc -ggdb3 -O0 -std=c99 -Wall -Wextra -pedantic -fuse-ld=lld  -o main *.o

Some limits I've been trying to mitigate when selecting the test parameters:

  • at 100k C files, both methods get failed mallocs occasionally
  • GCC cannot compile a function with 1M additions

I have also observed a 2x speedup in the debug build of gem5: https://gem5.googlesource.com/public/gem5/+/fafe4e80b76e93e3d0d05797904c19928587f5b5

Similar question: https://unix.stackexchange.com/questions/545699/what-is-the-gold-linker

Phoronix benchmarks

Phoronix did some benchmarking in 2017 for some real world projects, but for the projects they examined, the gold gains were not so significant: https://www.phoronix.com/scan.php?page=article&item=lld4-linux-tests&num=2 (archive).

Known incompatibilities

LLD benchmarks

At https://lld.llvm.org/ they give build times for a few well-known projects, with results similar to my synthetic benchmarks. Unfortunately, project/linker versions are not given. In their results:

  • gold was about 3x/4x faster than LD
  • LLD was 3x/4x faster than gold, so a greater speedup than in my synthetic benchmark

They comment:

This is a link time comparison on a 2-socket 20-core 40-thread Xeon E5-2680 2.80 GHz machine with an SSD drive. We ran gold and lld with or without multi-threading support. To disable multi-threading, we added -no-threads to the command lines.

and results look like:

Program      | Size     | GNU ld  | gold -j1 | gold    | lld -j1 |    lld
-------------|----------|---------|----------|---------|---------|-------
  ffmpeg dbg |   92 MiB |   1.72s |   1.16s  |   1.01s |   0.60s |  0.35s
  mysqld dbg |  154 MiB |   8.50s |   2.96s  |   2.68s |   1.06s |  0.68s
   clang dbg | 1.67 GiB | 104.03s |  34.18s  |  23.49s |  14.82s |  5.28s
chromium dbg | 1.14 GiB | 209.05s |  64.70s  |  60.82s |  27.60s | 16.70s
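For reference, the end-to-end speedups implied by that table can be recomputed (a small script added here, not part of the original comparison):

```python
# Link times (seconds) copied from the lld.llvm.org table above
# (multithreaded gold/lld columns).
lld_table = {
    "ffmpeg dbg":   {"ld": 1.72,   "gold": 1.01,  "lld": 0.35},
    "mysqld dbg":   {"ld": 8.50,   "gold": 2.68,  "lld": 0.68},
    "clang dbg":    {"ld": 104.03, "gold": 23.49, "lld": 5.28},
    "chromium dbg": {"ld": 209.05, "gold": 60.82, "lld": 16.70},
}

for prog, t in lld_table.items():
    print(f"{prog:>12}: gold {t['ld'] / t['gold']:.1f}x vs ld, "
          f"lld {t['gold'] / t['lld']:.1f}x vs gold")
```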
み格子的夏天 2024-09-21 08:46:57


As a Samba developer, I have been using the gold linker almost exclusively on Ubuntu, Debian, and Fedora for several years now. My assessment:

  • gold is many times (felt: 5-10 times) faster than the classical linker.
  • Initially there were a few problems, but they have been gone since roughly Ubuntu 12.04.
  • The gold linker even found some dependency problems in our code, since it seems to be more correct than the classical one with respect to some details. See, e.g. this Samba commit.

I have not used gold selectively, but have been using symlinks or the alternatives mechanism if the distribution provides it.

爱她像谁 2024-09-21 08:46:57


You could symlink ld to gold (in a local binary directory rather than system-wide, to avoid overwriting the real ld):

ln -s `which gold` ~/bin/ld

or

ln -s `which gold` /usr/local/bin/ld
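Note that for the per-user variant to take effect, the directory holding the symlink must precede the system directories in PATH (a sketch, assuming gold is installed):

```shell
mkdir -p ~/bin
ln -s "$(which gold)" ~/bin/ld

# ~/bin must come before /usr/bin for the symlink to win.
export PATH="$HOME/bin:$PATH"
command -v ld   # should now resolve to the symlink in ~/bin
```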

遥远的她 2024-09-21 08:46:57


Some projects seem to be incompatible with gold, because of some incompatible differences between ld and gold. Example: OpenFOAM; see http://www.openfoam.org/mantisbt/view.php?id=685.

娇女薄笑 2024-09-21 08:46:57


DragonFlyBSD switched over to gold as their default linker. So it seems to be ready for a variety of tools.
More details:
http://phoronix.com/scan.php?page=news_item&px=DragonFlyBSD-Gold-Linker
