When should I use memcpy in C++ instead of standard operators?

Posted 2024-10-09 17:33:07

When can I get better performance using memcpy, and how do I benefit from using it?
For example:

float a[3]; float b[3];

is this code:

memcpy(a, b, 3*sizeof(float));

faster than this one?

a[0] = b[0];
a[1] = b[1];
a[2] = b[2];

7 Answers

浪漫人生路 2024-10-16 17:33:07

Efficiency should not be your concern.
Write clean, maintainable code.

It bothers me that so many answers indicate that memcpy() is inefficient. It is designed to be the most efficient way of copying blocks of memory (for C programs).

So I wrote the following as a test:

#include <algorithm>
#include <cstring>  // memcpy

extern float a[3];
extern float b[3];
extern void base();

int main()
{
    base();

#if defined(M1)
    a[0] = b[0];
    a[1] = b[1];
    a[2] = b[2];
#elif defined(M2)
    memcpy(a, b, 3*sizeof(float));    
#elif defined(M3)
    std::copy(&a[0], &a[3], &b[0]);
#endif

    base();
}

Then compared the code each variant produces:

g++ -O3 -S xr.cpp -o s0.s
g++ -O3 -S xr.cpp -o s1.s -DM1
g++ -O3 -S xr.cpp -o s2.s -DM2
g++ -O3 -S xr.cpp -o s3.s -DM3

echo "=======" >  D
diff s0.s s1.s >> D
echo "=======" >> D
diff s0.s s2.s >> D
echo "=======" >> D
diff s0.s s3.s >> D

This resulted in: (comments added by hand)

=======   // Copy by hand
10a11,18
>   movq    _a@GOTPCREL(%rip), %rcx
>   movq    _b@GOTPCREL(%rip), %rdx
>   movl    (%rdx), %eax
>   movl    %eax, (%rcx)
>   movl    4(%rdx), %eax
>   movl    %eax, 4(%rcx)
>   movl    8(%rdx), %eax
>   movl    %eax, 8(%rcx)

=======    // memcpy()
10a11,16
>   movq    _a@GOTPCREL(%rip), %rcx
>   movq    _b@GOTPCREL(%rip), %rdx
>   movq    (%rdx), %rax
>   movq    %rax, (%rcx)
>   movl    8(%rdx), %eax
>   movl    %eax, 8(%rcx)

=======    // std::copy()
10a11,14
>   movq    _a@GOTPCREL(%rip), %rsi
>   movl    $12, %edx
>   movq    _b@GOTPCREL(%rip), %rdi
>   call    _memmove

Timing results for running the above inside a loop of 1,000,000,000 iterations:

   g++ -c -O3 -DM1 X.cpp
   g++ -O3 X.o base.o -o m1
   g++ -c -O3 -DM2 X.cpp
   g++ -O3 X.o base.o -o m2
   g++ -c -O3 -DM3 X.cpp
   g++ -O3 X.o base.o -o m3
   time ./m1

   real 0m2.486s
   user 0m2.478s
   sys  0m0.005s
   time ./m2

   real 0m1.859s
   user 0m1.853s
   sys  0m0.004s
   time ./m3

   real 0m1.858s
   user 0m1.851s
   sys  0m0.006s
一口甜 2024-10-16 17:33:07

You can use memcpy only if the objects you're copying have no explicit constructors, and neither do their members (so-called POD, "Plain Old Data"). So it is OK to call memcpy for float, but it is wrong for, e.g., std::string.

But part of the work has already been done for you: std::copy from <algorithm> is specialized for built-in types (and possibly for every other POD-type - depends on STL implementation). So writing std::copy(a, a + 3, b) is as fast (after compiler optimization) as memcpy, but is less error-prone.

束缚m 2024-10-16 17:33:07

Compilers specifically optimize memcpy calls; at least clang and gcc do. So you should prefer it wherever you can.

时光瘦了 2024-10-16 17:33:07

Use std::copy(). As the header file for g++ notes:

This inline function will boil down to a call to @c memmove whenever possible.

Visual Studio's is probably not much different. Go with the normal way, and optimize once you're aware of a bottleneck. In the case of a simple copy, the compiler is probably already optimizing for you.

人间☆小暴躁 2024-10-16 17:33:07

Don't go for premature micro-optimisations such as using memcpy like this. Using assignment is clearer and less error-prone and any decent compiler will generate suitably efficient code. If, and only if, you have profiled the code and found the assignments to be a significant bottleneck then you can consider some kind of micro-optimisation, but in general you should always write clear, robust code in the first instance.

毅然前行 2024-10-16 17:33:07

The benefits of memcpy? Probably readability. Otherwise, you would have to either do a number of assignments or have a for loop for copying, neither of which are as simple and clear as just doing memcpy (of course, as long as your types are simple and don't require construction/destruction).

Also, memcpy implementations are generally heavily optimized for the target platform, to the point that it won't be all that much slower than simple assignment, and may even be faster.

零崎曲识 2024-10-16 17:33:07

Supposedly, as Nawaz said, the assignment version should be faster on most platforms. That's because memcpy() will copy byte by byte while the second version could copy 4 bytes at a time.

As is always the case, you should profile your application to make sure that what you expect to be the bottleneck matches reality.

Edit
The same applies to dynamic arrays. Since you mention C++, you should use the std::copy() algorithm in that case.

Edit
This is the code output on Windows XP with GCC 4.5.0, compiled with the -O3 flag:

extern "C" void cpy(float* d, float* s, size_t n)
{
    memcpy(d, s, sizeof(float)*n);
}

I wrote this as a function because the OP mentioned dynamic arrays too.

Output assembly is the following:

_cpy:
LFB393:
    pushl   %ebp
LCFI0:
    movl    %esp, %ebp
LCFI1:
    pushl   %edi
LCFI2:
    pushl   %esi
LCFI3:
    movl    8(%ebp), %eax
    movl    12(%ebp), %esi
    movl    16(%ebp), %ecx
    sall    $2, %ecx
    movl    %eax, %edi
    rep movsb
    popl    %esi
LCFI4:
    popl    %edi
LCFI5:
    leave
LCFI6:
    ret

Of course, I assume all of the experts here know what rep movsb means.

This is the assignment version:

extern "C" void cpy2(float* d, float* s, size_t n)
{
    while (n > 0) {
        d[n] = s[n];
        n--;
    }
}

which yields the following code:

_cpy2:
LFB394:
    pushl   %ebp
LCFI7:
    movl    %esp, %ebp
LCFI8:
    pushl   %ebx
LCFI9:
    movl    8(%ebp), %ebx
    movl    12(%ebp), %ecx
    movl    16(%ebp), %eax
    testl   %eax, %eax
    je  L2
    .p2align 2,,3
L5:
    movl    (%ecx,%eax,4), %edx
    movl    %edx, (%ebx,%eax,4)
    decl    %eax
    jne L5
L2:
    popl    %ebx
LCFI10:
    leave
LCFI11:
    ret

This version moves 4 bytes at a time.
