How can I get the ICC compiler to generate SSE instructions in an inner loop?


I have an inner loop such as this

for (i = 0; i < n; i++) {
    x[0] += A[i] * z[0];
    x[1] += A[i] * z[1];
    x[2] += A[i] * z[2];
    x[3] += A[i] * z[3];
}

The four instructions in the loop body can easily be converted to SSE instructions by a compiler. Do current compilers do this? If they do, what do I have to do to force this on the compiler?


Comments (2)

ζ澈沫 2024-11-25 21:46:13


From what you've provided, this can't be vectorized, because the pointers could alias each other, i.e. the x array could overlap with A or z.

A simple way to help the compiler out would be to declare x as __restrict. Another way would be to rewrite it like so:

for (i = 0; i < n; i++)
{
    float Ai = A[i];
    float z0 = z[0], z1 = z[1], z2 = z[2], z3 = z[3];
    x[0] += Ai * z0;
    x[1] += Ai * z1;
    x[2] += Ai * z2;
    x[3] += Ai * z3;
}

I've never actually tried to get a compiler to auto-vectorize code, so I don't know if that will do it or not. Even if it doesn't get vectorized, it should be faster since the loads and stores can be ordered more efficiently and without causing a load-hit-store.

If you have more information than the compiler does (e.g. whether or not your pointers are 16-byte aligned), you should be able to use that to your advantage (e.g. by using aligned loads). Note that I'm not saying you should always try to beat the compiler, only when you know more than it does.
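
For example, a hand-vectorized sketch with SSE intrinsics might look like this, assuming x, A and z are 16-byte aligned and do not overlap (the function name is illustrative):

#include <xmmintrin.h>

/* Sketch: assumes x and z are 16-byte aligned and none of the arrays
   overlap; _mm_load_ps/_mm_store_ps fault on unaligned addresses. */
void accumulate_sse(float *x, const float *A, const float *z, int n)
{
    __m128 vx = _mm_load_ps(x);          /* aligned load of x[0..3]   */
    const __m128 vz = _mm_load_ps(z);    /* aligned load of z[0..3]   */
    for (int i = 0; i < n; i++) {
        __m128 vA = _mm_set1_ps(A[i]);   /* broadcast A[i] to 4 lanes */
        vx = _mm_add_ps(vx, _mm_mul_ps(vA, vz));
    }
    _mm_store_ps(x, vx);                 /* aligned store of x[0..3]  */
}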


温柔戏命师 2024-11-25 21:46:13

ICC auto-vectorizes the code snippet below for SSE2 by default:

void foo(float *__restrict__ x, float *__restrict__ A, float *__restrict__ z, int n) {
    for (int i = 0; i < n; i++) {
        x[0] += A[i] * z[0];
        x[1] += A[i] * z[1];
        x[2] += A[i] * z[2];
        x[3] += A[i] * z[3];
    }
    return;
}

With the restrict keyword, the compiler no longer has to assume that the pointers may alias each other. The vectorization report generated is:

$ icpc test.cc -c -vec-report2 -S
test.cc(2): (col. 1) remark: PERMUTED LOOP WAS VECTORIZED
test.cc(3): (col. 2) remark: loop was not vectorized: not inner loop

To confirm that SSE instructions are generated, open the generated assembly (test.s); you will find instructions like the following:

..B1.13:                        # Preds ..B1.13 ..B1.12
        movaps    (%rsi,%r15,4), %xmm10                         #3.10
        movaps    16(%rsi,%r15,4), %xmm11                       #3.10
        mulps     %xmm0, %xmm10                                 #3.17
        mulps     %xmm0, %xmm11                                 #3.17
        addps     %xmm10, %xmm9                                 #3.2
        addps     %xmm11, %xmm6                                 #3.2
        movaps    32(%rsi,%r15,4), %xmm12                       #3.10
        movaps    48(%rsi,%r15,4), %xmm13                       #3.10
        movaps    64(%rsi,%r15,4), %xmm14                       #3.10
        movaps    80(%rsi,%r15,4), %xmm15                       #3.10
        movaps    96(%rsi,%r15,4), %xmm10                       #3.10
        movaps    112(%rsi,%r15,4), %xmm11                      #3.10
        addq      $32, %r15                                     #2.1
        mulps     %xmm0, %xmm12                                 #3.17
        cmpq      %r13, %r15                                    #2.1
        mulps     %xmm0, %xmm13                                 #3.17
        mulps     %xmm0, %xmm14                                 #3.17
        addps     %xmm12, %xmm5                                 #3.2
        mulps     %xmm0, %xmm15                                 #3.17
        addps     %xmm13, %xmm4                                 #3.2
        mulps     %xmm0, %xmm10                                 #3.17
        addps     %xmm14, %xmm7                                 #3.2
        mulps     %xmm0, %xmm11                                 #3.17
        addps     %xmm15, %xmm3                                 #3.2
        addps     %xmm10, %xmm2                                 #3.2
        addps     %xmm11, %xmm1                                 #3.2
        jb        ..B1.13       # Prob 75%                      #2.1
                                # LOE rax rdx rsi r8 r9 r10 r13 r15 ecx ebp edi r11d r14d bl xmm0 xmm1 xmm2 xmm3 xmm4 xmm5 xmm6 xmm7 xmm8 xmm9
..B1.14:                        # Preds ..B1.13
        addps     %xmm6, %xmm9                                  #3.2
        addps     %xmm4, %xmm5                                  #3.2
        addps     %xmm3, %xmm7                                  #3.2
        addps     %xmm1, %xmm2                                  #3.2
        addps     %xmm5, %xmm9                                  #3.2
        addps     %xmm2, %xmm7                                  #3.2
        lea       1(%r14), %r12d                                #2.1
        cmpl      %r12d, %ecx                                   #2.1
        addps     %xmm7, %xmm9                                  #3.2
        jb        ..B1.25       # Prob 50%                      #2.1
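
If restrict-qualifying the pointers is not an option, ICC also accepts a per-loop dependence hint; a hedged sketch using #pragma ivdep is shown below (the function name is illustrative). Whether it vectorizes this particular loop should be verified with the -vec-report2 output as above.

/* Sketch assuming ICC's #pragma ivdep: the pragma tells the compiler to
   ignore assumed (unproven) dependences in the next loop, which can allow
   auto-vectorization without restrict-qualified pointers. */
void foo_ivdep(float *x, float *A, float *z, int n)
{
#pragma ivdep
    for (int i = 0; i < n; i++) {
        x[0] += A[i] * z[0];
        x[1] += A[i] * z[1];
        x[2] += A[i] * z[2];
        x[3] += A[i] * z[3];
    }
}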