CUDA: erroneous lmem statistics reported for sm_20?
A CUDA kernel compiled with the option --ptxas-options=-v seems to display erroneous lmem (local memory) statistics when the sm_20 GPU architecture is specified. The same kernel gives meaningful lmem statistics with the sm_10 / sm_11 / sm_12 / sm_13 architectures.
Can someone clarify whether the sm_20 lmem statistics need to be read differently, or whether they are plain wrong?
Here is the kernel:
__global__ void fooKernel( int* dResult )
{
    const int num = 1000;
    int val[num];                      // 1000 ints = 4000 bytes per thread

    for ( int i = 0; i < num; ++i )
        val[i] = i * i;

    int result = 0;
    for ( int i = 0; i < num; ++i )
        result += val[i];

    *dResult = result;
    return;
}
--ptxas-options=-v and sm_20 report:
1>ptxas info : Compiling entry function '_Z9fooKernelPi' for 'sm_20'
1>ptxas info : Used 5 registers, 4+0 bytes lmem, 36 bytes cmem[0]
--ptxas-options=-v and sm_10 / sm_11 / sm_12 / sm_13 report:
1>ptxas info : Compiling entry function '_Z9fooKernelPi' for 'sm_10'
1>ptxas info : Used 3 registers, 4000+0 bytes lmem, 4+16 bytes smem, 4 bytes cmem[1]
sm_20 reports an lmem figure of 4 bytes, which is simply not possible given the 4 x 1000-byte array used in the kernel. The older GPU architectures report the correct 4000-byte lmem statistic.
This was tried with CUDA 3.2. I have referred to the Printing Code Generation Statistics section of the NVCC manual (v3.2), but it does not help explain this anomaly.
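For reference, outside of the Visual Studio build shown above, the same statistics can be obtained with an nvcc invocation along these lines (the source file name fooKernel.cu is assumed here):

nvcc -c -arch=sm_20 --ptxas-options=-v fooKernel.cu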
1 Answer
The compiler is correct. Through clever optimization the array doesn't need to be stored at all. What you are doing is essentially computing result += i * i without ever storing the temporaries to val. A look at the generated PTX code won't show any difference between sm_10 and sm_20; decompiling the generated cubins with decuda will reveal the optimization.
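In other words, on sm_20 the kernel behaves roughly as if it had been written like the sketch below (an illustrative equivalent only, not actual compiler output; the name fooKernelFolded is made up):

__global__ void fooKernelFolded( int* dResult )
{
    // The val[] array has been eliminated entirely; each i * i term is
    // consumed as soon as it is produced, so only a few registers are needed.
    int result = 0;
    for ( int i = 0; i < 1000; ++i )
        result += i * i;
    *dResult = result;
}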
BTW: Try to avoid local memory! It is as slow as global memory.
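If you want to confirm that the small sm_20 figure is a consequence of this optimization rather than a broken counter, one untested sketch is to make the array accesses depend on a runtime value, so that ptxas cannot eliminate the array (the name fooKernelForced and the use of dResult[0] as a seed are assumptions for illustration):

__global__ void fooKernelForced( int* dResult )
{
    const int num = 1000;
    int val[num];

    // A value only known at run time keeps the loops from being folded away.
    const unsigned int seed = dResult[0];

    for ( int i = 0; i < num; ++i )
        val[i] = i * i;

    // Data-dependent indexing prevents the array from being promoted to
    // registers, so it has to be materialized in local memory.
    int result = 0;
    for ( int i = 0; i < num; ++i )
        result += val[(i + seed) % num];

    *dResult = result;
}

Compiling this variant with --ptxas-options=-v for sm_20 should, if this reasoning holds, report roughly 4000 bytes of lmem again.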