Why does padding a shared memory array by one column make the kernel 40% faster?


Why is this matrix transpose kernel faster when the shared memory array is padded by one column?

I found the kernel at PyCuda/Examples/MatrixTranspose.

import pycuda.gpuarray as gpuarray
import pycuda.autoinit
import pycuda.driver
from pycuda.compiler import SourceModule
import numpy

block_size = 16    

def _get_transpose_kernel(offset):
    mod = SourceModule("""
    #define BLOCK_SIZE %(block_size)d
    #define A_BLOCK_STRIDE (BLOCK_SIZE * a_width)
    #define A_T_BLOCK_STRIDE (BLOCK_SIZE * a_height)

    __global__ void transpose(float *A_t, float *A, int a_width, int a_height)
    {
        // Base indices in A and A_t
        int base_idx_a   = blockIdx.x * BLOCK_SIZE + blockIdx.y * A_BLOCK_STRIDE;
        int base_idx_a_t = blockIdx.y * BLOCK_SIZE + blockIdx.x * A_T_BLOCK_STRIDE;

        // Global indices in A and A_t
        int glob_idx_a   = base_idx_a + threadIdx.x + a_width * threadIdx.y;
        int glob_idx_a_t = base_idx_a_t + threadIdx.x + a_height * threadIdx.y;

        /** why does the +1 offset make the kernel faster? **/
        __shared__ float A_shared[BLOCK_SIZE][BLOCK_SIZE+%(offset)d]; 

        // Store transposed submatrix to shared memory
        A_shared[threadIdx.y][threadIdx.x] = A[glob_idx_a];

        __syncthreads();

        // Write transposed submatrix to global memory
        A_t[glob_idx_a_t] = A_shared[threadIdx.x][threadIdx.y];
    }
    """% {"block_size": block_size, "offset": offset})

    kernel = mod.get_function("transpose")
    # Block dimensions are supplied at launch time via prepared_call
    kernel.prepare("PPii")
    return kernel


def transpose(tgt, src, offset):
    krnl = _get_transpose_kernel(offset)
    w, h = src.shape
    assert tgt.shape == (h, w)
    assert w % block_size == 0
    assert h % block_size == 0
    krnl.prepared_call((w // block_size, h // block_size),
                       (block_size, block_size, 1),
                       tgt.gpudata, src.gpudata, w, h)


def run_benchmark():
    from pycuda.curandom import rand
    print(pycuda.autoinit.device.name())
    print("time\tGB/s\tsize\toffset")
    for offset in [0, 1]:
        for size in [2048, 2112]:
            source = rand((size, size), dtype=numpy.float32)
            target = gpuarray.empty((size, size), dtype=source.dtype)

            start = pycuda.driver.Event()
            stop = pycuda.driver.Event()

            # Warm-up launches so compilation does not skew the timing
            warmup = 2
            for i in range(warmup):
                transpose(target, source, offset)

            pycuda.driver.Context.synchronize()
            start.record()

            count = 10
            for i in range(count):
                transpose(target, source, offset)

            stop.record()
            stop.synchronize()

            elapsed_seconds = stop.time_since(start) * 1e-3
            # Each transpose reads and writes the full array once
            mem_bw = source.nbytes / elapsed_seconds * 2 * count / 1024 / 1024 / 1024

            print("%6.4fs\t%6.4f\t%i\t%i" % (elapsed_seconds, mem_bw, size, offset))


run_benchmark()

Output

Quadro FX 580
time    GB/s    size    offset  
0.0802s 3.8949  2048    0
0.0829s 4.0105  2112    0
0.0651s 4.7984  2048    1
0.0595s 5.5816  2112    1
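For size 2112 the padded kernel reaches 5.5816 / 4.0105 ≈ 1.39 times the unpadded bandwidth, which is roughly the 40% speedup in the title; for size 2048 the gain is about 23% (4.7984 / 3.8949 ≈ 1.23).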

(Code adapted from PyCuda/Examples/MatrixTranspose.)



Answer


The answer is shared memory bank conflicts. The CUDA hardware you are using arranges shared memory into 16 banks, and shared memory is sequentially "striped" across those 16 banks. If two threads try to access the same bank simultaneously, a conflict occurs and the accesses must be serialized. That is what you are seeing here. By extending the stride of the shared memory array by 1, you ensure that the same column index in successive rows of the shared array falls on a different bank, which eliminates most of the possible conflicts.
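To make this concrete, here is a minimal sketch (plain Python, no GPU required, an illustration rather than part of the original example) of which bank each thread of a half-warp hits during the write-back read A_shared[threadIdx.x][threadIdx.y]. It assumes 16 banks of 32-bit words, so that bank = word_address % 16:

# Hypothetical illustration: bank index hit by each of the 16 threads of a
# half-warp reading one column of A_shared (row varies, column is fixed).
# Assumes 16 banks of 32-bit words, i.e. bank = word_address % 16.
BANKS = 16
BLOCK_SIZE = 16
c = 0  # any fixed column gives the same pattern

for stride in [BLOCK_SIZE, BLOCK_SIZE + 1]:  # offset 0 vs. offset 1
    banks = [(row * stride + c) % BANKS for row in range(BLOCK_SIZE)]
    print("stride %2i -> banks %s" % (stride, banks))

With a stride of 16 all sixteen accesses land on the same bank (a 16-way conflict that is serialized into sixteen transactions); with a stride of 17 they spread across all sixteen banks and can be serviced together.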

This phenomenon (and an associated global memory phenomenon called partition camping) is discussed in great depth in the "Optimizing Matrix Transpose in CUDA" paper, which ships with the SDK matrix transpose example. It is well worth reading.
